Efficient Android Threading



Application Architecture

The cornerstones of an application are the Application object and the Android components: Activity, Service, BroadcastReceiver, and ContentProvider.

Application

The representation of an executing application in Java is the android.app.Application object, which is instantiated upon application start and destroyed when the application stops (i.e., an instance of the Application class lasts for the lifetime of the Linux process of the application).

Components

The components have different responsibilities and lifecycles, but they all represent application entry points where the application can be started. Once a component is started, it can trigger another component, and so on, throughout the application's lifecycle. A component is triggered to start with an Intent, either within the application or between applications. !!! An application implements a component by subclassing it, and all components in an application must be registered in the AndroidManifest.xml file.

Activity

An Activity is a screen—almost always taking up the device’s full screen—shown to the user. When the user navigates between screens, Activity instances form a stack. Navigation to a new screen pushes an Activity to the stack, whereas backward navigation causes a corresponding pop.
• Active in the foreground: D
• Paused and partly visible: C
• Stopped and invisible: B
• Inactive and destroyed: A

An Activity lifecycle ends either when the user navigates back—for example, presses the back button—or when the Activity explicitly calls finish().

Service

A Service can execute invisibly in the background without direct user interaction. A Service can be executed in either a started or a bound mode:

Started Service

The Service is started with a call to Context.startService(Intent).

Bound Service

Multiple components can bind to a Service through Context.bindService(Intent, ServiceConnection, int). After the binding, a component can interact with the Service through the ServiceConnection interface.

ContentProvider

An application that wants to share substantial amounts of data within or between applications can utilize a ContentProvider. It can provide access to any data source, but it is most commonly used in collaboration with SQLite databases, which are always private to an application. With the help of a ContentProvider, an application can publish that data to applications that execute in remote processes.

BroadcastReceiver

This component has a very restricted function: it listens for intents sent from within the application, from remote applications, or from the platform. It filters incoming intents to determine which ones are delivered to the BroadcastReceiver. A BroadcastReceiver should be registered dynamically when you want to start listening for intents, and unregistered when it stops listening. If it is statically registered in the AndroidManifest, it listens for intents whenever the application is installed. Thus, a statically registered BroadcastReceiver can start its associated application if an Intent matches its filter.
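As a minimal sketch of dynamic registration (the action string and class name are invented for this example and not taken from the text; imports omitted, as in the other listings), a receiver can be registered while an Activity is in the foreground and unregistered when it leaves it:

public class StatusActivity extends Activity {

    // Hypothetical broadcast action used only for this sketch.
    private static final String ACTION_STATUS = "com.example.ACTION_STATUS";

    private final BroadcastReceiver mReceiver = new BroadcastReceiver() {
        @Override
        public void onReceive(Context context, Intent intent) {
            // React to the broadcast; by default this runs on the UI thread.
        }
    };

    @Override
    protected void onResume() {
        super.onResume();
        // Start listening for matching intents.
        registerReceiver(mReceiver, new IntentFilter(ACTION_STATUS));
    }

    @Override
    protected void onPause() {
        super.onPause();
        // Stop listening; a dynamically registered receiver must be unregistered.
        unregisterReceiver(mReceiver);
    }
}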

Application Execution

By default, applications and processes have a one-to-one relationship, but if required, it is possible for an application to run in several processes, or for several applications to run in the same process. The application lifecycle is encapsulated within its Linux process, which, in Java, maps to the android.app.Application class.
The Application object for each app starts when the runtime calls its onCreate() method. Ideally, the app terminates with a call by the runtime to its onTerminate(), but an application cannot rely upon this: the underlying Linux process may have been killed before the runtime had a chance to call onTerminate(). The Application object is the first component to be instantiated in a process and the last to be destroyed.

An application is started when one of its components is initiated for execution:

1. Start the Linux process.
2. Create the runtime.
3. Create the Application instance.
4. Create the entry point component for the application.

A process is created at the start of the application and finishes when the system wants to free up resources. Because a user may request an application at any later time, the runtime avoids destroying all its resources until the number of live applications leads to an actual shortage of resources across the system. Hence, an application isn't automatically terminated even when all of its components have been destroyed. When the system is low on resources, it's up to the runtime to decide which process should be killed. With the highest rank first, the process ranks are:

Foreground
The application has a visible component in front, a Service is bound to an Activity in front in a remote process, or a BroadcastReceiver is running.

Visible
The application has a visible component but is partly obscured.

Service
A Service is executing in the background and is not tied to a visible component.

Background
A nonvisible Activity. This is the process level that contains most applications.

Empty
A process without active components. Empty processes are kept around to improve startup times, but they are the first to be terminated when the system reclaims resources.

In practice, the ranking system ensures that no visible applications will be terminated by the platform when it runs out of resources.

Long operations should be handled on a background thread. Long-running tasks typically include:

• Network communication
• Reading or writing to a file
• Creating, deleting, and updating elements in databases
• Reading or writing to SharedPreferences
• Image processing
• Text parsing

Multithreading in Java

A CPU can process instructions from one thread at a time, but a system normally has multiple threads that require processing at the same time, such as a system with multiple simultaneously running applications. For the user to perceive that applications can run in parallel, the CPU has to share its processing time between the application threads. The sharing of a CPU's processing time is handled by a scheduler. Two concurrently running threads—executed by a single processor—are split into execution intervals.
Threads come with an overhead in terms of memory and processor usage. Each thread allocates a private memory area that is mainly used to store method local variables and parameters during the execution of the method. The private memory area is allocated when the thread is created and deallocated once the thread terminates (i.e., as long as the thread is active, it holds on to system resources—even if it is idle or blocked).

The intrinsic lock acts as a monitor. The Java monitor can be modeled with three states:

Blocked
Threads that are suspended while they wait for the monitor to be released by another thread.

Executing
The one and only thread that owns the monitor and is currently running the code in the critical section.

Waiting
Threads that have voluntarily given up ownership of the monitor before reaching the end of the critical section. The threads are waiting to be signalled before they can take ownership again.

A thread moves between these states as follows:

Acquire the lock
If no other thread owns the monitor, a blocked thread can take ownership and execute in the critical section. If there is more than one blocked thread, the scheduler selects which thread to execute.

Release the lock and wait
The thread suspends itself through Object.wait() because it wants to wait for a condition to be fulfilled before it continues to execute.

Acquire the lock after signal
Waiting threads are signalled from another thread through Object.notify() or Object.notifyAll() and can take ownership of the monitor again if selected by the scheduler. However, the waiting threads have no precedence over potentially blocked threads that also want to own the monitor.

Using explicit locking mechanisms

If a more advanced locking strategy is needed, the ReentrantLock or ReentrantReadWriteLock classes (from java.util.concurrent.locks) can be used instead of the synchronized keyword. Critical sections are protected by explicitly locking and unlocking regions in the code:

int sharedResource;
private ReentrantLock mLock = new ReentrantLock();

public void changeState() {
    mLock.lock();
    try {
        sharedResource++;
    } finally {
        mLock.unlock();
    }
}

The synchronized keyword and ReentrantLock have the same semantics: they both block all threads trying to execute a critical section if another thread has already entered that region.
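The paragraph above names ReentrantReadWriteLock but only shows ReentrantLock. A minimal sketch of the read/write variant (field names are illustrative), which lets multiple readers proceed concurrently while writers get exclusive access:

private int sharedResource;
private final ReentrantReadWriteLock mRwLock = new ReentrantReadWriteLock();

public int readState() {
    mRwLock.readLock().lock();   // Several threads may hold the read lock at once.
    try {
        return sharedResource;
    } finally {
        mRwLock.readLock().unlock();
    }
}

public void changeState() {
    mRwLock.writeLock().lock();  // Exclusive: blocks both readers and other writers.
    try {
        sharedResource++;
    } finally {
        mRwLock.writeLock().unlock();
    }
}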

Concurrent Execution Design

Basic principles include:

• Favoring reuse of threads instead of always creating new threads, so that the frequency of creation and teardown of resources can be reduced.
• Not using more threads than required. The more threads that are used, the more memory and processor time is consumed.
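A common way to apply both principles is a fixed-size thread pool from java.util.concurrent; a sketch (pool size and task are illustrative), not something prescribed by the text:

// Reuse a small, bounded set of worker threads instead of creating one per task.
ExecutorService executor = Executors.newFixedThreadPool(4);

executor.execute(new Runnable() {
    @Override
    public void run() {
        // Long-running work executed on a pooled background thread.
    }
});

// Stop accepting new tasks and let the running ones finish.
executor.shutdown();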

CHAPTER 3 Threads on Android

All application threads are based on the native pthreads in Linux with a Thread representation in Java. From an application perspective, the thread types are:

• UI thread
• Binder threads
• Background threads

The UI thread is the main thread of the application, used for executing Android components and updating the UI elements on the screen. If the platform detects that UI updates are attempted from any other thread, it will promptly notify the application by throwing a CalledFromWrongThreadException. This harsh platform behavior is required because the Android UI Toolkit is not thread safe, so the runtime allows access to the UI elements from one thread only.

Binder threads are used for communicating between threads in different processes. All the threads that an application explicitly creates are background threads. The background threads are descendants of the UI thread, so they inherit the UI thread properties, such as its priority. In the application, the use cases for the UI thread and worker threads are quite different, but in Linux they are both plain native threads and are handled equally. The constraints on the UI thread—that it should handle all UI updates—are enforced by the Window Manager in the Application Framework and not by Linux.

The Linux Process and Threads

Each running application has an underlying Linux process, which has the following properties:

• User ID (UID): A process has a unique user identifier that represents a user on a Linux system. Linux is a multiuser system, and on Android, each application represents a user in this system. When the application is installed, it is assigned a user ID.
• Process identifier (PID): A unique identifier for the process.
• Parent process identifier (PPID): After system startup, each process is created from another process, so the running system forms a tree hierarchy of processes. Hence, each application process has a parent process. For Android, the parent of all application processes is the Zygote.
• Stack: Local function pointers and variables. (A LIFO memory area allocated to the program for temporary storage of return addresses during function and procedure calls, passed parameters, and local variables.)
• Heap: The address space allocated to a process. The address space is kept private to a process and can't be accessed by other processes. (The heap is the area of dynamically allocated memory used for data structures whose size cannot be determined before the program runs.)

Threads and processes are very much alike, with the difference between them coming in the sharing of resources. An important distinction between processes and threads is that processes don’t share address space with each other, but threads share the address space within a process. This memory sharing makes it a lot faster to communicate between threads than between processes. When a process starts, a single thread is automatically created for that process. A process always contains at least one thread to handle its execution. In Android, the thread created automatically in a process is the one we’ve already seen as the UI thread.

Scheduling

Linux treats threads and not processes as the fundamental unit for execution. Hence, scheduling on Android concerns threads and not processes. Scheduling allocates execution time for threads on a processor. The scheduler decides which thread should execute and for how long it should be allowed to execute before it picks a new thread to execute and a context switch occurs. In Android, the application threads are scheduled by the standard scheduler in the Linux kernel and not by the Dalvik virtual machine. In practice, this means that the threads in our application are competing not only directly with each other for execution time, but also against all threads in all the other applications.

The platform mainly has two ways of affecting the thread scheduling:

1. Change the Linux thread priority.
2. Change the Android-specific control group.

An application can change the priority of threads through two classes:

java.lang.Thread:
    setPriority(int priority);

android.os.Process:
    Process.setThreadPriority(int priority);               // Calling thread.
    Process.setThreadPriority(int threadId, int priority); // Thread with a specific id.

Control groups

Android defines multiple control groups, but the most important ones for applications are the Foreground Group and the Background Group. Threads in the Foreground Group are allocated a lot more execution time than threads in the Background Group, and Android utilizes this to ensure that visible applications on the screen get more processor allocation than applications that are not visible on the screen. If an application runs at the Foreground or Visible process level, the threads created by that application will belong to the Foreground Group and receive most of the total processing time, while the remaining time will be divided among the threads in the other applications. Lowering the priority of a thread with Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND) will not only reduce the priority but also ensure that this thread is decoupled from the process level of the application and always put in the Background Group.
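A minimal sketch of that call in context, lowering a worker thread's priority so it is always placed in the Background Group regardless of the application's process level:

new Thread(new Runnable() {
    @Override
    public void run() {
        // android.os.Process: lower the Linux priority and move to the Background Group.
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
        // Perform the long-running work here.
    }
}).start();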

CHAPTER 4 Thread Communication

Pipes

Pipes are a part of the java.io package. That is, they are general Java functionality and not Android specific. A pipe provides a way for two threads, within the same process, to connect and establish a one-way data channel. A producer thread writes data to the pipe, whereas a consumer thread reads data from the pipe. The pipe itself is a circular buffer allocated in memory, available only to the two connected threads. No other threads can access the data. The pipe is also one-directional, permitting just one thread to write and the other to read (Figure 4-1).
Pipes are typically used when you have two long-running tasks and one has to offload data to the other continuously. A pipe can transfer either binary or character data. Binary data transfer is represented by PipedOutputStream (in the producer) and PipedInputStream (in the consumer), whereas character data transfer is represented by PipedWriter (in the producer) and PipedReader (in the consumer).

Basic pipe use:

1. Set up the connection:

    PipedReader r = new PipedReader();
    PipedWriter w = new PipedWriter();
    w.connect(r);

Here, the connection is established by the writer connecting to the reader. The default buffer size is 1024 but is configurable from the consumer side of the pipe:

    PipedReader r = new PipedReader(1024 * 4);

2. Pass the reader to a processing thread:

    Thread t = new MyReaderThread(r);
    t.start();

After the reader thread starts, it is ready to receive data from the writer.

3. Transfer data:

    // Producer thread: Flush the pipe after a write.
    w.write('A');
    w.flush();

    // Consumer thread: Read the data in a loop.
    int i;
    while ((i = r.read()) != -1) {
        char c = (char) i;
        // Handle received data
    }

Calling flush() after a write to the pipe notifies the consumer thread that new data is available.

4. Close the connection. When the communication phase is finished, the pipe should be disconnected:

    // Producer thread: Close the writer.
    w.close();

    // Consumer thread: Close the reader.
    r.close();

If the writer and reader are connected, it's enough to close only one of them.

!!! Be careful when involving the UI thread with pipes, due to the possible blocking of calls if the pipe is either full (producer blocks on its write() call) or empty (consumer blocks on its read() call).

Shared Memory

Shared memory (using the memory area known in programming as the heap) is a common way to pass information between threads. All threads in an application can access the same address space within the process. Hence, if one thread writes a value to a variable in the shared memory, it can be read by all the other threads. If a thread stores data in a local variable, no other thread can see it. Objects are stored in the shared memory if they are scoped as one of the following:

• Instance member variables
• Class member variables
• Objects declared in methods

An object declared in a method is accessible from multiple threads only if the method publishes the reference outside the method scope, for example, by passing the reference to another object's method.
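A minimal sketch of the distinction (class and field names are made up): the instance field lives on the heap and is visible to every thread that holds a reference to the object, whereas the local variable is confined to the executing thread's stack:

public class SharedState {

    // Instance member: stored on the heap, readable by any thread
    // that has a reference to this SharedState object.
    private int mCounter;

    public void update() {
        int local = mCounter + 1; // Local variable: private to the calling thread.
        mCounter = local;         // Written back to shared memory; concurrent access
                                  // requires synchronization (see the next sections).
    }
}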

Signaling

When threads communicate through state variables in shared memory, they could poll the state value to detect changes. A more efficient mechanism is the Java library's built-in signaling, which lets a thread notify other threads of changes in the state. The signaling mechanism varies depending on the synchronization type:
If the shared state is protected with synchronization on the intrinsic lock, check the condition before calling wait():

synchronized(this) {
    while (isConditionFulfilled == false) {
        wait();
    }
    // When the execution reaches this point,
    // the state is correct.
}

This pattern checks whether the condition predicate is fulfilled. If not, the thread blocks by calling wait(). When another thread notifies on the monitor and the waiting thread wakes up, it checks again whether the condition has been fulfilled and, if not, it blocks again, waiting for a new signal.
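The producing side of the same pattern, a sketch assuming the same isConditionFulfilled flag, updates the state and signals the waiting threads while holding the same monitor:

synchronized(this) {
    isConditionFulfilled = true;
    // Wake up all threads waiting on this monitor; they recheck the condition.
    notifyAll();
}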

BlockingQueue

The BlockingQueue acts as the coordinator between the producer and consumer threads, wrapping a list implementation together with thread signaling. The list contains a configurable number of elements that the producing threads fill with arbitrary data messages. On the other side, the consumer threads extract the messages in the order that they were enqueued and then process them.
The consumer-producer pattern is easily implemented with the LinkedBlockingQueue implementation by adding messages to the queue with put() and removing them with take(), where put() blocks the caller if the queue is full, and take() blocks the caller if the queue is empty:

public class ConsumerProducer {

    private final int LIMIT = 10;
    private BlockingQueue<Integer> blockingQueue = new LinkedBlockingQueue<Integer>(LIMIT);

    public void produce() throws InterruptedException {
        int value = 0;
        while (true) {
            blockingQueue.put(value++);
        }
    }

    public void consume() throws InterruptedException {
        while (true) {
            int value = blockingQueue.take();
        }
    }
}

Android Message Passing

The mechanisms discussed so far—pipes, shared memory, and blocking queues—apply to Android applications but impose problems for the UI thread because of their tendency to block. The Android platform therefore defines its own message passing mechanism for communication between threads. The UI thread can offload long tasks by sending data messages to be processed on background threads. The message passing mechanism is a nonblocking consumer-producer pattern, where neither the producer thread nor the consumer thread will block during the message handoff. The message handling mechanism is fundamental in the Android platform and the API is located in the android.os package, with a set of classes shown in Figure 4-4 that implement the functionality.
android.os.Looper
A message dispatcher associated with the one and only consumer thread.

android.os.Handler
Consumer thread message processor, and the interface for a producer thread to insert messages into the queue. A Looper can have many associated handlers, but they all insert messages into the same queue.

android.os.MessageQueue
Unbounded linked list of messages to be processed on the consumer thread. Every Looper—and Thread—has at most one MessageQueue.

android.os.Message
Message to be executed on the consumer thread. Messages are inserted by producer threads and processed by the consumer thread, as illustrated in Figure 4-5.
1. Insert: The producer thread inserts messages in the queue by using the Handler connected to the consumer thread.
2. Retrieve: The Looper runs in the consumer thread and retrieves messages from the queue in sequential order.
3. Dispatch: The handlers are responsible for processing the messages on the consumer thread. A thread may have multiple Handler instances for processing messages; the Looper ensures that messages are dispatched to the correct Handler.

Classes Used in Message Passing

MessageQueue

Figure 4-6 illustrates a message queue with three pending messages, sorted with timestamps where t1 < t2 < t3. Only one message has passed the dispatch barrier, which is the current time. Messages eligible for dispatch have a timestamp value less than the current time (represented by “Now” in the figure).
If no message has passed the dispatch barrier when the Looper is ready to retrieve the next message, the consumer thread blocks. Execution is resumed as soon as a message passes the dispatch barrier. The producers can insert new messages in the queue at any time and at any position in the queue. The insert position is based on the timestamp value: if a new message has a lower timestamp than all pending messages in the queue, it will occupy the first position, which is the next to be dispatched. Insertions always conform to the timestamp sorting order.

MessageQueue.IdleHandler

If there is no message to process, a consumer thread has some idle time. For instance, Figure 4-7 illustrates a time slot where the consumer thread is idle. By default, the consumer thread simply waits for new messages during idle time; but instead of waiting, the thread can be utilized to execute other tasks during these idle slots. This feature can be utilized to let noncritical tasks postpone their execution until no other messages are competing for execution time.
An application gets hold of this time slot with the android.os.MessageQueue.IdleHandler interface, a listener that generates callbacks when the consumer thread is idle. The listener is attached to the MessageQueue and detached from it through the following calls:

// Get the message queue of the current thread.
MessageQueue mq = Looper.myQueue();

// Create and register an idle listener.
MessageQueue.IdleHandler idleHandler = new MessageQueue.IdleHandler() {
    @Override
    public boolean queueIdle() {
        return true; // Keep receiving idle callbacks.
    }
};
mq.addIdleHandler(idleHandler);

// Unregister an idle listener.
mq.removeIdleHandler(idleHandler);

The idle handler interface consists of one callback method only:

interface IdleHandler {
    boolean queueIdle();
}

When the message queue detects idle time for the consumer thread, it invokes queueIdle() on all registered IdleHandler instances. It is up to the application to implement the callback responsibly. You should usually avoid long-running tasks because they will delay pending messages during the time they run. The implementation of queueIdle() must return a Boolean value with the following meanings:

true
The idle handler is kept active; it will continue to receive callbacks for successive idle time slots.

false
The idle handler is inactive; it will not receive any more callbacks for successive idle time slots. This is the same thing as removing the listener through MessageQueue.removeIdleHandler().

Using IdleHandler to terminate an unused thread:

private boolean mIsFirstIdle = true;
...
@Override
public void run() {
    Looper.prepare();
    mConsumerHandler = new Handler() {          // (1)
        @Override
        public void handleMessage(Message msg) {
            // Consume data
        }
    };
    Looper.myQueue().addIdleHandler(this);      // (2)
    Looper.loop();                              // (3)
}

@Override
public boolean queueIdle() {
    if (mIsFirstIdle) {                         // (4)
        mIsFirstIdle = false;
        return true;                            // (5)
    }
    mConsumerHandler.getLooper().quit();        // (6)
    return false;
}
...

1. Set up a Handler to be used by the producer for inserting messages in the queue. Here we use the default constructor, so it will bind to the Looper of the current thread. Hence, this Handler can be created only after Looper.prepare(), or it will have nothing to bind to.
2. Register the IdleHandler on the background thread when it is started and the Looper is prepared, so that the MessageQueue is set up.
3. Start dispatching messages from the message queue to the consumer thread. This is a blocking call, so the worker thread will not finish.
4. Let the first queueIdle invocation pass, since it occurs before the first message is received.
5. Return true on the first invocation so that the IdleHandler is still registered.
6. Terminate the thread.

Message

Each item on the MessageQueue is of the android.os.Message class. This is a container object carrying either a data item or a task, never both. Data is processed by the consumer thread, whereas a task is simply executed when it is dequeued, with no other processing involved:

Data message
The data set has multiple parameters that can be handed off to the consumer thread, as shown in Table 4-2.

Task message
The task is represented by a java.lang.Runnable object to be executed on the consumer thread. Task messages cannot contain any data beyond the task itself. The message knows its recipient processor—i.e., the Handler—and can enqueue itself through Message.sendToTarget():

Message m = Message.obtain(handler, runnable);
m.sendToTarget();

A MessageQueue can contain any combination of data and task messages. The consumer thread processes them in a sequential manner, independent of the type. If a message is a data message, the consumer processes the data. Task messages are handled by letting the Runnable execute on the consumer thread, but the consumer thread does not receive a message to be processed in Handler.handleMessage(Message), as it does with data messages.

The application is responsible for creating the message object using one of the following calls.

Explicit object construction:

Message m = new Message();

Factory methods:

Empty message:
Message m = Message.obtain();

Data message:
Message m = Message.obtain(Handler h);
Message m = Message.obtain(Handler h, int what);
Message m = Message.obtain(Handler h, int what, Object o);
Message m = Message.obtain(Handler h, int what, int arg1, int arg2);
Message m = Message.obtain(Handler h, int what, int arg1, int arg2, Object o);

Task message:
Message m = Message.obtain(Handler h, Runnable task);

Copy constructor:
Message m = Message.obtain(Message originalMsg);
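As a small sketch of a data message round trip (the what value and payload are invented for the example), assuming a Handler already bound to the consumer thread's Looper:

// Consumer thread: the Handler bound to this thread's Looper processes the data.
Handler handler = new Handler() {
    @Override
    public void handleMessage(Message msg) {
        if (msg.what == 42) {
            String payload = (String) msg.obj;
            // Process the data on the consumer thread.
        }
    }
};

// Producer thread: obtain a message from the pool, attach data, and enqueue it.
Message m = Message.obtain(handler, 42, "some data");
m.sendToTarget();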

Looper

The android.os.Looper class handles the dispatch of messages in the queue to the associated handler. All messages that have passed the dispatch barrier, as illustrated in Figure 4-6, are eligible for dispatch by the Looper. As long as the queue has messages eligible for dispatch, the Looper will ensure that the consumer thread receives the messages. When no messages have passed the dispatch barrier, the consumer thread will block until a message has passed the dispatch barrier.

The consumer thread does not interact with the message queue directly to retrieve the messages. Instead, a message queue is added to the thread when the Looper has been attached. The Looper manages the message queue and facilitates the dispatch of messages to the consumer thread. By default, only the UI thread has a Looper; threads created in the application need to get a Looper associated explicitly. When the Looper is created for a thread, it is connected to a message queue. The Looper acts as the intermediary between the queue and the thread. The setup is done in the run method of the thread:

class ConsumerThread extends Thread {
    @Override
    public void run() {
        Looper.prepare();  // (1)
        // Handler creation omitted.
        Looper.loop();     // (2)
    }
}

1. The first step is to create the Looper, which is done with the static prepare() method; it will create a message queue and associate it with the current thread. At this point, the message queue is ready for insertion of messages, but they are not dispatched to the consumer thread.
2. Start handling messages in the message queue. This is a blocking method that ensures the run() method is not finished; while run() blocks, the Looper dispatches messages to the consumer thread for processing.

A thread can have only one associated Looper; a runtime error will occur if the application tries to set up a second one. Consequently, a thread can have only one message queue, meaning that messages sent by multiple producer threads are processed sequentially on the consumer thread. Hence, the currently executing message will postpone subsequent messages until it has been processed. Messages with long execution times should not be used if they can delay other important tasks in the queue.

Looper termination

The Looper is requested to stop processing messages with either quit() or quitSafely(): quit() stops the Looper from dispatching any more messages from the queue; all pending messages in the queue, including those that have passed the dispatch barrier, will be discarded. quitSafely(), on the other hand, only discards the messages that have not passed the dispatch barrier. Pending messages that are eligible for dispatch will be processed before the Looper is terminated.

!!! quitSafely was added in API level 18 (Jelly Bean 4.3). Previous API levels only support quit.

Terminating a Looper does not terminate the thread; it merely exits Looper.loop() and lets the thread resume running in the method that invoked the loop call. But you cannot start the old Looper or a new one, so the thread can no longer enqueue or handle messages. If you call Looper.prepare(), it will throw a RuntimeException because the thread already has an attached Looper. If you call Looper.loop(), it will block, but no messages will be dispatched from the queue.
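A sketch of requesting termination from the producer side, assuming a Handler (here called mConsumerHandler, as in the IdleHandler example above) bound to the background thread's Looper and guarding for the API level:

Looper looper = mConsumerHandler.getLooper();
if (Build.VERSION.SDK_INT >= 18) {
    // Process the messages that have already passed the dispatch barrier first.
    looper.quitSafely();
} else {
    // Older platforms: discard all pending messages.
    looper.quit();
}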

The UI thread Looper

The UI thread is the only thread with an associated Looper by default. It is a regular thread, like any other thread created by the application itself, but the Looper is associated with the thread before the application components are initialized. There are a few practical differences between the UI thread Looper and other application thread loopers:

• It is accessible from everywhere, through the Looper.getMainLooper() method.
• It cannot be terminated. Looper.quit() throws a RuntimeException.
• The runtime associates a Looper to the UI thread by Looper.prepareMainLooper(). This can be done only once per application. Thus, trying to attach the main looper to another thread will throw an exception.

Handler

It is a two-sided API that handles both the insertion of messages into the queue and the message processing. As indicated in Figure 4-5, it is invoked from both the producer and the consumer thread, and it is typically used for:

• Creating messages
• Inserting messages into the queue
• Processing messages on the consumer thread
• Managing messages in the queue

Setup

Without a Looper, handlers cannot function; they cannot couple with a queue to insert messages, and consequently they will not receive any messages to process. Hence, a Handler instance is bound to a Looper instance already at construction time:

Constructors without an explicit Looper bind to the Looper of the current thread:

new Handler();
new Handler(Handler.Callback);

Constructors with an explicit Looper bind to that Looper:

new Handler(Looper);
new Handler(Looper, Handler.Callback);

If the constructors without an explicit Looper are called on a thread without a Looper (i.e., one that has not called Looper.prepare()), there is nothing the handler can bind to, leading to a RuntimeException. Once a handler is bound to a Looper, the binding is final. A thread can have multiple handlers; their messages coexist in the queue but are dispatched to the correct Handler instance, as shown in Figure 4-9.
Multiple handlers will not enable concurrent execution. The messages are still in the same queue and are processed sequentially.
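Putting the setup together, a sketch that extends the earlier ConsumerThread example: the Handler created in run() binds to the consumer thread's Looper, and a producer thread uses that Handler to insert messages (the field publication is simplified and would need proper synchronization in real code):

class ConsumerThread extends Thread {

    Handler mHandler;

    @Override
    public void run() {
        Looper.prepare();
        // Binds to the Looper of this (the consumer) thread.
        mHandler = new Handler() {
            @Override
            public void handleMessage(Message msg) {
                // Process messages on the consumer thread.
            }
        };
        Looper.loop();
    }
}

// Producer side, e.g., the UI thread:
ConsumerThread t = new ConsumerThread();
t.start();
// Simplified: assumes the consumer has had time to prepare its Looper and Handler.
t.mHandler.sendEmptyMessage(1);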

Message creation

The message obtained from a Handler is retrieved from the message pool and implicitly connected to the Handler instance that requested it. This connection enables the Looper to dispatch each message to the correct Handler.

Message insertion

The Handler inserts messages in the message queue in various ways depending on the message type. Task messages are inserted through methods prefixed with post, whereas data insertion methods are prefixed with send:

Add a task to the message queue:
boolean post(Runnable r)
...

Add a data object to the message queue:
boolean sendMessage(Message msg)
...

Add a simple data object to the message queue:
boolean sendEmptyMessage(int what)

Every message inserted in the queue comes with a time parameter indicating the time when the message is eligible for dispatch to the consumer thread:

default
Immediately eligible for dispatch.

at_front
This message is eligible for dispatch at time 0. Hence, it will be the next dispatched message, unless another is inserted at the front before this one is processed.

delay
The amount of time after which this message is eligible for dispatch.

uptime
The absolute time at which this message is eligible for dispatch.

Even though explicit delays or uptimes can be specified, the time required to process each message is still indeterminate. It depends both on whatever existing messages need to be processed first and on the operating system scheduling.
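A sketch of the four variants using existing Handler methods (the task, what values, and delay are illustrative):

// default: eligible for dispatch immediately.
handler.post(task);
handler.sendEmptyMessage(1);

// at_front: eligible at time 0, so it is dispatched next.
handler.postAtFrontOfQueue(task);
handler.sendMessageAtFrontOfQueue(handler.obtainMessage(2));

// delay: eligible after a relative delay in milliseconds.
handler.postDelayed(task, 500);
handler.sendEmptyMessageDelayed(3, 500);

// uptime: eligible at an absolute time based on SystemClock.uptimeMillis().
handler.postAtTime(task, SystemClock.uptimeMillis() + 500);
handler.sendMessageAtTime(handler.obtainMessage(4), SystemClock.uptimeMillis() + 500);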

Removing Messages from the Queue

After enqueuing a message, the producer can invoke a method of the Handler class to remove the message, as long as it has not been dequeued by the Looper. Sometimes an application may want to clean the message queue by removing all messages, which is possible, but most often a more fine-grained approach is desired: an application wants to target only a subset of the messages. For that, it needs to be able to identify the correct messages, and messages can be identified by certain properties, as shown in Table 4-4.
The handler identifier is mandatory for every message, because a message always knows which Handler it will be dispatched to. This requirement implicitly restricts each Handler to removing only messages belonging to that Handler. It is not possible for a Handler to remove messages in the queue that were inserted by another Handler. The methods available in the Handler class for managing the message queue are:

• Remove a task from the message queue:
removeCallbacks(Runnable r)
removeCallbacks(Runnable r, Object token)

• Remove a data message from the message queue:
removeMessages(int what)
removeMessages(int what, Object object)

• Remove tasks and data messages from the message queue:
removeCallbacksAndMessages(Object token)

The Object identifier is used in both the data and task message. Hence, it can be assigned to messages as a kind of tag, allowing you later to remove related messages that you have tagged with the same Object. For instance, the following excerpt inserts two messages in the queue to make it possible to remove them later based on the tag:

Object tag = new Object();                       // (1)

Handler handler = new Handler() {
    public void handleMessage(Message msg) {
        // Process message
        Log.d("Example", "Processing message");
    }
};

Message message = handler.obtainMessage(0, tag); // (2)
handler.sendMessage(message);

handler.postAtTime(new Runnable() {
    public void run() {                          // (3)
        // Left empty for brevity
    }
}, tag, SystemClock.uptimeMillis());

handler.removeCallbacksAndMessages(tag);         // (4)

1. The message tag identifier, common to both the task and the data message.
2. The Object in a Message instance is used both as a data container and as an implicitly defined message tag.
3. Post a task message with an explicitly defined message tag.
4. Remove all messages with the tag.

As indicated before, you have no way to find out whether a message was dispatched and handled before you issue a call to remove it. Once the message is dispatched, the producer thread that enqueued it cannot stop its task from executing or its data from being processed.

Communicating with the UI Thread

The UI thread is the only thread in an application that has an associated Looper by default; the Looper is attached to the thread before the first Android component is started. The UI thread can be a consumer, to which other threads can pass messages. It's important to send only short-lived tasks to the UI thread. The UI thread is application global and processes both Android component and system messages sequentially, so long-lived tasks will have a global impact across the application.

Messages are passed to the UI thread through its Looper, which is accessible globally in the application from all threads with Looper.getMainLooper():

Runnable task = new Runnable() {...};
new Handler(Looper.getMainLooper()).post(task);

If it is the UI thread that posts the message to itself, the message can be processed at the earliest after the current message is done:

// Method called on UI thread.
private void postFromUiThreadToUiThread() {
    new Handler().post(new Runnable() { ... });

    // The code at this point is part of a message being processed
    // and is executed before the posted message.
}

However, a task message that is posted from the UI thread to itself can bypass the message passing and execute immediately within the currently processed message on the UI thread with the convenience method Activity.runOnUiThread(Runnable):

// Method called on UI thread.
private void postFromUiThreadToUiThread() {
    runOnUiThread(new Runnable() { ... });

    // The code at this point is executed after the Runnable has finished.
}

The runOnUiThread method can only be executed from an Activity instance, but the same behavior can be implemented by tracking the ID of the UI thread, for example with a convenience method customRunOnUiThread in an Application subclass. The customRunOnUiThread method inserts a message in the queue, as in the following example:

public class EatApplication extends Application {

    private long mUiThreadId;
    private Handler mUiHandler;

    @Override
    public void onCreate() {
        super.onCreate();
        mUiThreadId = Thread.currentThread().getId();
        mUiHandler = new Handler();
    }

    public void customRunOnUiThread(Runnable action) {
        if (Thread.currentThread().getId() != mUiThreadId) {
            mUiHandler.post(action);
        } else {
            action.run();
        }
    }
}

CHAPTER 5 Interprocess Communication

Android application threads most often communicate within a process, sharing the process’s memory, as discussed in Chapter 4. However, communication across process boundaries—i.e., interprocess communication (IPC)—is supported by the Android platform through the binder framework, which manages the data transactions when there is no shared memory area between the threads. The most common IPC use cases are handled by high-level components in Android, such as intents, system services, and content providers. They can be used by an application without it having to know whether it communicates within the process or between processes. Sometimes, however, it is necessary for an application to define a more explicit communication model and be more involved in the actual communication.

Android RPC (remote procedure calls)

IPC is managed by the Linux OS, which supports several IPC techniques: signals, pipes, message queues, semaphores, and shared memory. In Android's modified Linux kernel, the traditional Linux IPC techniques have been replaced by the binder framework, which enables an RPC mechanism between processes: a client process can call remote methods in a server process as if the methods were executed locally. Hence, data can be passed to the server process, a method can be executed on one of its threads, and a result value can be returned to the calling thread. The RPC method call itself is trivial from the caller's point of view, because the Android application framework and core libraries abstract away the process communication with the binder framework and the Android Interface Definition Language (AIDL).

Binder

The binder enables applications to transfer both functions and data—method calls—between threads running in different processes. The server process defines a remote interface supported by the android.os.Binder class, and threads in a client process can access the remote interface through this remote object. A remote procedure call that transfers both a function and data is called a transaction; the client process calls the transact method, and the server process receives the call in the onTransact method (Figure 5-1).
The client thread calling transact is blocked by default until onTransact has finished executing on the remote thread. Transaction data consists of android.os.Parcel objects, which are optimized to be sent across processes via the Binder. The onTransact method is executed on a thread from a pool of binder threads. This pool exists only to handle incoming requests from other processes. It has a maximum of 16 threads, so 16 remote calls can be handled concurrently in every process. IPC can be bidirectional. Binders also support asynchronous transactions, which you can specify by setting IBinder.FLAG_ONEWAY. With that flag set, the client thread calls transact and returns immediately; the binder framework will still call onTransact on a binder thread in the server process, but it cannot return any data synchronously to the client thread.
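A rough sketch of a raw transaction without AIDL, to make the transact/onTransact pairing concrete (the transaction code, class name, and strings are invented for the example; in practice the AIDL-generated stubs described next do this work):

// Server process: a Binder subclass handles incoming transactions on a binder thread.
class EchoBinder extends Binder {
    static final int ECHO_TRANSACTION = IBinder.FIRST_CALL_TRANSACTION;

    @Override
    protected boolean onTransact(int code, Parcel data, Parcel reply, int flags)
            throws RemoteException {
        if (code == ECHO_TRANSACTION) {
            String request = data.readString();     // Unmarshal the request.
            reply.writeString("echo: " + request);  // Marshal the result.
            return true;
        }
        return super.onTransact(code, data, reply, flags);
    }
}

// Client process: blocks until onTransact has finished in the server process.
void callEcho(IBinder binder) throws RemoteException {
    Parcel data = Parcel.obtain();
    Parcel reply = Parcel.obtain();
    try {
        data.writeString("hello");
        binder.transact(EchoBinder.ECHO_TRANSACTION, data, reply, 0);
        String result = reply.readString();
    } finally {
        data.recycle();
        reply.recycle();
    }
}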

AIDL

When a process wants to expose functionality for other processes to access, it has to define the communication contract. Basically, the server defines an interface of methods that clients can call. The simplest and most common way to describe the interface is with the Android Interface Definition Language (AIDL), defined in an .aidl file. Compilation of the AIDL file generates Java code that supports IPC.

Asynchronous RPC

Asynchronous methods must return void and must not have arguments declared out or inout. To retrieve results, the implementation uses a callback. Asynchronous RPC is defined in AIDL with the oneway keyword, which can be applied at either the interface level or on an individual method:

Asynchronous interface
All methods are executed asynchronously:

oneway interface IAsynchronousInterface {
    void method1();
    void method2();
}

Asynchronous method
The method is executed asynchronously:

interface IAsynchronousInterface {
    oneway void method1();
    void method2();
}

The simplest form of asynchronous RPC defines a callback interface in the method call. The callback is a reverse RPC, such as a call from the server to the client. Thus, the callback interface is also defined in AIDL:

package com.example.IPC;

import com.example.IPC.IAsynchronousCallback;

interface IAsynchronous1 {
    oneway void getThreadNameSlow(IAsynchronousCallback callback);
}

The callback interface is declared in AIDL as follows:

package com.example.IPC;

interface IAsynchronousCallback {
    void handleResult(String name);
}

The implementation of the remote interface in the server process follows. At the end of the method, the result is returned in the callback method:

IAsynchronous1.Stub mIAsynchronous1 = new IAsynchronous1.Stub() {
    @Override
    public void getThreadNameSlow(IAsynchronousCallback callback)
            throws RemoteException {
        // Simulate a slow call.
        String threadName = Thread.currentThread().getName();
        SystemClock.sleep(10000);
        callback.handleResult(threadName);
    }
};

@Override
public IBinder onBind(final Intent intent) {
    return mIAsynchronous1.asBinder();
}

And the implementation of the callback interface in the client process handles the result:

private IAsynchronous1 serverInterface;

@Override
public void onCreate(Bundle savedInstanceState) {
    ...
    Intent serviceIntent = new Intent(this, MyService.class);
    bindService(serviceIntent, mConnection, Service.BIND_AUTO_CREATE);
}

ServiceConnection mConnection = new ServiceConnection() {
    @Override
    public void onServiceConnected(final ComponentName name, final IBinder service) {
        serverInterface = IAsynchronous1.Stub.asInterface(service);
        try {
            serverInterface.getThreadNameSlow(mCallback);
        } catch (RemoteException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void onServiceDisconnected(final ComponentName name) {
    }
};

private IAsynchronousCallback.Stub mCallback = new IAsynchronousCallback.Stub() {
    @Override
    public void handleResult(String remoteThreadName) throws RemoteException {
        // Handle the callback.
        Log.d(TAG, "remoteThreadName = " + remoteThreadName);
        Log.d(TAG, "currentThreadName = " + Thread.currentThread().getName());
    }
};

Note that both thread names—remote and current—are printed as "Binder_1", but they belong to different binder threads, from the client and server process, respectively.

Note: to run the Service in another process, declare it in the manifest with a process attribute:

<service android:name=".MyService" android:process=":myprocess" />

Message Passing Using the Binder

The Android platform provides a flexible kind of interthread communication through message passing. However, it requires that the threads execute in the same process, because the Message objects are located in the memory shared by the threads. If the threads execute in different processes, they do not have any common memory for sharing messages; instead, the messages have to be passed across process boundaries, using the binder framework. For this purpose, you can use the android.os.Messenger class to send messages to a dedicated Handler in a remote process. The Handler is not sent across processes; instead, the Messenger acts as the intermediary. Figure 5-4 shows the elements of message passing between processes. A Message can be sent to a thread in another process with the Messenger, but the sending process (the client) has to retrieve a Messenger reference from the receiving process (server).
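On the receiving side, the Messenger simply wraps a Handler and hands out its IBinder from onBind(); a sketch (the Service name and Handler placement are simplified assumptions for the example):

public class MessengerService extends Service {

    // Handler bound to the thread that should consume incoming messages.
    private final Handler mHandler = new Handler() {
        @Override
        public void handleMessage(Message msg) {
            // Process messages sent from other processes.
        }
    };

    private final Messenger mMessenger = new Messenger(mHandler);

    @Override
    public IBinder onBind(Intent intent) {
        // The client wraps this IBinder in its own Messenger to send messages here.
        return mMessenger.getBinder();
    }
}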

Two-Way Communication

A Message passed across processes keeps a reference to a Messenger in the originating process in the Message.replyTo argument, which is one of the data types that a Message can carry. This reference can be used to create a two-way communication mechanism between two threads in different processes. The following code example illustrates two-way communication between an Activity and a Service executing in different processes. The Activity sends a Message, with a replyTo argument, to a remote Service:

private Messenger mRemoteService = null;
...
public void onServiceConnected(ComponentName className, IBinder service) {
    mRemoteService = new Messenger(service);
    mBound = true;
}
...
public void onSendClick(View v) {
    if (mBound) {
        try {
            Message msg = Message.obtain(null, 1, 0, 0);
            msg.replyTo = new Messenger(new Handler() {  // (1)
                @Override
                public void handleMessage(Message msg) {
                    Log.d(TAG, "Message sent back - msg.what = " + msg.what);
                }
            });
            mRemoteService.send(msg);
        } catch (RemoteException e) {
            Log.e(TAG, e.getMessage());
        }
    }
}

1. Create a Messenger that is passed to the remote Service. The Messenger holds a Handler reference to the current thread, which executes the messages sent back from the other process.

The Service receives the Message and sends a new Message back to the Activity:

public void run() {
    Looper.prepare();
    mWorkerHandler = new Handler() {
        @Override
        public void handleMessage(Message msg) {
            switch (msg.what) {
                case 1:
                    try {
                        msg.replyTo.send(Message.obtain(null, msg.what, 0, 0));
                    } catch (RemoteException e) {
                        Log.e(TAG, e.getMessage());
                    }
                    break;
            }
        }
    };
    onWorkerPrepared();
    Looper.loop();
}

The Messenger is coupled with a Handler that processes messages on the thread it belongs to. Hence, task execution is sequential by design, compared to AIDL, which can execute tasks concurrently on binder threads. RPC is preferred if you want to improve performance by handling incoming requests concurrently. If not, Messenger is an easier approach to implement, but its execution is single threaded.

CHAPTER 6 Memory Management

Garbage Collection

The Dalvik VM is a memory-managed system: it reclaims allocated memory from the shared memory area known as the heap with the garbage collector (GC) when the heap grows too large. Each process—and consequently each application—has its own VM and its own garbage collector. In spite of this, an application can fill up the heap with allocated objects that cannot be reclaimed in time, which causes memory leaks. An application continuously creates new objects during its lifetime, and the objects are created on the heap irrespective of scope—i.e., whether they are instance fields or local variables. When an object is not used anymore, the GC removes the object from the heap, freeing up the memory for new allocations. The GC can reclaim the memory when an object or its parents have no more strong references to it. As long as an object is referenced, it is considered to be reachable and not eligible for garbage collection. But once it becomes unreachable, the GC can finalize the object and reclaim the memory. If objects are reachable without being used anymore, the GC cannot reclaim the allocated memory. Ultimately, this leakage can lead to the exhaustion of the application heap, causing termination of the process and a notification to the application through a java.lang.OutOfMemoryError.

The Dalvik GC uses a very common two-step mechanism called mark and sweep. The mark step traverses the object trees and marks all objects that are not referenced by any other objects as unused. Unused objects become eligible for garbage collection, and the sweep step deallocates all the marked objects. An object is said to be unused if it is unreachable from any of the application's garbage collection roots, which are Java objects acting as starting points for the traversal of the object trees. GC roots themselves are not considered unused or eligible for garbage collection, even though no other application object references them.
A small example of an object tree appears in Figure 6-1. It leads to the following dependency chains:

1. GC root A→A1→A2
2. GC root B→B1→B2→B3→B4
3. GC root B→B1→B2→B4

All A and B objects are linked to a GC root and will not be considered unused, because all of them are referenced by other objects and connected to a GC root. Each of the C objects has a reference to it (from the other C object), but since neither of them connects to a GC root, they are considered to be an island of objects that can be removed. An object is unreachable once the last reference to it is removed, or if none of the remaining references have any connection to a GC root.

Any object that is accessible from outside the heap is considered to be a GC root. This includes static objects, local objects on the stack, and threads. Thus, objects directly or indirectly referenced from a thread will be reachable during the execution of the thread.

Thread-Related Memory Leaks

There are two important characteristics of memory leaks in regard to threads:

Potential risk
The risk of a memory leak increases with the time a thread is alive and keeps object references. Short-lived thread instances are seldom a cause of memory leaks, but threads that are long-lived—due to running long tasks, processing messages, blocking, etc.—can keep references to objects that may not be required anymore.

Leak size
An application that occasionally leaks a small amount of memory will probably work fine most of the time, and the leakage will pass unnoticed. But if the leaked objects are large—e.g., bitmaps, view hierarchies, etc.—a few leaks may be enough to exhaust the application's heap.

Thread Execution

When the thread is executing, the Thread object itself becomes a GC root, and all objects it references are reachable. Similarly, all objects directly referenced from an executing Runnable are GC roots. Hence, while a thread executes, both the Thread and the Runnable instance can hold references to other objects that cannot be reclaimed until the thread terminates. Objects created in a method are eligible for garbage collection when the method returns, unless the method returns the object to its caller so that it can be referenced from other methods.

Inner classes

Inner classes are members of the enclosing object and have access to all the other members of the outer class. Hence, the inner class implicitly has a reference to the outer class (see Figure 6-3). Consequently, threads defined as inner classes keep references to the outer class, which will never be marked for garbage collection as long as the thread is executing. In the following example, any objects referenced by the Outer instance must be kept in memory, along with the objects of the inner SampleThread class, as long as that thread is running.
public class Outer {

    public void sampleMethod() {
        SampleThread sampleThread = new SampleThread();
        sampleThread.start();
    }

    private class SampleThread extends Thread {
        public void run() {
            Object sampleObject = new Object();
            // Do execution
        }
    }
}

Threads defined as local classes and anonymous inner classes have the same relation to the outer class as inner classes, keeping the outer class reachable from a GC root during execution.

Static inner classes

Static inner classes are members of the Class object of the enclosing class. Threads defined in a static inner class therefore keep references to the class of the outer object, but not to the outer object itself (Figure 6-4). Therefore, the outer object can be garbage collected once other references to it disappear. This rule applies, for instance, in the following code.
public class Outer {

    public void sampleMethod() {
        SampleThread sampleThread = new SampleThread();
        sampleThread.start();
    }

    private static class SampleThread extends Thread {
        public void run() {
            Object sampleObject = new Object();
            // Do execution
        }
    }
}

However, on most occasions, the programmer wants to separate the execution environment (Thread) from the task (Runnable). If you create a new Runnable as an inner class, it will hold a reference to the outer class during the execution, even if it is run by a static inner class. Code such as the following produces the situation in Figure 6-5.
public class Outer {

    public void sampleMethod() {
        SampleThread sampleThread = new SampleThread(new Runnable() {
            @Override
            public void run() {
                Object sampleObject = new Object();
                // Do execution
            }
        });
        sampleThread.start();
    }

    private static class SampleThread extends Thread {
        public SampleThread(Runnable runnable) {
            super(runnable);
        }
    }
}

The lifecycle mismatch

A fundamental reason for leakage on Android is the lifecycle mismatch between components, objects, and threads. Objects are allocated on the heap, can be eligible for garbage collection, and are kept in memory when they are referenced by threads. In Android, however, it is not only the lifecycle of the object that the application has to handle, but also the lifecycle of its components. All components—Activity, Service, BroadcastReceiver, and ContentProvider—have their own lifecycles that do not comply with their objects’ lifecycles. Leaking Activity objects is the most serious—and probably the most common—component leak. An Activity holds references, for instance, to the view hierarchy that may contain a lot of heap allocations. Figure 6-6 illustrates the component and object lifecycles of an Activity.
When the Activity component has finished, the Activity object may still remain on the heap. It is the garbage collector that determines when the Activity object can be removed. If any references to the Activity object linger after the component is destroyed, the object remains on the heap and is not eligible for garbage collection. As Figure 6-6 illustrates, multiple Activity objects for the same Activity component can coexist on the heap. Worker threads can cause such a memory leak, because threads can continue to execute in the background even after the component is destroyed. Figure 6-7 illustrates how an Activity object lingers on the heap long after the component has finished its lifetime: Activity A started a worker thread that is still executing in the background, and having been created by the Activity, the thread references the Activity object.
!!! Automatically started threads pose a higher memory leakage risk than user started threads, as configuration changes and user navigation can yield many concurrent threads with Activity object references.

Thread Communication

Thread execution is a potential source of memory leaks, and so is the message passing mechanism between threads. These leaks can happen whether the executor is the UI thread or another thread created by the application. The Handler is a candidate for memory leaks. The Message instance passed between the threads holds references to a Handler and either to data (Object) or to a task (Runnable). From its creation through its recycling, a Message holds a Handler reference to the consumer thread. While the message is pending in the message queue or being executed on the thread, it is ineligible for garbage collection, and the Handler and the Object or Runnable, together with all their implicit and explicit references, are still reachable from a GC root. We will look at two code examples: sending a data message and posting a Runnable.

Sending a data message

Data messages can be passed in various ways; the chosen implementation determines both the risk for, and the size of, a memory leak. The following code example illustrates the implementation pitfalls.

public class Outer {

    Handler mHandler = new Handler() {
        @Override
        public void handleMessage(Message msg) {
            // Handle message
        }
    };

    public void doSend() {
        Message message = mHandler.obtainMessage();
        message.obj = new SampleObject();
        mHandler.sendMessageDelayed(message, 60 * 1000);
    }
}

Figure 6-9 illustrates the object reference tree in the executing thread, from the time the Message has been sent to the message queue till the time it is recycled, i.e., after the Handler has processed it. The reference chain has been shortened for brevity to cover just the key objects we want to trace.
The code example violates both memory leak characteristics: it lets a thread hold references to more objects than necessary, and it keeps the references reachable for a long time.

Posting a task message

Posting a Runnable, to be executed on a consumer Thread with a Looper, raises the same concerns as sending a Message but with an additional Outer class reference to watch out for:

public class Outer {

    Handler mHandler = new Handler() {
        @Override
        public void handleMessage(Message msg) {
            // Handle message
        }
    };

    public void doPost() {
        mHandler.post(new Runnable() {
            @Override
            public void run() {
                // Long running task
            }
        });
    }
}

This simple code example posts a Runnable from the thread that calls doPost. Both the Handler and Runnable instances refer to the Outer class and increase the size of a potential memory leak, as shown in Figure 6-10.
The risk of a memory leak increases with the length of the task; short-lived tasks reduce the risk. Once a Message object is added to the message queue, the Message is indirectly referenced from the consumer thread. The longer the Message is pending in the queue, or the longer its execution on the receiving thread, the higher the risk of a memory leak.

Avoiding Memory Leaks

Let us look at how to avoid—or mitigate—these memory leaks.

Use Static Inner Classes

Local classes, inner classes, and anonymous inner classes all hold implicit references to the outer class they are declared in. Hence, they can leak not only their own objects, but also those referenced from the outer class. Typically, an Activity and its view hierarchy can cause a major leak through the outer class reference. Instead of using nested classes with outer class references, it is preferred to use static inner classes because they reference only the global class object and not the instance object. This just mitigates the leak, because all explicit references to other instance objects from the static inner class are still live while the thread executes.
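As a minimal sketch (the class names are illustrative, not from the original listings), the anonymous Runnable from the earlier example can be replaced by a static nested class that carries no implicit reference to the enclosing instance:

public class Outer {

    public void sampleMethod() {
        // The static nested task holds no implicit reference to Outer.
        new Thread(new SampleTask()).start();
    }

    private static class SampleTask implements Runnable {
        @Override
        public void run() {
            Object sampleObject = new Object();
            // Do execution
        }
    }
}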

Use Weak References

As we have seen, static inner classes do not have access to instance fields of the outer class. This can be a limitation if an application would like to execute a task on a worker thread and access or update an instance field of the outer instance. For this need, java.lang.ref.WeakReference comes to the rescue:

public class Outer {

    private int mField;

    private static class SampleThread extends Thread {
        private final WeakReference<Outer> mOuter;

        SampleThread(Outer outer) {
            mOuter = new WeakReference<Outer>(outer);
        }

        @Override
        public void run() {
            // Do execution and update outer class instance fields.
            // Hold the referent in a local variable; the outer instance may
            // have been garbage collected, in which case get() returns null.
            Outer outer = mOuter.get();
            if (outer != null) {
                outer.mField = 1;
            }
        }
    }
}

In the code example, the Outer class is referenced through a weak reference, meaning that the static inner class holds a reference to the outer class and can access the outer instance fields. Unlike a strong (normal) reference, a weak reference does not prevent the garbage collector from reclaiming the referenced object. So if the only remaining reference to the outer object is the weak reference from the inner class, the garbage collector sees the object as eligible for garbage collection and may deallocate the outer instance from the heap.

Stop Worker Thread Execution

Implementing Thread, Runnable, and Handler as static inner classes, nullifying explicit strong references, or using weak references will mitigate a memory leak but not totally prevent it. The executing thread may still hold some references that cannot be garbage collected. So to prevent the thread from delaying object deallocation, it should be terminated as soon as it is not required anymore.
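A minimal sketch of this idea, assuming the component keeps a reference to the worker thread it started (field and class names are illustrative):

public class SampleActivity extends Activity {

    private Thread mWorkerThread; // started elsewhere, e.g., in onCreate

    @Override
    protected void onDestroy() {
        super.onDestroy();
        if (mWorkerThread != null) {
            // Request termination; the thread must implement cancellation
            // points for the interrupt to have any effect.
            mWorkerThread.interrupt();
        }
    }
}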

Retain Worker Threads

Figure 6-7 shows how the lifecycle mismatch between components, objects, and threads can keep objects alive longer than necessary. Typically, the prolonged lifetimes are caused by configuration changes in activities, where the old Activity object is kept in memory for as long as the thread executes. By retaining the thread from the old to the new Activity and removing the thread reference from the old Activity, you can allow the old Activity object to be garbage collected.

Clean Up the Message Queue

If a message is pending when it is no longer needed, you should remove it from the message queue so that all its referenced objects can be deallocated. Messages sent to a worker thread can be garbage collected once the worker thread finishes, but the UI thread cannot finish until the application process terminates. Therefore, cleaning up messages sent to the UI thread is a valuable way to avoid memory leaks. Both Message and Runnable instances can be removed from the queue through the Handler:

removeCallbacks(Runnable r)
removeCallbacks(Runnable r, Object token)
removeCallbacksAndMessages(Object token)
removeMessages(int what)
removeMessages(int what, Object object)

A Runnable must be removed with a reference to its instance, whereas Messages can be removed with the identifiers what and token.
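As a sketch, assuming mHandler is a Handler bound to the UI thread and used by an Activity, the pending messages can be cleared when the component is destroyed:

@Override
protected void onDestroy() {
    super.onDestroy();
    // Remove all pending messages and tasks posted to this Handler so that
    // the objects they reference become eligible for garbage collection.
    mHandler.removeCallbacksAndMessages(null);
}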

PART II

CHAPTER 7 Managing the Lifecycle of a Basic Thread

Lifecycle

New
The Thread object is created. The instantiation does not set up the execution environment, so it is no heavier than any other object instantiation.

Runnable
When Thread.start() is called, the execution environment is set up and the thread is ready to be executed. It is now in the Runnable state. When the scheduler selects the thread for execution, the run method is called and the task is executed.

Blocked/Waiting
Execution can halt when the thread has to wait for a resource that is not directly accessible—for example, an I/O operation, synchronized resources used by other threads, blocking API calls, etc. But execution can also be given up explicitly:
1. Thread.sleep(): Lets the thread sleep for a certain amount of time and then makes it available to be scheduled for execution again.
2. Thread.yield(): Gives up execution and lets the scheduler make a new decision on which thread to execute. The scheduler freely selects which thread to execute, and there is no guarantee that it will choose a different thread.

Terminated
When the run method has finished execution, the thread is terminated and its resources can be freed up. This is the final state of the thread; no reuse of the Thread instance or its execution environment is possible. Setting up and tearing down the execution environment is a heavy operation; doing it over and over again is a sign that another solution, such as thread pools (see Chapter 9), is preferred.

Interruptions

Occasionally, an application wants to terminate the thread’s execution before it has finished its task. There is, however, no way a thread can be directly terminated. Instead, threads can be interrupted, which is a request to the thread that it should terminate, but it is the thread itself that determines whether to oblige or not. Interruptions are invoked on the thread reference:

Thread t = new SimpleThread();
t.start();     // Start the thread
t.interrupt(); // Request interruption

Thread interruption is implemented collaboratively: the thread makes itself available to be interrupted, and other threads issue the call to interrupt it. Issuing an interruption has no direct impact on the execution of the thread; it merely sets an internal flag on the thread that marks it as interrupted. The interrupted thread has to check the flag itself to detect the interruption and terminate gracefully. A thread must implement cancellation points in order to allow other threads to interrupt it and get it to terminate:

public class SimpleThread extends Thread {
    @Override
    public void run() {
        while (isInterrupted() == false) {
            // Thread is alive
        }
        // Task finished and thread terminates
    }
}

Cancellation points are implemented by checking the interrupt flag with the isInterrupted() instance method, which returns true if the thread has been interrupted, and false otherwise. Typically, cancellation points are implemented in loops, or before long-running tasks are executed, to enable the thread to skip the next part in the task execution. The interrupt flag is also supported by most blocking methods and libraries; a thread that is currently blocked will throw an InterruptedException upon being interrupted. When an InterruptedException is thrown, the interrupted flag is reset—that is, isInterrupted will return false even though the thread has been interrupted. This may lead to problems further up in the thread callstack because no one will know that the thread has been interrupted. So if the thread doesn’t have to perform any cleanup upon interruption, the thread should pass the InterruptedException further up in the callstack. If cleanup is required, it should be done in the catch clause, after which the thread should interrupt itself again so that callers of the executed method are aware of the interruption, as shown in the following example:

void myMethod() {
    try {
        // Some blocking call
    } catch (InterruptedException e) {
        // 1. Clean up
        // 2. Interrupt again
        Thread.currentThread().interrupt();
    }
}

!!! Interruption state can also be checked with the Thread.interrupted() static method, which returns a boolean value in the same way as isInterrupted(). However, Thread.interrupted() comes with a side effect: it clears the interruption flag.

Retention

A thread does not follow the lifecycle of an Android component that has started it or its underlying objects. Once a thread is started, it will execute until either its run method finishes or the whole application process terminates. Therefore, the thread lifetime can outlive the component lifetime. When the thread finishes, it may have produced a result that was meant to be used by the component, but there is no receiver available. Typically, this situation occurs on configuration changes in Activity components. The default behavior is to restart the component when its configuration has changed, meaning that the original Activity object is replaced by a new one without any knowledge of the executing background thread. Only the Activity object that started the thread knows that the thread was started, so the new Activity cannot utilize the thread’s result; it has to restart the thread over again to collect the data. This can lead to unnecessary work. For example, if a worker thread is set to download a large chunk of data, and a configuration change occurs during the download, it is a waste to throw the partial result away. Instead, a better approach is to retain the thread during the configuration change and let the new Activity object handle the thread started by the old Activity object.

Retaining a thread in an Activity

The Activity class contains two methods for handling thread retention: public Object onRetainNonConfigurationInstance() Called by the platform before a configuration change occurs. public Object getLastNonConfigurationInstance() It can be called in onCreate or onStart and returns null if the Activity is started for another reason than a configuration change. As the ThreadRetainActivity listing shows, an alive thread can be passed across Activity objects during a configuration change. public class ThreadRetainActivity extends Activity { private static MyThread t; private static class MyThread extends Thread { private ThreadRetainActivity mActivity; public MyThread(ThreadRetainActivity activity) { mActivity = activity; } private void attach(ThreadRetainActivity activity) { mActivity = activity; } ... } @Override public void onCreate(Bundle savedInstanceState) { ... Object retainedObject = getLastNonConfigurationInstance(); if (retainedObject != null) { t = (MyThread) retainedObject; t.attach(this); } } @Override public Object onRetainNonConfigurationInstance() { if (t != null && t.isAlive()) { return t; } return null; } ... } Retained objects—e.g., threads—bring their references over to the next Activity. Threads declared with references to the outer class —i.e., the Activity—will stop the garbage collector from reclaiming the old Activity and its view tree, although it will never be used anymore.

Retaining a thread in a Fragment

A Fragment normally implements part of the user interface in an Activity, but since instance retention is easier with a Fragment, the responsibility to retain Thread instances can be moved from an Activity to a Fragment. The Fragment can be added to an Activity just to handle thread retention, without containing any UI elements. In a Fragment, all that is required to retain a thread, or any other state, is to call setRetainInstance(true) in Fragment.onCreate(). The Fragment is then retained during a configuration change. The actual Fragment lifecycle is changed so that it does not get destroyed during configuration changes. Worker threads remain in the same Frag ment instance while the platform handles the retention between the Activity and Fragment. public class ThreadRetainWithFragmentActivity extends Activity { private ThreadFragment mThreadFragment; private TextView mTextView; public void onCreate(Bundle savedInstanceState) { setContentView(R.layout.activity_retain_thread); mTextView = (TextView) findViewById(R.id.text_retain); FragmentManager manager = getFragmentManager(); mThreadFragment = (ThreadFragment) manager.findFragmentByTag("threadfragment"); if (mThreadFragment == null) { FragmentTransaction transaction = manager.beginTransaction(); mThreadFragment = new ThreadFragment(); transaction.add(mThreadFragment, "threadfragment"); transaction.commit(); } } // Method called to start a worker thread public void onStartThread(View v) { mThreadFragment.execute(); } public void setText(final String text) { runOnUiThread(new Runnable() { @Override public void run() { mTextView.setText(text); } }); } } The Fragment defines the worker thread and starts it: public class ThreadFragment extends Fragment { private ThreadRetainWithFragmentActivity mActivity; private MyThread t; private class MyThread extends Thread { @Override public void run() { final String text = getTextFromNetwork(); mActivity.setText(text); } // Long operation private String getTextFromNetwork() { // Simulate network operation SystemClock.sleep(5000); return "Text from network"; } } @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setRetainInstance(true); } @Override public void onAttach(Activity activity) { super.onAttach(activity); mActivity = (ThreadRetainWithFragmentActivity) activity; } @Override public void onDetach() { super.onDetach(); mActivity = null; } public void execute() { t = new MyThread(); t.start(); } } Worker threads that execute across the lifecycle of multiple activities are better handled with services.

CHAPTER 8 HandlerThread: A High-Level Queueing Mechanism

HandlerThread is a thread with a message queue that incorporates a Thread, a Looper, and a MessageQueue. It is constructed and started in the same way as a Thread. Once it is started, HandlerThread sets up queuing through a Looper and MessageQueue and then waits for incoming messages to process:

HandlerThread handlerThread = new HandlerThread("HandlerThread");
handlerThread.start();
mHandler = new Handler(handlerThread.getLooper()) {
    @Override
    public void handleMessage(Message msg) {
        super.handleMessage(msg);
        // Process messages here
    }
};

HandlerThread is a convenient wrapper that automatically sets up the internal message passing mechanisms. There is only one queue to store messages, so execution is guaranteed to be sequential—and therefore thread safe—but with potentially low throughput, because tasks can be delayed in the queue. The HandlerThread sets up the Looper internally and prepares the thread for receiving messages. The internal setup guarantees that there is no race condition between creating the Looper and sending messages: the platform makes handlerThread.getLooper() a blocking call until the HandlerThread is ready to receive messages. If additional setup is required on the HandlerThread before it starts to process messages, the application should override HandlerThread.onLooperPrepared(), which is invoked on the background thread when the Looper is prepared.

Limit Access to HandlerThread: A Handler can be used to pass any data message or task to the HandlerThread, but access to the Handler can be limited by keeping it private in a subclass implementation.

Lifecycle

A running HandlerThread instance processes messages that it receives until it is terminated. A terminated HandlerThread cannot be reused; to process more messages after termination, create a new instance of HandlerThread. The lifecycle can be described as a set of states:

1. Creation: The constructor for HandlerThread takes a mandatory name argument and an optional priority for the thread:

HandlerThread(String name)
HandlerThread(String name, int priority)

The default priority is Process.THREAD_PRIORITY_DEFAULT—the same priority as the UI thread—and can be lowered to Process.THREAD_PRIORITY_BACKGROUND to execute noncritical tasks.

2. Execution: The HandlerThread is active while it can process messages; i.e., as long as the Looper can dispatch messages to the thread. The dispatch mechanism is set up when the thread is started through HandlerThread.start and is ready when HandlerThread.getLooper returns.

3. Reset: The message queue can be reset so that no more of the queued messages will be processed, but the thread remains alive and can process new messages.

public void resetHandlerThread() {
    mHandler.removeCallbacksAndMessages(null);
}

The argument to removeCallbacksAndMessages removes the messages with that specific identifier; null, shown here, removes all the messages in the queue.

4. Termination: A HandlerThread is terminated either with quit or quitSafely, which corresponds to the termination of the Looper. With quit, no further messages will be dispatched to the HandlerThread, whereas quitSafely ensures that messages that have passed the dispatch barrier are processed before the thread is terminated. You can also send an interrupt to the HandlerThread to cancel the currently executing message.

public void stopHandlerThread(HandlerThread handlerThread) {
    handlerThread.quit();
    handlerThread.interrupt();
}

A terminated HandlerThread instance has reached its final state and cannot be restarted. A HandlerThread can also be terminated by sending a finalization task to the Handler that quits the Looper, and consequently the HandlerThread:

handler.post(new Runnable() {
    @Override
    public void run() {
        Looper.myLooper().quit();
    }
});

Use Cases

A HandlerThread is applicable to many background execution use cases, where sequential execution and control of the message queue is desired. This section shows a range of use cases where HandlerThread comes in handy.

Repeated Task Execution

Many Android components relieve the UI thread by executing tasks on background threads. If it is not necessary to have concurrent execution in several threads—for example, multiple independent network requests—the HandlerThread provides a simple and efficient way to define tasks to be executed sequentially in the background. Hence, the execution setup for this situation is the UI thread—available by default—and a HandlerThread with a lifecycle that follows that of the component. Thus, HandlerThread.start is called at the start of a component and HandlerThread.quit upon the termination of the component. In between, there is a background thread available for offloading the UI thread. The tasks to execute can be either predefined Runnable or Message instances.
!!! Don’t mix long or blocking tasks with shorter tasks, because the shorter ones may be postponed unnecessarily. Instead, split execution among several HandlerThread instances or use an Executor.
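A minimal sketch of this setup, with a HandlerThread whose lifecycle follows that of an Activity (names are illustrative):

public class SampleActivity extends Activity {

    private HandlerThread mWorkerThread;
    private Handler mWorkerHandler;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        mWorkerThread = new HandlerThread("SampleWorker");
        mWorkerThread.start();
        // getLooper() blocks until the Looper is prepared.
        mWorkerHandler = new Handler(mWorkerThread.getLooper());
    }

    private void offloadTask() {
        mWorkerHandler.post(new Runnable() {
            @Override
            public void run() {
                // Task executed sequentially on the background thread
            }
        });
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        mWorkerThread.quit();
    }
}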

Related Tasks

Interdependent tasks—e.g., those that access shared resources, such as the file system—can be executed concurrently, but they normally require synchronization to render them thread safe and ensure uncorrupted data. The sequential execution of HandlerThread guarantees thread safety, task ordering, and lower resource consumption than the creation of multiple threads. Therefore, it is useful for executing nonindependent tasks.

Example: Data persistence with SharedPreferences

SharedPreferences is persistent storage for user preferences on the file system. Consequently, it should only be accessed from background threads. But file system access is not thread safe, so a HandlerThread—with sequential execution—makes the access thread safe without adding synchronization, which is normally a simpler approach. The following example shows how a HandlerThread can carry out the job: public class SharedPreferencesActivity extends Activity { TextView mTextValue; /** * Show read value in a TextView. */ private Handler mUiHandler = new Handler() { 1 @Override public void handleMessage(Message msg) { super.handleMessage(msg); if (msg.what == 0) { Integer i = (Integer)msg.obj; mTextValue.setText(Integer.toString(i)); } } }; private class SharedPreferenceThread extends HandlerThread { 2 private static final String KEY = "key"; private SharedPreferences mPrefs; private static final int READ = 1; private static final int WRITE = 2; private Handler mHandler; public SharedPreferenceThread() { super("SharedPreferenceThread", Process.THREAD_PRIORITY_BACKGROUND); mPrefs = getSharedPreferences("LocalPrefs", MODE_PRIVATE); } @Override protected void onLooperPrepared() { super.onLooperPrepared(); mHandler = new Handler(getLooper()) { @Override public void handleMessage(Message msg) { switch(msg.what) { case READ: mUiHandler.sendMessage(mUiHandler.obtainMessage(0, mPrefs.getInt(KEY, 0))); break; case WRITE: SharedPreferences.Editor editor = mPrefs.edit(); editor.putInt(KEY, (Integer)msg.obj); editor.commit(); break; } } }; } public void read() { mHandler.sendEmptyMessage(READ); } public void write(int i) { mHandler.sendMessage(Message.obtain(Message.obtain(mHandler, WRITE, i))); } } private int mCount; private SharedPreferenceThread mThread; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_shared_preferences); mTextValue = (TextView) findViewById(R.id.text_value); mThread = new SharedPreferenceThread(); mThread.start(); 3 } /** * Write dummy value from the UI thread. */ public void onButtonClickWrite(View v) { mThread.write(mCount++); } /** * Initiate a read from the UI thread. */ public void onButtonClickRead(View v) { mThread.read(); } /** * Ensure that the background thread is terminated with the Activity. */ @Override protected void onDestroy() { super.onDestroy(); mThread.quit(); 4 } } 1 Handler to the UI thread, used by the background thread to communicate with the UI thread. 2 Background thread that reads and writes values to SharedPreferences. 3 Start background thread when the Activity is created. 4 Stop background thread when the Activity is destroyed.

Summary

HandlerThread provides a single-threaded, sequential task executor with fine-grained message control. It is the most fundamental form of message passing to a background thread, and it can be kept alive during a component lifecycle to provide low-resource background execution. The flexibility of message passing makes the HandlerThread a strong candidate for customizable sequential executors.

CHAPTER 9 Control over Thread Execution Through the Executor Framework

Executor

The fundamental component of the Executor framework is the simple Executor interface. Its main goal is to separate the creation of a task (such as a Runnable) from its execution. The interface includes just one method:

public interface Executor {
    void execute(Runnable command);
}

Executor provides a better separation between submitting a task and its actual execution. An Executor implementation in its simplest form creates a thread for every task:

public class SimpleExecutor implements Executor {
    @Override
    public void execute(Runnable runnable) {
        new Thread(runnable).start();
    }
}

The SimpleExecutor provides no more functionality than creating threads as anonymous inner classes directly, so it may look superfluous, but it provides advantages nevertheless:
1. Decoupling: You can alter the implementation in the Executor without affecting the code that submits the task through execute(Runnable).
2. Scalability: You can scale the number of threads that handle the tasks.
3. Reduced memory references: SimpleExecutor holds no reference to the outer class, as an anonymous inner class does, and hence reduces the memory referenced by the thread.

An example of a more elaborate Executor is shown in Example 9-2.

private static class SerialExecutor implements Executor {
    final ArrayDeque<Runnable> mTasks = new ArrayDeque<Runnable>();
    Runnable mActive;

    public synchronized void execute(final Runnable r) {
        mTasks.offer(new Runnable() {
            public void run() {
                try {
                    r.run();
                } finally {
                    scheduleNext();
                }
            }
        });
        if (mActive == null) {
            scheduleNext();
        }
    }

    protected synchronized void scheduleNext() {
        if ((mActive = mTasks.poll()) != null) {
            THREAD_POOL_EXECUTOR.execute(mActive);
        }
    }
}

It implements a serial task executor. The SerialExecutor implements a producer-consumer pattern, where producer threads create Runnable tasks and place them in a queue, while consumer threads remove and process the tasks off the queue. All tasks are put at the end of the double-ended queue through mTasks.offer(), so the result is a FIFO ordering of the submitted tasks. SerialExecutor constitutes an execution environment that guarantees serial execution with the ability to process tasks on different threads. The most useful Executor implementation is the thread pool.

Thread Pools

A thread pool is the combination of a task queue and a set of worker threads that forms a producer-consumer setup. There are several advantages with thread pools over executing every task on a new thread (thread-per-task pattern):
• The worker threads can be kept alive to wait for new tasks to execute. This means that threads don’t have to be created and destroyed for every task, which compromises performance.
• The thread pool is defined with a maximum number of threads so that the platform isn’t overloaded with background threads—which consume application memory—due to many background tasks.
• The lifecycle of all worker threads is controlled by the thread-pool lifecycle.

Predefined Thread Pools

The predefined Executors thread pools are based on the ThreadPoolExecutor class, which can be used directly to configure the thread pool behavior in detail.

ThreadPoolExecutor executor = new ThreadPoolExecutor(
    int corePoolSize,
    int maximumPoolSize,
    long keepAliveTime,
    TimeUnit unit,
    BlockingQueue<Runnable> workQueue);

Core pool size
The lower limit of threads that are contained in the thread pool.

Maximum pool size
The maximum number of threads that can be executed concurrently.

Maximum idle time (keep-alive time)
Idle threads are kept alive in the thread pool to be prepared for incoming tasks to process, but if the keep-alive time is set, the system can reclaim noncore pool threads. The time is configured in TimeUnit, the unit the time is measured in.

Task queue type
An implementation of BlockingQueue that holds tasks added by the producer until they can be processed by a worker thread.
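For reference, the predefined pools are created through factory methods on the Executors class; a few common ones are sketched below (the pool sizes are arbitrary):

ExecutorService fixed = Executors.newFixedThreadPool(4);       // reuses a fixed number of worker threads
ExecutorService cached = Executors.newCachedThreadPool();      // grows on demand, reclaims idle threads
ExecutorService single = Executors.newSingleThreadExecutor();  // one worker thread, sequential execution
ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(2); // delayed or periodic tasks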

Designing a Thread Pool

It’s good practice to base the thread pool size on the underlying hardware, more exactly the number of available CPUs. Android can retrieve the number of CPUs, referred to as N, from the Runtime class:

int N = Runtime.getRuntime().availableProcessors();

It is not an exact science to find the optimal number of threads, and fortunately you don’t have to be that exact. There exist both theoretical and empirical suggestions for the thread pool size in the literature: for example, N+1 threads for compute-intensive tasks in Java Concurrency in Practice by Brian Goetz et al. (Addison-Wesley), whereas Kirk Pepperdine suggests that a sizing of 2*N threads performs well. One common application behavior is to lower thread priorities so they don’t compete with the UI thread. Worker threads are configured through implementations of the ThreadFactory interface. Thread pools can define properties on the worker threads, such as priority, name, and exception handler.

class LowPriorityThreadFactory implements ThreadFactory {
    private static int count = 1;

    public Thread newThread(Runnable r) {
        Thread t = new Thread(r);
        t.setName("LowPrio " + count++);
        t.setPriority(4);
        t.setUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
            @Override
            public void uncaughtException(Thread t, Throwable e) {
                Log.d(TAG, "Thread = " + t.getName() + ", error = " + e.getMessage());
            }
        });
        return t;
    }
}

Executors.newFixedThreadPool(10, new LowPriorityThreadFactory());

Shutting Down the Thread Pool

The lifecycle is managed and observed through the ExecutorService interface that extends Executor and is implemented by ThreadPoolExecutor. Executors should not process tasks for longer than necessary; doing so can potentially leave a lot of active threads executing in the background for no good reason, holding on to memory that is not eligible for garbage collection. Typically, a fixed-size thread pool can keep a lot of threads alive in the background. Explicit termination is required to make the executor finish. Two methods—with somewhat different impacts—are available:

void shutdown()
List<Runnable> shutdownNow()
Consequently, shutdown() is considered to be a graceful termination of the executor, where both the executing and queued tasks are allowed to finish. shutdownNow() returns the queued tasks to the caller and tries to terminate currently executing tasks through interrupts. Hence, tasks should implement a cancellation policy to make them manageable. Without a cancellation policy, the tasks in the executor will terminate no earlier with shutdownNow() than with shutdown(). !!! Once the thread pool has initiated a shutdown, it cannot be reused for tasks. The application will have to create a new thread pool for subsequent tasks or to execute tasks returned by shutdownNow().
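A sketch of a graceful shutdown that falls back to a forced one after a bounded wait; the 10-second timeout, the TAG constant, and the logging are assumptions:

public void shutdownAndAwaitTermination(ExecutorService executor) {
    executor.shutdown(); // no new tasks are accepted; queued tasks may still run
    try {
        if (!executor.awaitTermination(10, TimeUnit.SECONDS)) {
            // Force termination; currently executing tasks are interrupted.
            List<Runnable> unprocessed = executor.shutdownNow();
            Log.d(TAG, "Discarded " + unprocessed.size() + " queued tasks");
        }
    } catch (InterruptedException e) {
        executor.shutdownNow();
        Thread.currentThread().interrupt();
    }
}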

Task Management

The execution environment running tasks can also be managed. In this section, we will look into the individual tasks and how we manage them.

Task Representation

The Runnable interface has been around since the first version of Java:

public interface Runnable {
    public void run();
}

Callable offers a larger set of functionality: it defines a call method that can return a value—defined as a generic type—and throw an exception:

public interface Callable<V> {
    public V call() throws Exception;
}

A Callable task cannot be directly executed by Thread instances, because Callable was first introduced in Java 5. Instead, the execution environment should be based on ExecutorService implementations—e.g., thread pools—to process the tasks. Once a Callable task is processed by the ExecutorService, it can be observed and controlled through the Future interface, which is available after task submission. The methods provided by Future are:

boolean cancel(boolean mayInterruptIfRunning)
V get()
V get(long timeout, TimeUnit unit)
boolean isCancelled()
boolean isDone()

The result from an asynchronous computation is retrieved with the blocking get methods. A submitted task can be cancelled, in which case the executor tries to avoid executing it. If the task is still in the queue, it will be removed and never executed. If it is currently executing, cancel(false) will not affect it, but cancel(true) interrupts the thread executing the task, and the task can terminate prematurely if it has implemented a cancellation policy.

Submitting Tasks

Before any task is submitted, a thread pool is by default an empty queue without threads. The state of the thread pool and its queue of waiting tasks determine how the pool responds to a new task:
• If the core pool size has not been reached yet, a new thread can be created so the task can start immediately.
• If the core pool size has been reached but the queue has open slots, the task can be added to the queue.
• If the maximum pool size has been reached and the queue is full, the task must be rejected.
There are numerous ways of submitting tasks to a thread pool, both individually and batched. When there are multiple tasks to execute concurrently, they can be submitted one by one with the execute or submit methods. But the platform provides convenience methods that handle common use cases for batched submissions: invokeAll and invokeAny.

Individual submission

ExecutorService executor = Executors.newSingleThreadExecutor();
executor.execute(new Runnable() {
    public void run() {
        doLongRunningOperation();
    }
});

The Executor interface can handle only Runnable tasks, but the ExecutorService extension contains more general methods allowing tasks to be submitted as instances of either Runnable or Callable. Every submitted task is represented by a Future to manage and observe the task, but only a Callable can be used for retrieving a result:

Callable

ExecutorService executor = Executors.newSingleThreadExecutor();
Future<Object> future = executor.submit(new Callable<Object>() {
    public Object call() throws Exception {
        Object object = doLongRunningOperation();
        return object;
    }
});
// Blocking call - returns 'object' from the Callable
Object result = future.get();

Runnable without result

ExecutorService executor = Executors.newSingleThreadExecutor();
Future<?> future = executor.submit(new Runnable() {
    public void run() {
        doLongRunningOperation();
    }
});
// Blocking call - always returns null
Object result = future.get();

invokeAll

ExecutorService.InvokeAll executes mutiple independent tasks concurrently and lets the application wait for all tasks to finish by blocking the calling thread until all asynchronous computations are done or a timeout has expired: List<Future<T>> invokeAll(Collection<? extends Callable<T>> tasks) List<Future<T>> invokeAll(Collection<? extends Callable<T>> tasks, long timeout, TimeUnit unit) Example 9-5 utilizes invokeAll to execute two independent tasks concurrently on worker threads and combine the results when both have finished. It is typically used for retrieving network data from two different locations, where the results are mashed together before being used. public class InvokeActivity extends Activity { private static final String TAG = "InvokeActivity"; private TextView textStatus; public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_invoke); textStatus = (TextView) findViewById(R.id.text_status); } public void onButtonClick(View v) { SimpleExecutor simpleExecutor = new SimpleExecutor(); simpleExecutor.execute(new Runnable() { @Override public void run() { List<Callable<String>> tasks = new ArrayList<Callable<String>>(); tasks.add(new Callable<String>() { @Override public String call() throws Exception { return getFirstPartialDataFromNetwork(); } }); tasks.add(new Callable<String>() { @Override public String call() throws Exception { return getSecondPartialDataFromNetwork(); } }); ThreadPoolExecutor executor = (ThreadPoolExecutor)Executors.newFixedThreadPool(2); try { Log.d(TAG, "invokeAll"); List<Future<String>> futures = executor.invokeAll(tasks); Log.d(TAG, "invokeAll after"); final String mashedData = mashupResult(futures); textStatus.post(new Runnable() { @Override public void run() { textStatus.setText(mashedData); } }); Log.d(TAG, "mashedData = " + mashedData); } catch (InterruptedException e) { e.printStackTrace(); } catch (ExecutionException e) { e.printStackTrace(); } executor.shutdown(); } }); } private String getFirstPartialDataFromNetwork() { Log.d(TAG, "ProgressReportingTask 1 started"); SystemClock.sleep(10000); Log.d(TAG, "ProgressReportingTask 1 done"); return "MockA"; } private String getSecondPartialDataFromNetwork() { Log.d(TAG, "ProgressReportingTask 2 started"); SystemClock.sleep(2000); Log.d(TAG, "ProgressReportingTask 2 done"); return "MockB"; } private String mashupResult(List<Future<String>> futures) throws ExecutionException, InterruptedException { StringBuilder builder = new StringBuilder(); for (Future<String> future : futures) { builder.append(future.get()); } return builder.toString(); } }

invokeAny

ExecutorService.invokeAny adds a collection of tasks to an executor, returns the result from the first finished task, and disregards the rest of the tasks. This can be useful in situations where you are doing a search through many different data sets and want to stop as soon as the item is found, or any similar situation where you need the result from just one of the tasks you are running in parallel.
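A sketch of invokeAny, assuming two hypothetical search methods; the call blocks until the first task completes successfully and cancels the rest:

List<Callable<String>> tasks = new ArrayList<Callable<String>>();
tasks.add(new Callable<String>() {
    @Override
    public String call() throws Exception {
        return searchFirstDataSet();  // hypothetical search
    }
});
tasks.add(new Callable<String>() {
    @Override
    public String call() throws Exception {
        return searchSecondDataSet(); // hypothetical search
    }
});

ExecutorService executor = Executors.newFixedThreadPool(2);
try {
    // Returns the result of the first successfully completed task.
    String result = executor.invokeAny(tasks);
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
} catch (ExecutionException e) {
    // No task completed successfully
}
executor.shutdown();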

Rejecting Tasks

Task addition can fail for two reasons: because both the number of worker threads and the queue are saturated, or because the executor has initiated a shutdown. The application can customize rejection handling by providing an implementation of RejectedExecutionHandler to the thread pool. RejectedExecutionHandler is an interface with a single method that is called upon task rejection:

void rejectedExecution(Runnable r, ThreadPoolExecutor executor)
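A sketch of attaching a custom rejection handler when constructing the pool directly; the pool sizes, queue capacity, and TAG constant are arbitrary assumptions:

ThreadPoolExecutor executor = new ThreadPoolExecutor(
        2, 4, 60, TimeUnit.SECONDS,
        new LinkedBlockingQueue<Runnable>(10),
        new RejectedExecutionHandler() {
            @Override
            public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
                // Called when the queue is full and the maximum pool size is
                // reached, or when the executor has been shut down.
                Log.d(TAG, "Task rejected: " + r);
            }
        });

ThreadPoolExecutor also provides predefined handlers such as AbortPolicy (the default), CallerRunsPolicy, DiscardPolicy, and DiscardOldestPolicy, which can be passed instead of a custom implementation.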

ExecutorCompletionService

A thread pool manages a task queue and the worker threads, but it does not manage the results of finished tasks. That is done by the ExecutorCompletionService. It holds a completion queue (based on a BlockingQueue) of finished tasks, as shown in Figure 9-3. When a task finishes, a Future object is placed in the queue and made available to consumer threads so they can process the results in the order that the tasks have finished.
Displaying multiple downloaded images in an Activity is a common use case. The UI is populated asynchronously, and a downloaded image should be displayed as soon as it is available, independently of the other image downloads. This is a job that fits well with an ExecutorCompletionService, because downloaded images can be put in the completion queue and processed as soon as they are available. public class ECSImageDownloaderActivity extends Activity { private static final String TAG = "ECSImageDownloaderActivity"; private LinearLayout layoutImages; private class ImageDownloadTask implements Callable<Bitmap> { 1 @Override public Bitmap call() throws Exception { return downloadRemoteImage(); } private Bitmap downloadRemoteImage() { SystemClock.sleep((int) (5000f - new Random().nextFloat() * 5000f)); return BitmapFactory.decodeResource(ECSImageDownloaderActivity.this.getResources(), R.drawable.ic_launcher); } } private class DownloadCompletionService extends ExecutorCompletionService { 2 private ExecutorService mExecutor; public DownloadCompletionService(ExecutorService executor) { super(executor); mExecutor = executor; } public void shutdown() { mExecutor.shutdown(); } public boolean isTerminated() { return mExecutor.isTerminated(); } } private class ConsumerThread extends Thread { 3 private DownloadCompletionService mEcs; private ConsumerThread(DownloadCompletionService ecs) { this.mEcs = ecs; } @Override public void run() { super.run(); try { while(!mEcs.isTerminated()) { 4 Future<Bitmap> future = mEcs.poll(1, TimeUnit.SECONDS); 5 if (future != null) { addImage(future.get()); } } } catch (InterruptedException e) { e.printStackTrace(); } catch (ExecutionException e) { e.printStackTrace(); } } } public void onCreate(Bundle savedInstanceState) { 6 super.onCreate(savedInstanceState); setContentView(R.layout.activity_ecs_image_downloader); layoutImages = (LinearLayout) findViewById(R.id.layout_images); DownloadCompletionService ecs = new DownloadCompletionService(Executors.newCachedThreadPool()); new ConsumerThread(ecs).start(); for (int i = 0; i < 5; i++) { ecs.submit(new ImageDownloadTask()); } ecs.shutdown(); } private void addImage(final Bitmap image) { 7 runOnUiThread(new Runnable() { @Override public void run() { ImageView iv = new ImageView(ECSImageDownloaderActivity.this); iv.setImageBitmap(image); layoutImages.addView(iv); } }); } } 1 A Callable instance that represents a task producing a result. It returns a bitmap image when downloaded over a network connection. 2 A ExecutorCompletionService that holds the Executor and exposes lifecycle methods—shutdown and isTerminated—to control the executor. 3 A consumer thread that polls the completion queue for results from finished tasks. 4 If the executor is terminated, all tasks have finished and it is safe to stop polling the completion queue for more tasks. The consumer thread will finish once the executor is terminated. 5 Polling mechanism: the consumer thread waits for one second, in every iteration, for finished tasks. After that it continues execution to check again if the executor has terminated, as described in the previous item. 6 Create the Activity that initiates DownloadCompletionService with a cached thread pool and a ConsumerThread. Five download tasks are submitted. 7 Shut down the executor gently; let the submitted tasks finish before the worker threads terminate.

Summary

The platform’s concrete implementations of Executor— thread pools—provide applications with better thread management and ways to manage concurrency in sophisticated ways.

CHAPTER 10 Tying a Background Task to the UI Thread with AsyncTask

Executing a background task from an Activity, where the UI should be updated before, during, and after the execution, is a great use case for AsyncTask. As the name indicates, an AsyncTask is an asynchronous task that is executed on a background thread. The only method you need to override in the class is doInBackground().

public class FullTask extends AsyncTask<Params, Progress, Result> {
    @Override
    protected void onPreExecute() { ... }

    @Override
    protected Result doInBackground(Params... params) { ... }

    @Override
    protected void onProgressUpdate(Progress... progress) { ... }

    @Override
    protected void onPostExecute(Result result) { ... }

    @Override
    protected void onCancelled(Result result) { ... }
}

Params
Input data to the task executed in the background.

Progress
Progress data reported from the background thread—i.e., from doInBackground—to the UI thread in onProgressUpdate.

Result
The result produced from the background thread and sent to the UI thread.

Creation and Start

The execute method should be called from the UI thread; otherwise, the onPreExecute callback will not occur on the UI thread. An AsyncTask instance is a one-shot task; calling execute more than once throws an IllegalStateException.
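A minimal sketch of an AsyncTask subclass and its start from the UI thread (the type parameters and names are illustrative):

public class SampleTask extends AsyncTask<String, Integer, Boolean> {

    @Override
    protected Boolean doInBackground(String... params) {
        // Long-running operation on a background thread
        for (int i = 0; i < params.length; i++) {
            publishProgress(i); // triggers onProgressUpdate on the UI thread
        }
        return Boolean.TRUE;
    }

    @Override
    protected void onProgressUpdate(Integer... progress) {
        // Invoked on the UI thread
    }

    @Override
    protected void onPostExecute(Boolean result) {
        // Invoked on the UI thread
    }
}

// Created and started from the UI thread:
new SampleTask().execute("first", "second");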

Cancellation

If the UI thread decides not to use the results of an AsyncTask, it can send a termination request through a call to cancel(boolean):

// Start the task
AsyncTask task = new MyAsyncTask().execute(/* Omitted */);
// Cancel the task
task.cancel(true);

If the argument to the call is false, the call merely sets a flag that the background thread can check through isCancelled(). If the argument is true, an interrupt is also sent. When it receives a cancellation request, the task skips the call to onPostExecute and calls one of the cancel callbacks—onCancelled() or onCancelled(Result)—instead.
The strongest checkpoint condition is isCancelled, because it observes the actual call to cancel and not the interruption. Hence, the strongest cancellation policy is to combine checkpoints and interrupt handling, as follows:

public class InterruptionTask extends AsyncTask<String, Void, Void> {
    @Override
    protected Void doInBackground(String... s) {
        try {
            while (!isCancelled()) {
                doLongInterruptibleOperation(s[0]);
            }
        } catch (InterruptedException iex) {
            // Do nothing. Let's just finish.
        }
        return null;
    }
}

States

An AsyncTask passes through the following states, in order: PENDING, RUNNING, and FINISHED.

PENDING
The AsyncTask instance is created, but execute has not been called on it.

RUNNING
Execution has started; i.e., execute is called. The task remains in this state when it finishes, as long as its final method (such as onPostExecute) is still running.

FINISHED
Both the background execution and the optional final operation—onPostExecute or onCancelled—are done.

Backward transitions are not possible, and once the task is in the RUNNING state, it is not possible to start any new executions.

Background Task Execution

Because AsyncTask executes its tasks asynchronously, multiple tasks can be executed either sequentially or concurrently.
executeOnExecutor(Executor, Params…)
Added in API level 11 for configuring the actual execution environment on which the task is processed. It can utilize internal execution environments or use a custom Executor. The Executor argument to executeOnExecutor can be one of the following execution environments:

AsyncTask.THREAD_POOL_EXECUTOR
Tasks are processed concurrently in a pool of threads. In KitKat, the thread pool sizing is based on the number of available CPU cores: N+1 core threads and a maximum of 2*N+1 threads, and the work queue can hold 128 tasks. Hence, a device with four available cores can hold a maximum of 137 tasks.

AsyncTask.SERIAL_EXECUTOR
A sequential task scheduler that ensures thread-safe task execution. It contains no threads of its own, relying instead on THREAD_POOL_EXECUTOR for execution. It stores the tasks in an unbounded queue and passes each one to the THREAD_POOL_EXECUTOR to be executed in sequence. The tasks can be executed on different threads of the thread pool, but the SERIAL_EXECUTOR guarantees that consecutive tasks are not added to the thread pool until the previous task has finished, so thread safety is preserved.

Both execution environments use the AsyncTask worker threads for the doInBackground callbacks. The threads have no Looper attached, so an AsyncTask cannot receive messages from other threads. Furthermore, the worker threads’ priority is lowered to Process.THREAD_PRIORITY_BACKGROUND so that they interfere less with the UI thread.

Application Global Execution

AsyncTask implementations can be defined and executed from any component in the application, and several instances in the RUNNING state can coexist. However, all AsyncTask instances share an application-wide, global execution property (Figure 10-3). That means that even if two different threads launch two different tasks (as in the following example) at the same time, they will be executed sequentially.
It does not matter whether the AsyncTask implementations are executed from an Activity, a Service, or any other part of the application—they still use the same application global execution environment and run sequentially. !!! The application global execution of AsyncTask instances poses a risk that the execution environment gets saturated and that background tasks get delayed, or worse still, not executed at all.

Execution Across Platform Versions

targetSdkVersion < 13
execute keeps concurrent execution, even on platforms with API level 13 or higher.

targetSdkVersion >= 13
execute causes sequential execution on platforms with API level 13 or higher.

Because checking the API level for every execution is tedious, you can define a wrapper class that handles the platform check:

public class ConcurrentAsyncTask {
    public static void execute(AsyncTask as) {
        if (Build.VERSION.SDK_INT < Build.VERSION_CODES.HONEYCOMB_MR2) {
            as.execute(...);
        } else {
            as.executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR, ...);
        }
    }
}

The caller passes the AsyncTask it wants executed to the wrapper class:

ConcurrentAsyncTask.execute(new MyAsyncTask());

Custom Execution

To circumvent the application global execution, tasks should be processed on a custom Executor:

new MyAsyncTask().executeOnExecutor(myCustomExecutor, params);

The custom executor replaces the global execution environment of the AsyncTask.

Example: Nonglobal sequential execution

Sequential execution that is shared globally in an application may cause unexpected execution delays if a task from one component has to wait for a task from another component to finish. Hence, to utilize sequential execution—but avoid the application global behavior—a custom executor should be shared between the tasks.

public class EatApplication extends Application {
    private Executor customSequentialExecutor;

    public Executor getCustomSequentialExecutor() {
        if (customSequentialExecutor == null) {
            customSequentialExecutor = Executors.newSingleThreadExecutor();
        }
        return customSequentialExecutor;
    }
}

public class MyActivity extends Activity {
    private void executeTaskSequentially() {
        new MyActivityAsyncTask().executeOnExecutor(
            ((EatApplication) getApplication()).getCustomSequentialExecutor());
    }
}

public class MyService extends Service {
    private void executeTaskSequentially() {
        new MyServiceAsyncTask().executeOnExecutor(
            ((EatApplication) getApplication()).getCustomSequentialExecutor());
    }
}

AsyncTask Alternatives

As we have seen in this chapter, AsyncTask has a couple of concerns you need to consider:
• Because AsyncTask has a global execution environment, the more tasks you execute with AsyncTask, the higher the risk that tasks will not be processed as expected, because other tasks in the application hold the execution environment.
• Inconsistency in execution environments across platform versions makes it more difficult to optimize execution either for performance (concurrent execution) or for thread safety (sequential execution).
AsyncTask is often overused in applications due to its simplicity. It is not a silver-bullet solution for asynchronous execution on Android. For many use cases, you should look into alternative techniques, for reasons of architecture, program design, or just because they are less error prone.

CHAPTER 11 Services

Two risks are inherent in using regular threads instead of services for background operation:

Decoupled lifecycles of components and threads
The thread lifecycle is independent of the Android components and their underlying Java object lifecycles. A thread continues to run until the task either finishes or the process is killed, even after the component that started the thread has finished. Threads may keep references to Java objects so that they cannot be garbage collected until the thread terminates.

Lifecycles of the hosting processes
If the runtime terminates the process, all of its threads are terminated. Thus, background tasks are terminated and not restarted by default when the process is restored. A process with no active components has a low ranking and is likely to be eligible for termination. This may cause unexpected termination of background tasks that should be allowed to finish. For example, an Activity that stores user data to a database in a background thread while the user navigates back leaves an empty process if there are no other components running. This increases the risk of process termination, aborting the background thread before it can persist the data.

A Service can mitigate both the risk of memory leaks and the risk of having tasks terminated prematurely. A Service has a lifecycle that can be controlled from background threads: the Service component can be active while the background thread runs and be destroyed when the thread finishes, which enables better lifecycle control.
As Figure 11-1 illustrates, the BroadcastReceiver and Activity lifecycles are decoupled from the background thread’s execution, whereas the Service lifecycle can end when the background task is done. To offload background execution, a BroadcastReceiver or Activity should start a Service that then starts a thread, as shown in Figure 11-2.

Local, Remote, and Global Services

Local service
The Service runs in the same process as the invoking component; i.e., the components run on the same UI thread and share the same heap memory area. Hence, the Service can share Java objects with clients so that the shared objects run on the calling thread in the client.

Private remote service
The Service runs in a remote process but is accessible only to client components that belong to the application. The remote process has its own UI thread.

Global remote service
The Service is exposed to other applications. It has the same properties as the private remote service, with its own UI thread, heap memory, and execution on binder threads, but it cannot be referred to by the Service class name because that is not known to external applications. Instead, external access is provided through intent filters.

Creation and Execution

Services are defined as extensions of the Service class and must be registered in the AndroidManifest.xml file:

<service android:name="com.wifill.eat.EatService"/>

A Service running in a private remote process has an android:process attribute value that starts with a colon (“:”):

<service
    android:name="com.wifill.eat.EatService"
    android:process=":com.wifill.eat.PrivateProcess"/>

Execution in a global remote process—accessible from other applications with the right permissions—is defined by leading off the process name with a capital letter:

<service
    android:name="com.wifill.eat.EatService"
    android:process="Com.wifill.eat.PrivateProcess">
    <intent-filter>
        <action android:name="..." />
        <category android:name="..." />
    </intent-filter>
</service>

Lifecycle

A Service component is active between the callbacks to onCreate and onDestroy—both are called once per lifecycle—where the implementation can initialize and clean up data, respectively:

public class EatService extends Service {
    @Override
    public void onCreate() { /* Initialize component */ }

    @Override
    public void onDestroy() { /* Clean up used resources */ }

    @Override
    public IBinder onBind(Intent intent) { /* Return communication interface */ }
}

There are two types of services:

Started Service
Created by the first start request and destroyed by the first stop request. In between, start requests only pass data to the Service.

Bound Service
Created when the first component binds to the Service and destroyed when all components have unbound from it. In other words, a bound Service lifecycle is based on the number of binding components: as long as at least one component is bound to the Service, it stays active.
If one component starts the Service and others bind to it, the Service remains active until both termination conditions are met: it will not be terminated until it is explicitly stopped and all components have unbound from it.

Started Service

Components invoke Context.startService(Intent) to send start requests to a Service, which can be invoked by multiple components and multiple times from every component during a lifecycle. The first start request creates and starts the Service, whereas consecutive start requests just pass on the Intent to the started Service so that the data conveyed in the Intent can be processed. Started services must implement an onStartCommand method that handles start requests. The method is invoked each time a start request (Context.startService) from a client component is ready to be processed. Start requests are delivered sequentially to onStartCommand and remain pending in the runtime until preceding start requests have been processed or offloaded from the UI thread. In spite of the sequential processing of start requests, calls to startService do not block, even if they have to wait to be processed in the Service. A start request supplies an Intent that conveys data to be processed in the Service. onStartCommand is executed on the UI thread, so you should spawn background threads within the method to execute long-running operations, not only to preserve responsiveness but also to enable concurrent execution of multiple start requests. The sequential processing of onStartCommand on the UI thread guarantees thread safety; no synchronization is required unless the tasks are processed concurrently on background threads spawned from the UI thread.
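A minimal started Service sketch along these lines, where onStartCommand offloads the work to a background thread and returns immediately (the task method is hypothetical):

public class SampleStartedService extends Service {

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        // onStartCommand runs on the UI thread; offload the work.
        new Thread(new Runnable() {
            @Override
            public void run() {
                doLongRunningOperation(); // hypothetical task
            }
        }).start();
        return START_NOT_STICKY;
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null; // started service only
    }
}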

Options for Restarting

Like any Android application, your Service may be terminated by the runtime if there are too many processes running on the device. In fact, as a background process, your Service has a greater chance of being killed than many other processes. The return value of onStartCommand and the second argument (the delivery method flag) let you control what happens after your Service is terminated. Here are sample use cases for the return value: Restart a single background task In this scenario, you want the Service to execute a background task that should be controlled from the client components. The Service should not finish until one component has invoked stopService. Return START_STICKY so that the Service is always restarted and the background task can be restarted. Ignore background tasks In this scenario, the Service executes a task that should not be resumed after process termination. An example might be a periodic task that can wait until its next scheduled execution. Return START_NOT_STICKY so that the Service will not be automatically restarted. Restart unfinished background tasks In this scenario, the Service executes background tasks that you want to be resumed. Return START_REDELIVER_INTENT so that all the Intents are redelivered, and the tasks can be restarted on new threads. Furthermore, the tasks can be configured with the same data from the Intent as the original task. The following example contains a user-controlled BluetoothService to set up and cancel the pairing. When the BluetoothService is started—i.e., onStartCommand is called—it initiates a Thread to handle the pairing and keeps a state variable—mListening —to ensure that only one pairing operation is active. Consequently, only one thread at the time will be alive. public class BluetoothService extends Service { private static final String TAG = "BluetoothService"; public static final String COMMAND_KEY = "command_key"; public static final String COMMAND_START_LISTENING = "command_start_discovery"; private static final UUID MY_UUID = new UUID(323476234, 34587387); private static final String SDP_NAME = "custom_sdp_name"; private BluetoothAdapter mAdapter; private BluetoothServerSocket mServerSocket; private boolean mListening = false; private Thread listeningThread; public IBinder onBind(Intent intent) { return null; } @Override public void onCreate() { super.onCreate(); mAdapter = BluetoothAdapter.getDefaultAdapter(); } @Override public int onStartCommand(Intent intent, int flags, int startId) { if (mAdapter != null) { if (intent.getStringExtra(COMMAND_KEY).equals(COMMAND_START_LISTENING) && mListening == false) { 1 startListening(); } } return START_REDELIVER_INTENT; 2 } private void startListening() { mListening = true; listeningThread = new Thread(new Runnable() { @Override public void run() { BluetoothSocket socket = null; try { mServerSocket = mAdapter.listenUsingInsecureRfcommWithServiceRecord(SDP_NAME, MY_UUID); socket = mServerSocket.accept(); 3 if (socket != null) { // Handle BT connection } } catch (IOException e) { Log.d(TAG, "Server socket closed"); } } }); listeningThread.start(); } private void stopListening() { mListening = false; try { if (mServerSocket != null) { mServerSocket.close(); 4 } } catch (IOException e) { e.printStackTrace(); } } @Override public void onDestroy() { super.onDestroy(); stopListening(); 5 } } 1 Control the number of threads so that only one pairing at the time can be done. 2 If the process is shut down, the Intent will be redelivered so that we can resume the pairing. 
3 Blocking call. 4 Release the blocking call. 5 The Service is destroyed and stops the pairing so that the background thread can finish. The BluetoothActivity controls the Service lifecycle with a start command that initiates the pairing and then stops the pairing by destroying the BluetoothService that manages the background thread: public class BluetoothActivity extends Activity { public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_bluetooth); } public void onStartListening(View v) { Intent intent = new Intent(this, BluetoothService.class); intent.putExtra(BluetoothService.COMMAND_KEY, BluetoothService.COMMAND_START_LISTENING); startService(intent); } public void onStopListening(View v) { Intent intent = new Intent(this, BluetoothService.class); stopService(intent); } }
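The BluetoothService above illustrates the START_REDELIVER_INTENT case. As a minimal sketch of the "ignore background tasks" case, here is a hypothetical PeriodicCleanupService (not part of the book's example code) that simply returns START_NOT_STICKY and lets the next scheduled execution pick up any work lost if the process is killed:

import android.app.Service;
import android.content.Intent;
import android.os.IBinder;

public class PeriodicCleanupService extends Service {

    @Override
    public IBinder onBind(Intent intent) {
        return null; // Started Service only; no binding.
    }

    @Override
    public int onStartCommand(Intent intent, int flags, final int startId) {
        new Thread(new Runnable() {
            @Override
            public void run() {
                // Do the periodic work here, then stop this start request.
                stopSelf(startId);
            }
        }).start();
        // If the process is killed, the Service is not restarted; the work
        // simply waits for its next scheduled execution.
        return START_NOT_STICKY;
    }
}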

Task-Controlled Service

Task-controlled services are typically used to let background threads finish their execution with a reduced risk of being stopped by process termination. When the Service stops itself with stopSelf, control over the component's lifecycle lies with the task being processed; in other words, the task determines when the component is destroyed. The lifetime of the background thread thereby determines the lifetime of the Service, so the component stays active for as long as the task is running, which raises the chances of the process being kept alive. Hence, task-controlled services let applications use a Service for long-running operations on background threads.
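As a sketch of this pattern, the variant below relies on stopSelf(int startId) instead of a manually maintained command counter: the Service stops only if the given startId belongs to the most recent start request, so pending requests keep the component alive. The class and member names are illustrative and not taken from the book's examples.

import android.app.Service;
import android.content.Intent;
import android.os.IBinder;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TaskControlledService extends Service {

    private ExecutorService mExecutor;

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }

    @Override
    public void onCreate() {
        super.onCreate();
        mExecutor = Executors.newSingleThreadExecutor();
    }

    @Override
    public int onStartCommand(Intent intent, int flags, final int startId) {
        mExecutor.submit(new Runnable() {
            @Override
            public void run() {
                // Execute the background task here.
                // stopSelf(startId) stops the Service only if startId is the
                // most recent start request, so earlier tasks cannot stop the
                // component while later requests are still pending.
                stopSelf(startId);
            }
        });
        return START_REDELIVER_INTENT;
    }

    @Override
    public void onDestroy() {
        super.onDestroy();
        mExecutor.shutdownNow();
    }
}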

Example: Concurrent download

This example illustrates the use of a concurrent file download executor running in a Service. The time it takes to download a file over the network is nondeterministic—and often very long—because it depends on both the file size and the network connection quality. Therefore, we want to avoid downloading the files from an Activity, because the user can navigate away from the application during the download, leaving an empty process that can be terminated by the runtime before the download has finished. Instead, we let the download be handled by a task-controlled Service that is independent of user navigation and reduces the probability that the runtime terminates the process during the download.

<service android:name=".DownloadService">
    <intent-filter>
        <action android:name="com.wifill.eat.ACTION_DOWNLOAD" />
        <data android:scheme="http"/>
    </intent-filter>
</service>

The file download can be triggered from any application component by issuing an Intent with an ACTION_DOWNLOAD action:

public class DownloadActivity extends Activity {
    String mUrl = ...; // url details omitted

    public void onStartDownload(View v) {
        Intent intent = new Intent("com.wifill.eat.ACTION_DOWNLOAD");
        intent.setData(Uri.parse(mUrl));
        startService(intent);
    }
}

The DownloadService is defined to be started and not bound to, and it stops once all the start requests have been processed.

public class DownloadService extends Service {
    private ExecutorService mDownloadExecutor;
    private int mCommandCount; 1

    public IBinder onBind(Intent intent) { 2
        return null;
    }

    @Override
    public void onCreate() {
        super.onCreate();
        mDownloadExecutor = Executors.newFixedThreadPool(4);
    }

    @Override
    public void onDestroy() {
        super.onDestroy();
        mDownloadExecutor.shutdownNow();
    }

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        synchronized (this) {
            mCommandCount++;
        }
        if (intent != null) {
            downloadFile(intent.getData());
        }
        return START_REDELIVER_INTENT; 3
    }

    private void downloadFile(final Uri uri) {
        mDownloadExecutor.submit(new Runnable() {
            @Override
            public void run() {
                // Simulate long file download
                SystemClock.sleep(10000);
                synchronized (DownloadService.this) { 4
                    if (--mCommandCount <= 0) {
                        stopSelf();
                    }
                }
            }
        });
    }
}

1 Track the number of ongoing file downloads. The variable is incremented when a start request is received and decremented when the background task is finished.
2 The interface requires an onBind method to be implemented, but because this is a started—and not a bound—Service, we do not have to return an IBinder implementation; instead return null.
3 If the process is terminated by the runtime, we want the Service to be restarted with the intents from the start requests that hold the URLs to download so that the downloads can be resumed. Hence, we return START_REDELIVER_INTENT.
4 When a download has finished, the task checks whether it is time to stop the Service. If it is the last start request, indicated when mCommandCount <= 0, we stop the Service. The lock is taken on the Service instance (DownloadService.this) so that the counter is guarded by the same lock as in onStartCommand.

Bound Service

A bound Service defines a communication interface: a set of methods that the Service implements and executes in the Service process. Client components bind to the Service through Context.bindService, and multiple client components can be bound—and invoke methods in it—simultaneously. The runtime keeps a reference count of the bound components, and when the count drops to zero, the Service is destroyed. A client component can terminate a binding explicitly with Context.unbindService, but the binding is also terminated by the runtime if the client component's lifecycle ends.

public class EatService extends Service {
    @Override
    public IBinder onBind(Intent intent) { /* Return communication interface */ }

    @Override
    public boolean onUnbind(Intent intent) { /* Last component has unbound */ }
}

onBind
Called the first time a client binds to the Service through Context.bindService. The invocation supplies an Intent, and the method returns an IBinder implementation that the client can use to interact with the Service.
onUnbind
Called when all bindings are unbound.

A bound Service returns an IBinder implementation from onBind that the client uses as its communication channel.

Local Binding

Local binding to a Service is the most common type. public class BoundLocalService2 extends Service { private final ServiceBinder mBinder = new ServiceBinder(); private final TaskExecutor executor = new TaskExecutor(); public interface OperationListener { 1 public void onOperationDone(int i); } public IBinder onBind(Intent intent) { return mBinder; } @Override public void onCreate() { super.onCreate(); } public class ServiceBinder extends Binder { public BoundLocalService2 getService() { return BoundLocalService2.this; } } public int doLongSyncOperation() { return longOperation(); } public void doLongAsyncOperation(final OperationListener listener) { 2 executor.execute(new Runnable() { @Override public void run() { int result = longOperation(); listener.onOperationDone(result); 3 } }); } private int longOperation() { SystemClock.sleep(10000); return 42; } public class TaskExecutor implements Executor { @Override public void execute(Runnable runnable) { new Thread(runnable).start(); } } } 1 Callback listener to report the result to the client. 2 Communication interface published to the client. 3 The thread holds a reference to the listener that is defined in the binding client component. This poses a risk for a memory leak of the object tree that the listener references in the client. A BoundLocalActivity, which invokes BoundLocalService and its executor, defines an implementation of the OperationListener to retrieve the result from the background execution and use it on the UI thread, which is a common use case. In the BoundLocalService, the background thread references the listener; anything referenced in the listener cannot be garbage collected while the thread is running. Hence, our BoundLocalActivity defines a static inner class ServiceListener with weak references to the Activity: public class BoundLocalActivity2 extends Activity { private TextView tvStatus; private LocalServiceConnection mLocalServiceConnection = new LocalServiceConnection(); private boolean mIsBound; private BoundLocalService2 mBoundLocalService; private static class ServiceListener implements BoundLocalService2.OperationListener { private WeakReference<BoundLocalActivity2> mWeakActivity; public ServiceListener(BoundLocalActivity2 activity) { this.mWeakActivity = new WeakReference<BoundLocalActivity2>(activity); } @Override public void onOperationDone(final int someResult) { final BoundLocalActivity2 localReferenceActivity = mWeakActivity.get(); if (localReferenceActivity != null) { localReferenceActivity.runOnUiThread(new Runnable(){ @Override public void run() { localReferenceActivity.tvStatus.setText(Integer.toString(someResult)); } }); } } } public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_bound_local_service_sync_client); tvStatus = (TextView) findViewById(R.id.text_status); bindService(new Intent(BoundLocalActivity2.this, BoundLocalService2.class), mLocalServiceConnection, Service.BIND_AUTO_CREATE); mIsBound = true; } @Override protected void onDestroy() { super.onDestroy(); if (mIsBound) { try { unbindService(mLocalServiceConnection); mIsBound = false; } catch (IllegalArgumentException e) { // No bound service } } } public void onClickExecuteOnClientUIThread(View v) { if (mBoundLocalService != null) { mBoundLocalService.doLongAsyncOperation(new ServiceListener(this)); } } private class LocalServiceConnection implements ServiceConnection { @Override public void onServiceConnected(ComponentName componentName, IBinder iBinder) { 
mBoundLocalService = ((BoundLocalService2.ServiceBinder)iBinder).getService(); } @Override public void onServiceDisconnected(ComponentName componentName) { mBoundLocalService = null; } } }

Choosing an Asynchronous Technique

Asynchronous task execution in Services can be sequential or concurrent and can utilize any of the techniques described in this book. However, for two use cases, alternatives should be considered: Sequential execution for a task-controlled service If the tasks would be executed sequentially anyway—e.g., with a HandlerThread, Executors.newSingleThreadExecutor, or a customized Executor—it is better to use an IntentService, because it has built-in support for sequential task execution and terminates the Service when there are no more tasks to execute. AsyncTask with a global executor in the local process AsyncTask risks delaying tasks, because its default executor is global to the process and is therefore shared with tasks submitted from other components.

Summary

A Service has no user interface, but its lifecycle callbacks still execute on the UI thread; its value lies in hosting tasks that are offloaded to background threads and decoupled from user navigation.

CHAPTER 12 IntentService

The IntentService executes tasks on a single background thread—i.e., all tasks are executed sequentially. Users of the IntentService trigger the asynchronous execution by passing an Intent with Context.startService. If the IntentService is running, the Intent is queued until the background thread is ready to process it. If the IntentService is not running, a new component lifecycle is initiated and finishes when there are no more Intents to process. Hence, the IntentService runs only while there are tasks to execute. The background task executor in the IntentService is a HandlerThread. Unlike the default executor in AsyncTask, the IntentService executor is per instance and not per application. So an application can have multiple IntentService instances, where every instance executes tasks sequentially but independently of other IntentService instances. IntentService subclasses only have to implement the onHandleIntent method, as the following SimpleIntentService shows:

public class SimpleIntentService extends IntentService {

    public SimpleIntentService() {
        super(SimpleIntentService.class.getName());
        // IntentService shall be restored if the process is killed.
        setIntentRedelivery(true);
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        // Called on a background thread
    }
}

Clients that want to use the IntentService create a start request with Context.startService and pass an Intent with data that the service should handle:

public class SimpleActivity extends Activity {

    public void onButtonClick(View v) {
        Intent intent = new Intent(this, SimpleIntentService.class);
        intent.setData(Uri.parse(postUrl));
        intent.putExtra("data", data);
        startService(intent);
    }
}

There is no need to stop the IntentService with stopSelf, because that is done internally.

Asynchronous Execution in BroadcastReceiver

A BroadcastReceiver is an application entry point—i.e., it can be the first Android component to be started in the process. The start can be triggered from other applications or system services. Either way, the BroadcastReceiver receives an Intent in the onReceive callback, which is invoked on the UI thread. Hence, asynchronous execution is required if any long-running operation is to be executed. However, the BroadcastReceiver component is active only during the execution of onReceive. Thus, an asynchronous task may be left executing after the component is destroyed—leaving the process empty if the BroadcastReceiver was the entry point—which potentially makes the runtime kill the process before the task is finished. The result of the task is then lost. To circumvent the problem of an empty process, the IntentService is an ideal candidate for asynchronous execution from a BroadcastReceiver. Once a start request is sent from the BroadcastReceiver, it is not a problem that onReceive finishes, because a new component is active during the background execution.
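A minimal sketch of this handoff, reusing the SimpleIntentService from the previous section (the receiver class name is illustrative): the receiver forwards the relevant data as a start request and returns immediately.

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

public class HandoffReceiver extends BroadcastReceiver {

    @Override
    public void onReceive(Context context, Intent intent) {
        // Copy the data of interest into a new start request and let the
        // IntentService process it on its background thread.
        Intent work = new Intent(context, SimpleIntentService.class);
        work.setData(intent.getData());
        context.startService(work);
        // onReceive can return immediately; the IntentService keeps the
        // process alive while the task is being executed.
    }
}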

Prolonged Lifetime with goAsync

As of API level 11, the BroadcastReceiver.goAsync() method is available to simplify asynchronous execution. It keeps the state of the asynchronous result in a BroadcastReceiver.PendingResult and extends the lifetime of the broadcast until the BroadcastReceiver.PendingResult is explicitly terminated with finish, which can be called after the asynchronous execution is done. BroadcastReceiver is kept alive until the PendingResult is finished: public class AsyncReceiver extends BroadcastReceiver { public void onReceive(Context context, Intent intent) { final PendingResult result = goAsync(); new Thread() { public void run() { // Do background work result.finish(); } }.start(); } }

IntentService Versus Service

The IntentService is appealingly simple to use. However, the simplicity comes with limitations, and a Service may be preferred in the following cases: Control by clients When you want the lifecycle of the component to be controlled by other components, choose a user-controlled Service. Concurrent task execution To execute tasks concurrently, the Service has to start multiple threads, for example through an Executor, which the single background thread of the IntentService cannot offer. Sequential and rearrangeable tasks Tasks can be prioritized so that the task queue can be bypassed. For example, a music service that is controlled by buttons—play, pause, rewind, fast forward, stop, etc.—would typically prioritize a stop request so that it is executed prior to any other tasks in the queue. This requires a Service, as the sketch below illustrates.
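A minimal sketch of such a prioritized command queue, assuming a hypothetical PlaybackService with illustrative command constants: commands are posted to a HandlerThread, and a stop command is pushed to the front of the queue with sendMessageAtFrontOfQueue.

import android.app.Service;
import android.content.Intent;
import android.os.Handler;
import android.os.HandlerThread;
import android.os.IBinder;
import android.os.Message;

public class PlaybackService extends Service {

    public static final String EXTRA_COMMAND = "command";
    public static final int COMMAND_PLAY = 0;
    public static final int COMMAND_STOP = 1;

    private HandlerThread mWorkerThread;
    private Handler mWorkerHandler;

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }

    @Override
    public void onCreate() {
        super.onCreate();
        mWorkerThread = new HandlerThread("PlaybackWorker");
        mWorkerThread.start();
        mWorkerHandler = new Handler(mWorkerThread.getLooper()) {
            @Override
            public void handleMessage(Message msg) {
                // Execute the command on the background thread.
            }
        };
    }

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        if (intent != null) {
            int command = intent.getIntExtra(EXTRA_COMMAND, COMMAND_PLAY);
            Message msg = mWorkerHandler.obtainMessage(command);
            if (command == COMMAND_STOP) {
                // Bypass queued commands so that stop is handled first.
                mWorkerHandler.sendMessageAtFrontOfQueue(msg);
            } else {
                mWorkerHandler.sendMessage(msg);
            }
        }
        return START_NOT_STICKY;
    }

    @Override
    public void onDestroy() {
        super.onDestroy();
        mWorkerThread.quit();
    }
}

sendMessageAtFrontOfQueue places the stop message before any queued play commands, which is exactly the kind of reordering an IntentService queue cannot do.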

Summary

The IntentService is an easy-to-use, sequential task processor that is very useful for offloading operations not only from the UI thread, but also from other originating components. Other sequential task processors discussed in this book, such as HandlerThread, Executors.newSingleThreadExecutor, and to some extent AsyncTask, can be compared to the IntentService, but the IntentService has the advantage of being an independent component, which the others are not.

CHAPTER 13 Access ContentProviders with AsyncQueryHandler

AsyncQueryHandler is a utility class that specializes in handling CRUD (Create, Read, Update, and Delete) operations on a ContentProvider asynchronously. The operations are executed on a separate thread, and when the result is available, callbacks are invoked on the initiating thread.

Brief Introduction to ContentProvider

A ContentProvider is an abstraction of a data source that can be accessed uniformly within the application or from other applications running in separate processes. The ContentProvider exposes an interface where data can be read, added, changed, or deleted through a database-centric CRUD approach with four access methods. The provider is defined in the AndroidManifest, with an authority that identifies it:

<provider
    android:name="EatContentProvider"
    android:authorities="com.eat.provider"
    android:exported="true" />

The access methods can be invoked through the ContentResolver class, which identifies the ContentProvider through a unique Uri defined with a syntax like content://com.eat.provider/resource. The ContentResolver contains the same range of data access methods as the provider: query, insert, delete, and update. When called, these methods invoke the corresponding provider methods. For example, EatContentProvider.query(…) is invoked when the query method of a resolver with the correct Uri is called:

public final static Uri CONTENT_URI = Uri.parse("content://com.eat.provider/resource");

ContentResolver cr = getContentResolver();
Cursor c = cr.query(CONTENT_URI, null, null, null, null);

Justification for Background Processing of a ContentProvider

A ContentProvider cannot control how many clients will access the data or whether they will do so simultaneously. The encapsulated data of a provider can be accessed concurrently from multiple threads, which can both read and write to the data set. Consequently, concurrent access to a provider can lead to data inconsistencies unless the provider is thread safe. Thread safety can be achieved by applying synchronization to the query, insert, update, and delete data access methods, but it is required only if the data source needs it. SQLite database access, for example, is thread safe in itself because the transaction model of the database is sequential, so the data cannot be corrupted by concurrent access. Access to a ContentProvider commonly involves interaction with persistent storage—a database or a file—so it should not be executed on the UI thread, because it may become a long task that delays UI rendering. The provider implementation is invoked on the same thread as the caller of the ContentResolver if the call originates from a component in the same application process. If, however, the ContentProvider is called from another process, the provider implementation is invoked on binder threads instead. Either way, providers should not be accessed directly from the UI thread; an asynchronous mechanism is required, where execution is processed on a background thread and the result is communicated back to the UI thread.
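As an illustration of synchronizing the access methods, the following sketch wraps a data source that is not inherently thread safe, in this case a plain in-memory list. The provider class and column name are illustrative and not part of the book's examples; an SQLite-backed provider would not need this extra locking.

import android.content.ContentProvider;
import android.content.ContentValues;
import android.database.Cursor;
import android.database.MatrixCursor;
import android.net.Uri;
import java.util.ArrayList;
import java.util.List;

public class InMemoryProvider extends ContentProvider {

    private final List<String> mData = new ArrayList<String>();

    @Override
    public boolean onCreate() {
        return true;
    }

    @Override
    public synchronized Cursor query(Uri uri, String[] projection, String selection,
            String[] selectionArgs, String sortOrder) {
        // Copy the current data set into a cursor while holding the lock.
        MatrixCursor cursor = new MatrixCursor(new String[] { "value" });
        for (String value : mData) {
            cursor.addRow(new Object[] { value });
        }
        return cursor;
    }

    @Override
    public synchronized Uri insert(Uri uri, ContentValues values) {
        mData.add(values.getAsString("value"));
        return uri;
    }

    @Override
    public synchronized int delete(Uri uri, String selection, String[] selectionArgs) {
        // Selection handling omitted in this sketch; everything is removed.
        int size = mData.size();
        mData.clear();
        return size;
    }

    @Override
    public synchronized int update(Uri uri, ContentValues values, String selection,
            String[] selectionArgs) {
        return 0; // Updates omitted in this sketch.
    }

    @Override
    public String getType(Uri uri) {
        return null;
    }
}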

Using the AsyncQueryHandler

AsyncQueryHandler is an abstract class that simplifies asynchronous access to ContentProviders by handling the ContentResolver, the background execution, and the message passing between threads. The AsyncQueryHandler contains four methods that wrap the provider operations of a ContentResolver:

final void startDelete(int token, Object cookie, Uri uri, String selection, String[] selectionArgs)
final void startInsert(int token, Object cookie, Uri uri, ContentValues initialValues)
final void startQuery(int token, Object cookie, Uri uri, String[] projection, String selection, String[] selectionArgs, String orderBy)
final void startUpdate(int token, Object cookie, Uri uri, ContentValues values, String selection, String[] selectionArgs)

Each method wraps the equivalent ContentResolver method and executes the request on a background thread. When the provider operation finishes, it reports the result back to the AsyncQueryHandler, which invokes the following callbacks that the implementation should override:

public class EatAsyncQueryHandler extends AsyncQueryHandler {

    public EatAsyncQueryHandler(ContentResolver cr) {
        super(cr);
    }

    @Override
    protected void onDeleteComplete(int token, Object cookie, int result) { ... }

    @Override
    protected void onUpdateComplete(int token, Object cookie, int result) { ... }

    @Override
    protected void onInsertComplete(int token, Object cookie, Uri result) { ... }

    @Override
    protected void onQueryComplete(int token, Object cookie, Cursor result) { ... }
}

The first two arguments of the calls and callbacks are used as follows:

Token
Request type, which defines the kind of requests that can be made. It also identifies the requests so that unprocessed requests can be cancelled. Thus, if the caller issues cancelOperation(token), unprocessed requests that were submitted with that token will not start processing. However, the cancellation will not affect requests that have already started.
Cookie
Request identifier and data container of any object type.

The AsyncQueryHandler can be created, and can invoke provider operations, on any thread, but it is most commonly used on the UI thread. The callbacks are, however, always called on the thread that created the AsyncQueryHandler. !!! AsyncQueryHandler cannot be used for asynchronous interaction with the SQLite database directly. Instead, the database should be wrapped in a ContentProvider that can be accessed through a ContentResolver.
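A minimal usage sketch, assuming the EatAsyncQueryHandler above and the content Uri from the earlier provider example (the Activity name and token constant are illustrative): the query is submitted from the UI thread, and the resulting Cursor arrives in onQueryComplete on the same thread.

import android.app.Activity;
import android.net.Uri;
import android.os.Bundle;

public class EatListActivity extends Activity {

    private static final int TOKEN_QUERY_ALL = 1;
    private static final Uri CONTENT_URI =
            Uri.parse("content://com.eat.provider/resource");

    private EatAsyncQueryHandler mQueryHandler;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        mQueryHandler = new EatAsyncQueryHandler(getContentResolver());
        // The token identifies the request; the cookie (null here) can carry
        // any request data and is passed back to onQueryComplete.
        mQueryHandler.startQuery(TOKEN_QUERY_ALL, null, CONTENT_URI,
                null, null, null, null);
    }
}

In onQueryComplete, the token can be checked to tell different request types apart before the Cursor is used to update the UI.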

Limitations

The simplicity of an AsyncQueryHandler is an advantage, but it has been around since API level 1 without being updated for later additions to the Android platform. Hence, some newer functions require more general asynchronous handling, using one of the techniques discussed previously in this book: Batch operations API level 5 added ContentProviderOperation to support batch operations on providers—a set of operations (e.g., insertions) that can be executed atomically in one transaction, avoiding multiple transactions for a larger data set. CancellationSignal API level 16 added the possibility of cancelling ContentResolver queries with the help of a CancellationSignal, but this is not supported by the AsyncQueryHandler, where cancellation is still limited to cancelOperation(token). A batch operation, for example, has to be executed with other means, as the sketch below shows.
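A minimal sketch of a batch insert that bypasses the AsyncQueryHandler, assumed to run inside an Activity method with a list of values to insert; the authority, Uri constant, and column name are illustrative:

final ArrayList<ContentProviderOperation> operations =
        new ArrayList<ContentProviderOperation>();
for (String value : values) {
    operations.add(ContentProviderOperation
            .newInsert(CONTENT_URI)
            .withValue("value", value)
            .build());
}
// Apply all operations in one transaction, off the UI thread.
new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            getContentResolver().applyBatch("com.eat.provider", operations);
        } catch (RemoteException e) {
            e.printStackTrace();
        } catch (OperationApplicationException e) {
            e.printStackTrace();
        }
    }
}).start();

Note that ContentResolver.applyBatch declares checked exceptions, so the worker thread has to handle RemoteException and OperationApplicationException itself.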

CHAPTER 14 Automatic Background Execution with Loaders

The Loader framework offers a robust way to run asynchronous operations with content providers or other data sources. The framework can load data asynchronously and deliver it to your application when content changes or is added to the data source. The Loader framework was added to the Android platform in Honeycomb (API level 11), along with the compatibility package. You can connect to the Loader framework from an Activity or a Fragment. Some of the features offered by the Loader framework are: Asynchronous data management The loader reacts in the background to the data source and triggers a callback in your app when the data source has new data. Lifecycle management When your Activity or Fragment stops, its loader stops as well. Furthermore, loaders that are running in the background continue their work across configuration changes, such as an orientation change. Cached data If the result of an asynchronous data load can't be delivered, it is cached so that it can be delivered when a recipient is ready—e.g., when an Activity is recreated after a configuration change. Leak protection If an Activity undergoes a configuration change, the Loader framework ensures that the Context object is not leaked; the framework operates only on the Application context, so major thread-related leaks don't occur. !!! As we have seen, loaders that are running when the Activity undergoes a configuration change are kept alive so they can run again with a new Activity. Because the loaders are preserved, a loader that holds references to the old Activity, Fragment, or View objects can still cause a memory leak. All callbacks—most importantly, the delivery of data—are reported on the UI thread.

Loader Framework

The Loader framework is an API consisting of the LoaderManager class in the android.app package and the Loader, AsyncTaskLoader, and CursorLoader classes in the android.content package.
The only concrete loader in the platform is the CursorLoader, whereas customized loaders can be implemented by extending the AsyncTaskLoader.

LoaderManager

The LoaderManager is an abstract class that manages all loaders used by an Activity or a Fragment. A client holds one LoaderManager instance, which is accessible through the Activity or Fragment class:

LoaderManager getLoaderManager();

The LoaderManager API primarily consists of four methods:

Loader<D> initLoader(int id, Bundle args, LoaderCallbacks<D> callback)
Loader<D> restartLoader(int id, Bundle args, LoaderCallbacks<D> callback)
Loader<D> getLoader(int id)
void destroyLoader(int id)

Every loader should have a unique identifier. Typically, an application only has to call initLoader or restartLoader to start the loader. Clients interact with the LoaderManager via the LoaderManager.LoaderCallbacks interface, which must be implemented by the client. Skeleton example of a typical loader setup with callbacks:

public class SkeletonActivity extends Activity
        implements LoaderManager.LoaderCallbacks<D> {

    private static final int LOADER_ID = 0;

    public void onCreate(Bundle savedInstanceState) {
        getLoaderManager().initLoader(LOADER_ID, null, this);
    }

    // LoaderCallback methods
    public Loader<D> onCreateLoader(int id, Bundle args) {
        /* TODO: Create the loader. */
    }

    public void onLoadFinished(Loader<D> loader, D data) {
        /* TODO: Use the delivered data. */
    }

    public void onLoaderReset(Loader<D> loader) {
        /* TODO: The loader data is invalid, stop using it. */
    }
}

SkeletonActivity initializes the loader in onCreate, which tells the framework to invoke the first callback in the code, onCreateLoader(). In that callback, the client should return a loader implementation that will be managed by the platform. Once the loader is created, it initiates data loading. The result is returned in onLoadFinished() on the UI thread so that the client can use the result to update the UI components with the latest data. When a previously created loader is no longer available, onLoaderReset() is invoked, after which the data set handled by the loader is invalidated and shouldn't be used anymore. When a client changes state—through Activity.onStart(), Activity.onStop(), etc.—the LoaderManager is triggered internally so that the application doesn't have to manage any loaders' lifecycles itself. For example, an Activity that starts will initiate a data load and listen for content changes. When the Activity stops, all the loaders are stopped as well, so that no more data loading or delivery is done. The client can explicitly destroy a loader through destroyLoader(id) if it wants to stay active but no longer needs the data set.

initLoader vs restartLoader

The LoaderManager initializes a loader with either initLoader() or restartLoader(), which have the same argument list:

id
A loader identifier, which must be unique for all loaders within the same client.
args
A set of input data to the loader, packaged in a Bundle. This parameter can be null if the client has no input data.
callback
A mandatory implementation of the LoaderManager.LoaderCallbacks interface, which contains the callback methods to be invoked by the framework.

• initLoader() reuses an available loader if the identifier matches. • restartLoader() does not reuse loaders. initLoader should be chosen when the underlying data source is the same throughout a client lifecycle; e.g., an Activity that observes the same Cursor data from a content provider. If, however, the underlying data source can vary during a client lifecycle, restartLoader should be used. A typical variation is a changed database query, in which case previously loaded Cursor instances are obsolete and a new data load that returns a new Cursor should be initiated, as in the sketch below.
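The following sketch shows the restartLoader case for a varying query, here a search filter passed in the args Bundle. It assumes an Activity that implements LoaderManager.LoaderCallbacks<Cursor>; the class name, the "title" column, and the reuse of the provider Uri from the ContentProvider chapter are illustrative assumptions.

import android.app.Activity;
import android.app.LoaderManager;
import android.content.CursorLoader;
import android.content.Loader;
import android.database.Cursor;
import android.net.Uri;
import android.os.Bundle;

public class SearchActivity extends Activity
        implements LoaderManager.LoaderCallbacks<Cursor> {

    private static final int LOADER_ID = 0;
    private static final String ARG_FILTER = "filter";
    private static final Uri CONTENT_URI =
            Uri.parse("content://com.eat.provider/resource");

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Initial load without a filter.
        getLoaderManager().initLoader(LOADER_ID, null, this);
    }

    // Called when the user changes the search filter.
    private void search(String filter) {
        Bundle args = new Bundle();
        args.putString(ARG_FILTER, filter);
        // The old Cursor is obsolete; discard the previous loader and create
        // a new one with the new query arguments.
        getLoaderManager().restartLoader(LOADER_ID, args, this);
    }

    @Override
    public Loader<Cursor> onCreateLoader(int id, Bundle args) {
        String filter = (args != null) ? args.getString(ARG_FILTER) : null;
        String selection = (filter != null) ? "title LIKE ?" : null;
        String[] selectionArgs =
                (filter != null) ? new String[] { "%" + filter + "%" } : null;
        return new CursorLoader(this, CONTENT_URI, null, selection,
                selectionArgs, null);
    }

    @Override
    public void onLoadFinished(Loader<Cursor> loader, Cursor data) {
        // Update the UI with the new Cursor.
    }

    @Override
    public void onLoaderReset(Loader<Cursor> loader) {
        // Stop using the old Cursor.
    }
}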

LoaderCallbacks

LoaderCallbacks is a mandatory interface that sets up and tears down communication between the LoaderManager and the client. The interface consists of three methods:

public Loader<D> onCreateLoader(int id, Bundle args)
public void onLoadFinished(Loader<D> loader, D data)
public void onLoaderReset(Loader<D> loader)

The callbacks are triggered depending on the loader events. The normal sequence of events is:

Loader initialization
Typically, the client initializes the loader when creating it so that it can start the background data loading as soon as possible. Loader initialization is triggered through LoaderManager.initLoader(), passing a unique identifier for the loader to be initialized. If there is no loader available with the requested identifier, the onCreateLoader callback is invoked so that the client can create a new loader and return it to the LoaderManager. If the client requests initialization on an existing loader identifier, there is no need to create a new loader. Instead, the existing loader delivers the last loaded result by invoking the client's onLoadFinished callback.
Data loading
The framework can initiate new data loading when the data source has updated its content or when the client becomes ready for it. The client itself can also force a new data load by calling Loader.forceLoad(). In any case, the result is delivered to the LoaderManager, which passes on the result by calling the client's onLoadFinished callback. Clients can also cancel initiated loads with Loader.cancelLoad().
Loader reset
A loader is destroyed when the client is destroyed or when it calls LoaderManager.destroyLoader(id). The client is notified of the destruction through the onLoaderReset(Loader) callback. At this point, the client may want to free up the data that was previously loaded if it shouldn't be used anymore.

AsyncTaskLoader

The loader asynchronous execution environment is provided by the AsyncTaskLoader, which extends the Loader class. The class contains an AsyncTask to process the background loading, and relies on the AsyncTask.executeOnExecutor() method for background execution. The AsyncTaskLoader tries to keep the number of simultaneous tasks—i.e., active threads—to a minimum. In practice, this means that calling forceLoad() repeatedly before previous loads are finished will postpone the delivery of the result until the last invoked load is done. Loads can also be triggered by content changes, and if the underlying data set triggers many change notifications—e.g., many inserts in a content provider—the UI thread may receive many onLoadFinished invocations, each of which updates and redraws UI components. Hence, the UI thread may lose responsiveness due to the many updates. If you think this will be a problem, you can throttle the data delivery from the AsyncTaskLoader so that consecutive data loads only occur after a certain delay. Set the throttling delay through setUpdateThrottle(long delayMs), as shown in the sketch below.
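For example, the throttle can be set directly on the loader returned from onCreateLoader. This sketch assumes the CONTENT_URI from the earlier provider example and works because CursorLoader extends AsyncTaskLoader:

@Override
public Loader<Cursor> onCreateLoader(int id, Bundle args) {
    CursorLoader loader = new CursorLoader(this, CONTENT_URI,
            null, null, null, null);
    // Deliver at most one result every two seconds, even if the provider
    // reports many rapid content changes.
    loader.setUpdateThrottle(2000);
    return loader;
}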

Painless Data Loading with CursorLoader

The CursorLoader can be used only with Cursor objects delivered from content providers, not with Cursor objects queried directly from an SQLite database. The CursorLoader is an extension of the abstract AsyncTaskLoader class that implements the asynchronous execution. It monitors Cursor objects that are queried from a content provider; in other words, it is a loader with a Cursor data type (a Loader<Cursor>) from the LoaderManager's point of view. The CursorLoader registers a ContentObserver on the Cursor to detect changes in the data set. Constructor:

CursorLoader(Context context, Uri uri, String[] projection, String selection, String[] selectionArgs, String sortOrder)

The Cursor lifecycle is managed by the CursorLoader. Clients should not interfere with this internal lifecycle management by trying to close the Cursor themselves.

Adding CRUD Support

Loaders are intended to read data, but for content providers it is often a requirement to also create, update, and delete data, which the CursorLoader does not support. Still, content observation and automatic background loading bring simplicity to a full CRUD solution. You do, however, need a supplementary mechanism for writing to the provider, such as an AsyncQueryHandler, as the following example shows.

Example: Use CursorLoader with AsyncQueryHandler

In this example, we create a basic manager for the Chrome browser bookmarks stored in the content provider. The example consists of an Activity that shows the list of stored bookmarks and a button that opens a Fragment where new bookmarks can be added. If the user long-clicks on an item, it is directly deleted from the list. Consequently, the bookmark manager invokes three provider operations that should be handled asynchronously: List bookmarks Use CursorLoader to query the provider, so that we can utilize the feature of content observation and automatic data loading. Add or delete a bookmark Use AsyncQueryHandler to insert new bookmarks from the fragment and delete bookmarks when list items are long clicked. public class ChromeBookmarkActivity extends Activity implements LoaderManager.LoaderCallbacks<Cursor> { // Definition of bookmark access information. public interface ChromeBookmark { final static int ID = 1; final static Uri URI= Uri.parse("content://com.android.chrome.browser/bookmarks"); 1 final static String[] PROJECTION = { Browser.BookmarkColumns._ID, Browser.BookmarkColumns.TITLE, Browser.BookmarkColumns.URL }; } // AsyncQueryHandler with convenience methods for insertion and deletion of bookmarks. public static class ChromeBookmarkAsyncHandler extends AsyncQueryHandler { public ChromeBookmarkAsyncHandler(ContentResolver cr) { super(cr); } public void insert(String name, String url) { ContentValues cv = new ContentValues(); cv.put(Browser.BookmarkColumns.BOOKMARK, 1); cv.put(Browser.BookmarkColumns.TITLE, name); cv.put(Browser.BookmarkColumns.URL, url); startInsert(0, null, ChromeBookmark.URI, cv); } public void delete(String name) { String where = Browser.BookmarkColumns.TITLE + "=?"; String[] args = new String[] { name }; startDelete(0, null, ChromeBookmark.URI, where, args); } } ListView mListBookmarks; SimpleCursorAdapter mAdapter; ChromeBookmarkAsyncHandler mChromeBookmarkAsyncHandler; public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_bookmarks); mListBookmarks = (ListView) findViewById(R.id.list_bookmarks); mChromeBookmarkAsyncHandler = new ChromeBookmarkAsyncHandler(getContentResolver()); initAdapter(); getLoaderManager().initLoader(ChromeBookmark.ID, null, this); } private void initAdapter() { mAdapter = new SimpleCursorAdapter(this, android.R.layout.simple_list_item_1, null, new String[] { Browser.BookmarkColumns.TITLE }, new int[] { android.R.id.text1}, 0); mListBookmarks.setAdapter(mAdapter); mListBookmarks.setOnItemLongClickListener(new AdapterView.OnItemLongClickListener() { @Override public boolean onItemLongClick(AdapterView<?> adapterView, View view, int pos, long id) { Cursor c = ((SimpleCursorAdapter) adapterView.getAdapter()).getCursor(); c.moveToPosition(pos); int i = c.getColumnIndex(Browser.BookmarkColumns.TITLE); mChromeBookmarkAsyncHandler.delete(c.getString(i)); 2 return true; } }); } @Override public Loader<Cursor> onCreateLoader(int i, Bundle bundle) { return new CursorLoader(this, ChromeBookmark.URI, ChromeBookmark.PROJECTION, null, null, Browser.BookmarkColumns.TITLE + " ASC"); 3 } @Override public void onLoadFinished(Loader<Cursor> loader, Cursor newCursor) { mAdapter.swapCursor(newCursor); } @Override public void onLoaderReset(Loader loader) { mAdapter.swapCursor(null); } public void onAddBookmark(View v) { FragmentTransaction ft = getFragmentManager().beginTransaction(); Fragment prev = getFragmentManager().findFragmentByTag("dialog"); // Remove previous dialogs if (prev != 
null) { ft.remove(prev); } ft.addToBackStack(null); // Create and show the dialog. DialogFragment newFragment = EditBookmarkDialog.newInstance(mChromeBookmarkAsyncHandler); newFragment.show(ft, "dialog"); } } 1 Provider Uri for the Chrome browser bookmarks. 2 Asynchronous deletion of bookmarks. 3 Use a CursorLoader for asynchronous data retrieval. New bookmarks are added via an EditBookmarkDialog that contains a button and two input fields: one for the bookmark name and one for the bookmark URL. When the button is pressed, the bookmark name and URL are inserted in the provider and the dialog is dismissed: public class EditBookmarkDialog extends DialogFragment { static EditBookmarkDialog newInstance(ChromeBookmarkActivity.ChromeBookmarkAsyncHandler asyncQueryHandler) { EditBookmarkDialog dialog = new EditBookmarkDialog(asyncQueryHandler); return dialog; } ChromeBookmarkActivity.ChromeBookmarkAsyncHandler mAsyncQueryHandler; public EditBookmarkDialog(ChromeBookmarkActivity.ChromeBookmarkAsyncHandler asyncQueryHandler) { mAsyncQueryHandler = asyncQueryHandler; } @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { View v = inflater.inflate(R.layout.dialog_edit_bookmark, container, false); final EditText editName = (EditText) v.findViewById(R.id.edit_name); final EditText editUrl = (EditText) v.findViewById(R.id.edit_url); Button buttonSave = (Button) v.findViewById(R.id.button_save); buttonSave.setOnClickListener(new View.OnClickListener() { public void onClick(View v) { String name = editName.getText().toString(); String url = editUrl.getText().toString(); mAsyncQueryHandler.insert(name, url); 1 dismiss(); } }); return v; } } 1 Insert the bookmark asynchronously in the provider. Once a bookmark has been inserted or deleted via the ChromeBookmarkAsyncHandler, the content changes and the CursorLoader automatically requeries the Cursor. Hence, insertions and deletions are automatically updated in the list.

Implementing Custom Loaders

Loaders are most commonly used with content providers because they are already supported by the platform, but other data sources can be handled with custom loaders. The loader delivers data only in the started state—i.e., after Loader.startLoading() has been invoked, which changes the state and initiates a data load. To avoid leaking outer class objects referenced from inner classes— typically Activity and Fragment—a custom loader has to be declared as a static or external class. If you don’t do this, a RuntimeException is thrown when the loader is returned from onCreateLoader.

Loader Lifecycle

Reset The initial and final state of a loader, where it has released any cached data. Started Starts an asynchronous data load and delivers the result through a callback invocation of LoaderCallback.onLoadFinished. Stopped The loader stops delivering data to the client. It may still load data in the background on content change, but the data is cached in the loader so that the latest data can be retrieved easily without initiating a new data load. Abandoned Intermediate state before reset, where data is stored until a new loader is connected to the data source. This is rarely used; the LoaderManager abandons loaders on restart so that the data is available while the restart is underway.
The lifecycle of a loader is controlled by the LoaderManager, and clients normally shouldn’t modify the state directly with the Loader methods. Instead, the client should issue initLoader and restartLoader to ensure that there is a started loader and leave it up to the LoaderManager to interact with the loader.

Background Loading

Data should be loaded asynchronously on a background thread, and it is up to the loader to choose its execution environment. Normally, the choice is not difficult; the platform provides the AsyncTaskLoader, which custom loaders can extend to get the offloading from the UI thread for free. An implementation of AsyncTaskLoader needs to override only one method: public D loadInBackground() { ... } loadInBackground is called on a background thread; it should execute the loader's long-running task and return the loaded content.

Example: Simple custom loader

The following is a basic loader that extends AsyncTaskLoader to load an integer value from a dummy data source. public class BasicLoader extends AsyncTaskLoader<Integer>{ private static final String TAG = "BasicLoader"; public BasicLoader(Context context) { super(context); } @Override protected boolean onCancelLoad() { Log.d(TAG, "onCancelLoad"); return super.onCancelLoad(); } @Override protected void onStartLoading() { super.onStartLoading(); forceLoad(); 1 } @Override public Integer loadInBackground() { return loadData(); 2 } private int loadData() { SystemClock.sleep(3000); Random rand = new Random(); int data = rand.nextInt(50); Log.d(TAG, "loadData - data = " + data); return data; } } 1 When the client calls startLoading(), the loader changes state to started and invokes the onStartLoading(), where the custom loader should trigger a new load—i.e., calling forceLoad(). 2 Load a long-running task on the background thread and return the result. BasicActivity is a client that uses BasicLoader to load integer values and display them: public class BasicActivity extends Activity implements LoaderManager.LoaderCallbacks<Integer>{ private static final int BASIC_LOADER_ID = 0; TextView tvResult; public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_basic); tvResult = (TextView) findViewById(R.id.text_result); getLoaderManager().initLoader(BASIC_LOADER_ID, null, this); } @Override public Loader<Integer> onCreateLoader(int id, Bundle args) { return new BasicLoader(this); } @Override public void onLoadFinished(Loader<Integer> loader, Integer data) { tvResult.setText(Integer.toString(data)); } @Override public void onLoaderReset(Loader<Integer> loader) { // Empty } public void onLoad(View v) { getLoaderManager().getLoader(BASIC_LOADER_ID).forceLoad(); } public void onCancel(View v) { getLoaderManager().getLoader(BASIC_LOADER_ID).cancelLoad(); } } The BasicLoader executes the long task on the background thread and will be attached to the client lifecycle, but apart from that, it lacks most of the nice features expected from a loader. There is no data cache, for example, so the loader will reload the same data every time the client is recreated instead of returning a cached value.

Content Management

When the underlying data set changes, the loader should automatically initiate new background data loads. Consequently, the underlying data set has to be observable, in the same way as the CursorLoader utilizes a ContentObserver to get notified about updates in the content provider. The observer mechanism depends on the underlying data set, but typical mechanisms are: 1) Observable and Observer 2) Broadcasted intent to a BroadcastReceiver 3) FileObserver When the observer receives an update notification, it is up to the loader to load the new data asynchronously, which should be done either with forceLoad or onContentChanged. forceLoad triggers a background execution independent of the loader’s state, whereas onContentChanged initiates data loading only if the state is started. A custom loader should check whether there is content to load with takeContentChanged() when the loader is started: @Override protected void onStartLoading() { super.onStartLoading(); // Note: There are other interesting things to // implement here as well. if (takeContentChanged()) { forceLoad(); } } Content observation should be active from the time the loader is started until it’s reset, so that it can continue to do background loading even in the stopped state and deliver new data from the cache, as described in the next section.
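A minimal sketch of the second option, observation through a broadcast, in a custom loader: the receiver is registered while the loader is active and reports changes with onContentChanged(). The class name and action string are illustrative, and the actual data load is stubbed out.

import android.content.AsyncTaskLoader;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;

public class BroadcastTriggeredLoader extends AsyncTaskLoader<Integer> {

    private static final String ACTION_DATA_CHANGED = "com.eat.ACTION_DATA_CHANGED";

    private Integer mData;    // Cache of the last delivered result.
    private boolean mObserving;

    private final BroadcastReceiver mReceiver = new BroadcastReceiver() {
        @Override
        public void onReceive(Context context, Intent intent) {
            // Forces a new load if the loader is started; otherwise flags
            // the change so that the next start triggers a load.
            onContentChanged();
        }
    };

    public BroadcastTriggeredLoader(Context context) {
        super(context);
    }

    @Override
    protected void onStartLoading() {
        super.onStartLoading();
        if (!mObserving) {
            getContext().registerReceiver(mReceiver,
                    new IntentFilter(ACTION_DATA_CHANGED));
            mObserving = true;
        }
        if (mData != null) {
            deliverResult(mData); // Return the cache immediately.
        }
        if (takeContentChanged() || mData == null) {
            forceLoad();
        }
    }

    @Override
    public void deliverResult(Integer data) {
        mData = data;
        if (isStarted()) {
            super.deliverResult(data);
        }
    }

    @Override
    protected void onReset() {
        super.onReset();
        if (mObserving) {
            getContext().unregisterReceiver(mReceiver);
            mObserving = false;
        }
        mData = null;
    }

    @Override
    public Integer loadInBackground() {
        // Load the real data from the broadcast-observable source here.
        return 0;
    }
}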

Delivering Cached Results

When you subclass AsyncTaskLoader, it delivers results after new data has been returned from loadInBackground(). But triggering a new background task—i.e., calling loadInBackground—when there is no new data is a waste of resources. Instead, you should implement the loader so that it can speed up result delivery to the clients: if the loader has already delivered a result and no content change has been reported since, there is no point in starting a new background task; it is faster to return the previous result directly. Consequently, the loader should cache loaded data, i.e., keep the result of the last successful load. Delivery control is implemented with this result cache together with an override of Loader.deliverResult(D), which passes data on to the client only if the loader is started, as shown in the following example.

Example: Custom File Loader

The file system should be accessed asynchronously and the loader framework can be used to load new data as soon as any changes occur. In this example, we create a FileLoader that delivers the names of the files in the application directory. It observes the directory for changes, and if a file is added or deleted, an asynchronous load will be initiated that will deliver a list of file names to the client on the UI thread. public class FileLoader extends AsyncTaskLoader<List<String>> { // Cache the list of file names. private List<String> mFileNames; // Data observation private class SdCardObserver extends FileObserver {1 public SdCardObserver(String path) { super(path, FileObserver.CREATE|FileObserver.DELETE); } @Override public void onEvent(int event, String path) { // Report that a content change has occurred. // This call will force a new asynchronous data load if the loader is started // otherwise it will keep a reference that the data has changed for future loads. onContentChanged(); } } private SdCardObserver mSdCardObserver; public FileLoader(Context context) { super(context); String path = context.getFilesDir().getPath(); mSdCardObserver = new SdCardObserver(path); } /** * Decide whether a load should be initiated or not. */ @Override protected void onStartLoading() { super.onStartLoading(); // Start observing the content. mSdCardObserver.startWatching(); 2 if (mFileNames != null) { 3 // Return the cache deliverResult(mFileNames); } // Force a data load if there are no previous data // or if the content has been marked as changed earlier but not delivered. if (takeContentChanged() || mFileNames == null) { 4 forceLoad(); } } @Override public List<String> loadInBackground() { File directory = getContext().getFilesDir(); return Arrays.asList(directory.list()); } @Override public void deliverResult(List<String> data) { if (isReset()) { return; } // Cache the data mFileNames = data; // Only deliver result if the loader is started. if (isStarted()) { super.deliverResult(data); } } @Override protected void onStopLoading() { super.onStopLoading(); cancelLoad(); 5 } @Override protected void onReset() { super.onReset(); mSdCardObserver.stopWatching(); 6 clearResources(); 7 } private void clearResources() { mFileNames = null; } } 1 Define a filesystem observer for the addition and removal of files. The constructor of the FileLoader configures it to observe the application file directory—retrieved with getContext().getFilesDir(). When changes are detected, the onEvent method will be invoked. The file observation is handled by the android specific android.os.FileObserver class. 2 The FileLoader is told to start loading data, typically when the Activity or Fragment is started and is ready to display the data. At this point, the loader is started and the FileLoader is expected to observe the underlying data set— i.e., the filesystem—so startWatching is invoked. 3 If a previously delivered data set is cached in the loader, we deliver that to the client so that we don’t need to do another asynchronous load. 4 Force a data load if there is no previous data or if the content has been marked as changed earlier but not delivered. 5 Try to cancel an ongoing load, because the result will not be delivered anyway. 6 Stop content observation when the loader is reset, because content changes should not be loaded or cached. 7 When the loader is reset, it is not expected to be used any more, so remove the reference to the cache.

Handling Multiple Loaders

Most commonly, a LoaderManager only manages one loader, in which case the callbacks are invoked from a known loader: only one exists. If you create multiple loaders, the callbacks should check the identifier—i.e., invoke Loader.getId()—to verify which loader has generated the callback. For example: ... private static final int LOADER_ID_ONE = 1; private static final int LOADER_ID_TWO = 2; public void onCreate(Bundle savedInstanceState) { getLoaderManager().initLoader(LOADER_ID_ONE, null, this); getLoaderManager().initLoader(LOADER_ID_TWO, null, this); } ... public void onLoadFinished(Loader<D> loader, D data) { switch(loader.getId()) { case LOADER_ID_ONE: /* TODO: Use the delivered data. */ break; case LOADER_ID_TWO: /* TODO: Use the delivered data. */ break; } } ...

Summary

The Loader framework is one of the latest asynchronous techniques to be added to the Android platform. It is a framework for asynchronous execution that shines when it comes to the CursorLoader, as it encapsulates the difficulties of a specific use case—content providers—and solves it efficiently. The framework also offers flexibility by allowing custom loader implementations, but that requires more effort from the application, and other asynchronous techniques may then be a better fit. As a rule of thumb, custom loaders are a good choice when the following conditions are fulfilled: • The underlying content is easily observable. • A data cache is easy to maintain. • Started tasks don't need to run to completion, because the tight coupling of the loader to the Activity and Fragment lifecycles will destroy the attached Loader objects and lose the result. Thus, it isn't advisable, for example, to use loaders for network requests; they are not easily observed, and they will be interrupted based on the client lifecycle. Data loading that should execute until completion should use a Service or IntentService instead.