15.7 Specific Solution Design Patterns

This section presents more complex design patterns for synchronization and communication. Multiple synchronization primitives can be found in a single design pattern.

15.7.1 Data Transfer with Flow Control

Task-to-task communication commonly involves data transfer. One task is a producer, and the other is a data consumer. Data processing takes time, and the consumer task might not be able to consume the data as fast as the producer can produce it. The producer can potentially overflow the communication channel if a higher priority task preempts the consumer task. Therefore, the consumer task might need to control the rate at which the producer task generates the data. This process is accomplished through a counting semaphore, as shown in Figure 15.17. In this case, the counting semaphore is a permission to produce data.

Figure 15.17: Using counting semaphores for flow control.

The data buffer in this design pattern is different from an RTOS-supplied message queue. Typically, a message queue has a built-in flow control mechanism. Assume that this message buffer is a custom data transfer mechanism that is not supplied by the RTOS.

As shown in Figure 15.17, task #1 is the data producer, while task #2 is the consumer. Task #1 can introduce data into the buffer as long as the task can successfully acquire the counting semaphore. The counting semaphore may be initialized to a value less than the maximum allowable token value. Task #2 can increase the token value with the give operation and may decrease the token value by the take operation depending on how fast the task can consume data. Listing 15.2 shows the pseudo code for this design pattern.

Listing 15.2: Pseudo code for data transfer with flow control.
data producing task:
       Acquire(Counting_Semaphore)
       Produce data into msgQueue

data consuming task:
       Consume data from msgQueue
       Give(Counting_Semaphore)

15.7.2 Asynchronous Data Reception from Multiple Data Communication Channels

Commonly, a daemon task receives data from multiple input sources, which implies that data arrives on multiple message queues. A task cannot block and wait for data on multiple message queues, nor can it block and wait on multiple semaphores. Therefore, in such cases, the multiple sources use a single semaphore to signal the arrival of data.

The task blocks and waits on this semaphore. Each ISR inserts data into its corresponding message queue, followed by a give operation on the semaphore.

As shown in Figure 15.18, a single interrupt lock is sufficient to protect against multiple interrupt sources, as long as the masked interrupt level covers these sources. Both interrupt service routines use a single semaphore as the signal channel.

Figure 15.18: Task waiting on multiple input sources.

Listing 15.3 shows the code that the task runs when multiple input message queues are present. Note that the semaphore used in this case is a binary semaphore.

Listing 15.3: Pseudo code for task waiting on multiple input sources.
while (Get(Binary_Semaphore))
        disable(interrupts)
        for (each msgQueue)
                get msgQueueLength
                for (msgQueueLength)
                        remove a message
                        enable(interrupts)
                        process the message
                        disable(interrupts)
                endfor
        endfor
        enable(interrupts)
endwhile

Some RTOS kernels do not have the event-register object. Implementing the event register using the common basic primitives found in the majority of the RTOS kernels can be quite useful when porting applications from one RTOS to another.

The event-register object can be implemented using a shared variable, an interrupt lock, and a semaphore. The shared variable stores and retrieves the events. The interrupt lock guards the shared variable because ISRs can generate events through the event register. The semaphore blocks the task wanting to receive desired events.

Event_Receive(wanted_events)
{
       task_cb.wanted_events = wanted_events
       While (TRUE)
               Get(task_cb.event_semaphore)
               disable(interrupts)
               events = task_cb.wanted_events AND task_cb.recvd_events
               task_cb.recvd_events = task_cb.recvd_events AND (NOT events)
               enable(interrupts)
               If (events is not empty)
                      return (events)
               endIf
       EndWhile
}

The variable task_cb refers to the task control block, in which the kernel keeps its private, task-specific information. Note that the unwanted events are not cleared because the task can call event_receive some time later.

Event_Send(events)
{
       disable(interrupts)
       task_cb.recvd_events = task_cb.recvd_events OR events
       enable(interrupts) 
       Give(task_cb.event_semaphore)
}

15.7.3 Multiple Input Communication Channels

A daemon task usually has multiple data input sources and multiple event input sources, as shown in Figure 15.19. Consider a daemon task that processes data from an I/O device and has a periodic timer, which is used for recovery if the device is stuck in an inconsistent state. The system timer ISR signals the periodic timer event; this event does not carry data. In such situations, an event register combined with a counting semaphore is a much better alternative than using counting semaphores alone for signaling (see Figure 15.10).

Figure 15.19: Task with multiple input communication channels.

With an event register, each event bit is pre-allocated to a source. In this design pattern, one event bit is assigned to the I/O task #1 and another bit is assigned to the timer ISR. The task blocks on an event register, and an event from either source activates the task. The I/O task first inserts the data associated with an I/O device into the message queue. Then the I/O task signals this event to the task by setting the event's assigned bit in the event register. The timer ISR sets the event bit; this event is no more than a tick announcement to the task. After the task resumes execution, it performs the appropriate action according to the event-register state.

Because the event register is only used as a signaling mechanism, a counting semaphore is used to keep track of the total number of tick occurrences. Listing 15.4 puts this discussion into perspective. The addition of the counting semaphore does not increase the code complexity.

Listing 15.4: Pseudo code for using a counting semaphore for event accumulation combined with an event-register used for event notification.
while (the_events = wait for events from Event-Register)
        if (the_events & EVENT_TYPE_DEVICE)
                while (Get message from msgQueue)
                        process the message
                endwhile
        endif

        if (the_events & EVENT_TYPE_TIMER)
                counter = 0
                disable(interrupts)
                while (Get(Counting_Semaphore) succeeds without blocking)
                        counter = counter + 1
                endwhile
                enable(interrupts)
                if (counter > 1)
                        perform time recovery (fast-forward the missed ticks)
                else
                        process the timer tick
                endif
        endif
endwhile

15.7.4 Using Condition Variables to Synchronize between Readers and Writers

The design pattern shown in Figure 15.20 demonstrates the use of condition variables. A condition variable can be associated with the state of a shared resource. In this example, multiple tasks are trying to insert messages into a shared message queue. The predicate of the condition variable is 'the message queue is full.' Each writer task tries first to insert the message into the message queue. The task waits (and is blocked) if the message queue is currently full. Otherwise, the message is inserted, and the task continues its execution path.

Figure 15.20: Using condition variables for task synchronization.

Note the message queue shown in Figure 15.20 is called a 'simple message queue.' For the sake of this example, the reader should assume this message queue is a simple buffer with structured content. This simple message queue is not the same type of message queue that is provided by the RTOS.

Dedicated reader (or consumer) tasks periodically remove messages from the message queue. The reader task signals on the condition variable if the message queue was full before the removal, in effect waking up the writer tasks that are blocked waiting on the condition variable. Listing 15.5 shows the pseudo code for reader tasks and Listing 15.6 shows the pseudo code for writer tasks.

Listing 15.5: Pseudo code for reader tasks.
Lock(guarding_mutex)
Remove message from message queue
If (msgQueue was full)
       Signal(Condition_variable)
endIf
Unlock(guarding_mutex)
Listing 15.6: Pseudo code for writer tasks.
Lock(guarding_mutex)
While (msgQueue is full)
       Wait(Condition_variable)
endWhile
Produce message into message queue
Unlock(guarding_mutex)

As Chapter 8 discusses, the call to event_receive is a blocking call. The calling task is blocked if the event register is empty when the call is made. Remember that the event register is a synchronous signal mechanism. The task might not run immediately when events are signaled to it, if a higher priority task is currently executing. Events from different sources are accumulated until the associated task resumes execution. At that point, the call returns with a snapshot of the state of the event register. The task operates on this returned value to determine which events have occurred.

Problematically, however, the event register cannot accumulate event occurrences of the same type before processing begins. The task would have missed all but one timer tick event if multiple timer ticks had occurred before the task resumed execution. Introducing a counting semaphore into the circuit can solve this problem. Soft timers, as Chapter 11 discusses, do not have stringent deadlines, but it is still important to track how many ticks have occurred so that the task can perform recovery actions, such as fast-forwarding time to reduce the drift.

Note that the lock call on the guarding mutex is a blocking call. Either a writer task or a reader task is blocked if it tries to lock the mutex while the mutex is already locked. This feature guarantees serialized access to the shared message queue. The wait operation and the signal operation are both atomic operations with respect to the predicate and the guarding mutex, as Chapter 8 discusses.

In this example, the reader tasks create the condition for the writer tasks to proceed producing messages. The one-way condition creation of this design implies either that there are more writer tasks than reader tasks, or that messages are produced faster than they are consumed.

15.7.5 Sending High Priority Data between Tasks

In many situations, the communication between tasks can carry urgent data. Urgent data must be processed in a timely fashion and must be distinguished from normal data. This process is accomplished by using signals and an urgent data message queue, as shown in Figure 15.21. For the sake of this example, the reader should assume the message queues shown in Figure 15.21 do not support a priority message delivery mechanism.

Figure 15.21: Using signals for urgent data communication.

As Chapter 8 describes, one task uses a signal to notify another of the arrival of urgent data. When the signal arrives, the receiving task diverts from its normal execution and goes directly to the urgent data message queue. The task processes messages from this queue ahead of messages from other queues because the urgent data queue has the highest priority. This task must install an asynchronous signal handler for the urgent data signal in order to receive it. The signal is deployed for urgent data notification because the task would not otherwise know that urgent data has arrived unless it were already waiting on the message queue.

The producer of the urgent data, which can be either a task or an ISR, inserts the urgent messages into the predefined urgent data message queue. The source signals the recipient of the urgent data. The signal interrupts the normal execution path of the recipient task, and the installed signal handler is invoked. Inside this signal handler, urgent messages are read and processed.

In this design pattern, urgent data is maintained in a separate message queue although most RTOS-supplied message queues support priority messages. With a separate message queue for urgent data, the receiver can control how much urgent data it is willing to accept and process, i.e., a flow control mechanism.

15.7.6 Implementing Reader-Writer Locks Using Condition Variables

This section presents another example of the usage of condition variables. The code shown in Listings 15.7, 15.8, and 15.9 is written in the C programming language.

Consider a shared memory region that both readers and writers can access. The example reader-writer lock design has the following properties: multiple readers can simultaneously read the memory content, but only one writer is allowed to write data into the shared memory at any one time. The writer can begin writing to the shared memory when that memory region is not accessed by any task (reader or writer). Readers take precedence over writers in terms of accessing the shared memory region.

The implementation that follows can be adapted to other types of synchronization scenarios when prioritized access to shared resources is desired, as shown in Listings 15.7, 15.8, and 15.9.

The following assumptions are made in the program listings:

  1. The mutex_t data type represents a mutex object and condvar_t represents a condition variable object; both are provided by the RTOS.

  2. lock_mutex, unlock_mutex, wait_cond, signal_cond, and broadcast_cond are functions provided by the RTOS. lock_mutex and unlock_mutex operate on the mutex object. wait_cond, signal_cond, and broadcast_cond operate on the condition variable object.

Listing 15.7 shows the data structure needed to implement the reader-writer lock.

Listing 15.7: Data structure for implementing reader-writer locks.
typedef struct {
       mutex_t      guard_mutex;
       condvar_t    read_condvar;
       condvar_t    write_condvar;
       int          rw_count;      /* -1: a writer is active; >0: number of active readers */
       int          read_waiting;  /* number of readers waiting for the writer to finish */
} rwlock_t;

Listing 15.8 shows the code that the writer task invokes to acquire and to release the lock.

Listing 15.8: Code called by the writer task to acquire and release locks.
acquire_write(rwlock_t *rwlock)
{
        lock_mutex(&rwlock->guard_mutex);
        while (rwlock->rw_count != 0)
               wait_cond(&rwlock->write_condvar, &rwlock->guard_mutex);
        rwlock->rw_count = -1;
        unlock_mutex(&rwlock->guard_mutex);
}

release_write(rwlock_t *rwlock)
{
        lock_mutex(&rwlock->guard_mutex);
        rwlock->rw_count = 0;
        if (rwlock->read_waiting)
                broadcast_cond(&rwlock->read_condvar, &rwlock->guard_mutex);
        else
                signal_cond(&rwlock->write_condvar, &rwlock->guard_mutex);
        unlock_mutex(&rwlock->guard_mutex);
}

Listing 15.9 shows the code that the reader task invokes to acquire and release the lock.

Listing 15.9: Code called by the reader task to acquire and release locks.
acquire_read(rwlock_t *rwlock)
{
       lock_mutex(&rwlock->guard_mutex);
       rwlock->read_waiting++;
       while (rwlock->rw_count < 0)
              wait_cond(&rwlock->read_condvar, &rwlock->guard_mutex);
       rwlock->read_waiting = 0;
       rwlock->rw_count++;
       unlock_mutex(&rwlock->guard_mutex);
}

release_read(rwlock_t *rwlock)
{
       lock_mutex(&rwlock->guard_mutex);
       rwlock->rw_count--;
       if (rwlock->rw_count == 0)
              signal_cond(&rwlock->write_condvar, &rwlock->guard_mutex);
       unlock_mutex(&rwlock->guard_mutex);
}

In case broadcast_cond does not exist, it can be emulated inside release_write with a loop of signal_cond calls:

for (i = rwlock->read_waiting; i > 0; i--)
       signal_cond(&rwlock->read_condvar, &rwlock->guard_mutex);
