In this unit, we will talk about the execution units within nRF Connect SDK, namely threads and interrupt service routines (ISRs), as well as the scheduler and its default behavior.
A thread is the smallest logical unit of execution for the RTOS scheduler (covered later in this topic) that competes for CPU time.
In nRF Connect SDK, there are two main types of threads: cooperative threads, which have a negative priority value, and preemptible threads, which have a non-negative priority value. Cooperative threads have very limited usage and are therefore not within the scope of this course.
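Zephyr encodes these two thread classes directly in the sign of the priority value, and provides helper macros that make the convention explicit. The macro names below are from the Zephyr kernel API; the specific priority values chosen are illustrative:

```c
#include <zephyr/kernel.h>

/* Preemptible thread priority: a non-negative value. */
#define APP_THREAD_PRIORITY     K_PRIO_PREEMPT(5)

/* Cooperative thread priority: a negative value
 * (cooperative threads are not used in this course). */
#define DRIVER_THREAD_PRIORITY  K_PRIO_COOP(5)
```

A lower numerical value means a higher priority, so any cooperative thread always outranks every preemptible thread.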
A thread can be in one of the following states at any given time.
Running: The running thread is the one that is currently being executed by the CPU. This means that the scheduler of the RTOS has already selected this thread as the one that should get the CPU time and loaded this thread’s context into the CPU registers.
Runnable: A thread is marked as “Runnable” when it has no dependencies on other threads or other system resources that would prevent it from proceeding with execution; the only resource it is waiting for is CPU time. The scheduler includes these threads in the scheduling algorithm that selects the next thread to run after the current running thread changes its state. This is also known as the “Ready” state.
Non-runnable: A thread that has one or more factors preventing its execution is deemed unready and cannot be selected as the current thread. This can be, for example, because it is waiting for a resource that is not yet available, or because it has been terminated or suspended. The scheduler does not include these threads in the scheduling algorithm that selects the next thread to run. This is also known as the “Unready” state.
A system thread is a type of thread that is spawned automatically by Zephyr RTOS during initialization. There are always two threads spawned by default, the main thread and the idle thread.
The main thread executes the necessary RTOS initializations and calls the application’s main() function, if it exists. If no user-defined main() is supplied, the main thread exits normally, though the system remains fully functional.
The idle thread runs when there is no other work to do. It either runs an empty loop or, if supported, activates power management to save power (as is the case for Nordic devices).
In addition to system threads, a user can define their own threads to assign tasks to. For example, a user can create a thread to delegate reading sensor data, another thread to process data, and so on. Threads are assigned a priority, which instructs the scheduler how to allocate CPU time to the thread. We will cover creating user-defined threads in-depth in Exercise 1.
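As a preview of what Exercise 1 covers, a user-defined thread can be declared statically with the Zephyr K_THREAD_DEFINE() macro. The thread name, stack size, priority value, and sensor-reading task below are all illustrative:

```c
#include <zephyr/kernel.h>

#define SENSOR_STACK_SIZE 1024
#define SENSOR_PRIORITY   7   /* non-negative value: a preemptible thread */

/* Illustrative entry point for a thread that reads sensor data */
void sensor_thread_fn(void *p1, void *p2, void *p3)
{
    while (1) {
        /* ... read and handle sensor data here ... */
        k_msleep(1000); /* sleep: the thread is non-runnable until the timeout */
    }
}

/* Statically define the thread; it starts automatically at boot */
K_THREAD_DEFINE(sensor_thread, SENSOR_STACK_SIZE,
                sensor_thread_fn, NULL, NULL, NULL,
                SENSOR_PRIORITY, 0, 0);
```

Defining the thread statically lets the kernel allocate its stack at compile time; threads can also be created at runtime with k_thread_create().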
Another common execution unit in nRF Connect SDK is a work item, which is nothing more than a user-defined function that is called by a dedicated thread known as a workqueue thread.
A workqueue thread is a dedicated thread to process work items that are pulled out of a kernel object called a workqueue in a “first in first out” fashion. Each work item has a specified handler function which is called to process the work item. The main use of this is to offload non-urgent work from an ISR or a high-priority thread to a lower priority thread.
A system can have multiple workqueue threads, the default one is known as the system workqueue, available to any application or kernel code. The thread processing the work items in the system workqueue is a system thread, and you do not need to create and initialize a workqueue if submitting work items to the system workqueue.
As you can see in the image above, the ISR or high priority thread submits work into a workqueue, and the dedicated workqueue thread pulls out a work item in a first in, first out (FIFO) order. The thread that pulls work items from the queue always yields after it has processed one work item, so that other equal priority threads are not blocked for a long time.
The advantage of delegating work as a work item instead of in a dedicated thread is that all work items share a single stack, the workqueue thread’s stack. This makes a work item lighter-weight than a thread, since no separate stack needs to be allocated for it.
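A minimal sketch of defining a work item and submitting it to the system workqueue; the handler name and the triggering function are illustrative:

```c
#include <zephyr/kernel.h>

/* The handler runs in the system workqueue thread,
 * not in the context of whoever submitted the item. */
void my_work_handler(struct k_work *work)
{
    /* ... non-urgent, possibly time-consuming processing here ... */
}

/* Statically define the work item and bind it to its handler */
K_WORK_DEFINE(my_work, my_work_handler);

/* Called from an ISR or a high-priority thread */
void on_event(void)
{
    k_work_submit(&my_work); /* queue the item and return immediately */
}
```

Because k_work_submit() only places the item on the queue, the caller returns almost instantly, which is exactly what makes this pattern suitable inside ISRs.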
We will cover work items and workqueue threads in-depth in Exercise 3.
Like anything in the physical world, CPU time is a limited resource, and an application can have multiple logical flows competing to run concurrently, so it is not obvious that there is enough CPU time for them all. This is where the scheduler comes in. The scheduler is the part of the RTOS responsible for deciding which task is running, i.e. using CPU time, at any given time. It does this using a scheduling algorithm to determine which task should be the next to run.
The number of running threads possible is equal to the number of application cores. For example on the nRF52840, there is one application core, allowing for one running thread at a time.
As we know, the RTOS used by nRF Connect SDK is Zephyr. Zephyr RTOS is by default a tickless RTOS. A tickless RTOS is completely event-driven, which means that instead of having periodic timer interrupts to wake up the scheduler, it is woken based on events known as rescheduling points.
A rescheduling point is an instant in time when the scheduler interrupts the current thread and changes the state of one or more threads based on some conditions. Some examples of rescheduling points are:
When a thread calls k_yield(), the thread’s state is changed from “Running” to “Ready”.
Unblocking a thread by giving/sending a kernel synchronization object, such as a semaphore or mutex, causes the waiting thread’s state to be changed from “Waiting” to “Ready”.
When a receiving thread gets new data from other threads using data passing kernel objects, the data receiving thread’s state is changed from “Waiting” to “Ready”.
When time slicing is enabled (covered in Exercise 2) and the thread has run continuously for the maximum time slice time allowed, the thread’s state is changed from “Running” to “Ready”.
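To illustrate the semaphore case in the list above, here is a sketch of a consumer thread that blocks on a semaphore and a producer ISR that unblocks it; all names are illustrative:

```c
#include <zephyr/kernel.h>

K_SEM_DEFINE(data_ready_sem, 0, 1); /* initial count 0, maximum count 1 */

void consumer_thread_fn(void *p1, void *p2, void *p3)
{
    while (1) {
        /* Blocks: the thread is non-runnable until the semaphore is given */
        k_sem_take(&data_ready_sem, K_FOREVER);
        /* ... process the newly available data ... */
    }
}

void producer_isr(const void *arg)
{
    /* Rescheduling point: the waiting consumer thread moves to "Ready",
     * and the scheduler may select it once the ISR returns. */
    k_sem_give(&data_ready_sem);
}
```

The call to k_sem_give() is the rescheduling point: the scheduler wakes up, changes the consumer’s state, and re-evaluates which thread should run next.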
Interrupt service routines (ISRs) run asynchronously in response to events from device drivers and protocol stacks. They are not scheduled. This includes callback functions, which are the application extension of ISRs. It is important to remember that ISRs preempt the execution of the current thread, allowing the response to occur with very low overhead. Thread execution resumes only once all ISR work has been completed. Therefore, it is important to make sure that ISRs, including callback functions, do not contain time-consuming work or involve blocking functionality, as they will starve all other threads. Work that is time-consuming or involves blocking should be handed off to a thread using work items or other proper mechanisms, as we will see in Exercise 3.
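The hand-off pattern described above can be sketched with a GPIO button callback, which is the application extension of the GPIO interrupt’s ISR. The handler and work-item names are illustrative:

```c
#include <zephyr/kernel.h>
#include <zephyr/drivers/gpio.h>

/* Time-consuming work runs here, in the system workqueue thread */
void button_work_handler(struct k_work *work)
{
    /* ... heavy processing, logging, blocking calls, etc. ... */
}

K_WORK_DEFINE(button_work, button_work_handler);

static struct gpio_callback button_cb_data;

/* Runs in interrupt context: keep it as short as possible */
void button_pressed(const struct device *dev, struct gpio_callback *cb,
                    uint32_t pins)
{
    k_work_submit(&button_work); /* defer the heavy work to a thread */
}
```

The callback does nothing but submit the work item, so the interrupt is serviced in a few microseconds and other threads are not starved.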