Zephyr RTOS basics

In this unit, we will cover the execution units within nRF Connect SDK, namely threads and ISRs, as well as the scheduler and its default behavior.

Threads

A thread is the smallest logical unit of execution that competes for CPU time and is managed by the RTOS scheduler (covered later in this topic).

In nRF Connect SDK, there are two main types of threads: cooperative threads (negative priority value) and preemptible threads (non-negative priority value). Cooperative threads have very limited usage and are therefore not within the scope of this course.

A thread can be in one of the following states at any given time.

Running: The running thread is the one that is currently being executed by the CPU. This means that the scheduler of the RTOS has already selected this thread as the one that should get the CPU time and loaded this thread’s context into the CPU registers.

Runnable: A thread is marked as “Runnable” when it has no dependencies on other threads or system resources preventing it from proceeding with execution. The only resource it is waiting for is CPU time. The scheduler includes these threads in the scheduling algorithm that selects the next thread to run once the current running thread changes its state. This is also known as the “Ready” state.

Non-runnable: A thread that has one or more factors preventing its execution is deemed unready and cannot be selected as the current thread. This can be, for example, because it is waiting for a resource that is not yet available, or because it has been terminated or suspended. The scheduler does not include these threads in the scheduling algorithm when selecting the next thread to run. This is also known as the “Unready” state.

System threads

A system thread is a type of thread that is spawned automatically by Zephyr RTOS during initialization. There are always two threads spawned by default: the main thread and the idle thread.

The main thread executes the necessary RTOS initializations and calls the application’s main() function, if it exists. If no user-defined main() is supplied, the main thread exits normally, and the system remains fully functional.

The idle thread runs when there is no other work to do. It either runs an empty loop or, if supported, activates power management to save power (this is the case for Nordic devices).
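
To illustrate, here is a minimal sketch of how the main thread relates to a user-defined main(). It assumes a recent nRF Connect SDK version where main() returns int (older versions declare void main(void)):

```c
#include <zephyr/kernel.h>
#include <zephyr/sys/printk.h>

int main(void)
{
        /* This code runs in the main system thread after the RTOS
         * initializations are done. */
        printk("Application initialization\n");

        /* Returning from main() ends the main thread normally; the rest
         * of the system keeps running, and the idle thread executes
         * whenever no other thread is runnable. */
        return 0;
}
```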

User-created threads

In addition to system threads, a user can define their own threads to assign tasks to. For example, a user can create a thread to delegate reading sensor data, another thread to process data, and so on. Threads are assigned a priority, which instructs the scheduler how to allocate CPU time to the thread. We will cover creating user-defined threads in-depth in Exercise 1.
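
As a preview of Exercise 1, the sketch below shows one way to statically define and start a user thread with Zephyr's K_THREAD_DEFINE() macro. The names, stack size, and priority are illustrative choices, not values mandated by the SDK:

```c
#include <zephyr/kernel.h>
#include <zephyr/sys/printk.h>

#define THREAD0_STACKSIZE 512
#define THREAD0_PRIORITY  7

/* Entry point of the user thread: here it could, for example, read
 * sensor data periodically. */
void thread0(void *arg1, void *arg2, void *arg3)
{
        while (1) {
                printk("Hello from thread0\n");
                k_msleep(1000);   /* sleep so other threads can run */
        }
}

/* Statically define the thread and start it automatically at boot. */
K_THREAD_DEFINE(thread0_id, THREAD0_STACKSIZE, thread0,
                NULL, NULL, NULL, THREAD0_PRIORITY, 0, 0);
```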

Workqueue threads

Another common execution unit in nRF Connect SDK is a work item, which is nothing more than a user-defined function that gets called by a dedicated thread called a workqueue thread.

A workqueue thread is a dedicated thread to process work items that are pulled out of a kernel object called a workqueue in a “first in first out” fashion. Each work item has a specified handler function which is called to process the work item. The main use of this is to offload non-urgent work from an ISR or a high-priority thread to a lower priority thread.

A system can have multiple workqueue threads. The default one is known as the system workqueue and is available to any application or kernel code. The thread processing the work items in the system workqueue is a system thread, so you do not need to create and initialize a workqueue when submitting work items to the system workqueue.
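
As a rough sketch (previewing Exercise 3), a work item can be defined and submitted to the system workqueue as shown below; the handler and function names are illustrative:

```c
#include <zephyr/kernel.h>
#include <zephyr/sys/printk.h>

/* Handler function: runs later, in the system workqueue thread, not in
 * the context that submitted the work item. */
void my_work_handler(struct k_work *work)
{
        printk("Processing offloaded work\n");
}

/* Statically define the work item and bind it to its handler. */
K_WORK_DEFINE(my_work, my_work_handler);

/* Called from an ISR or a high-priority thread: queue the work item on
 * the system workqueue and return immediately. */
void offload_work(void)
{
        k_work_submit(&my_work);
}
```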

Workflow of a workqueue

As you can see in the image above, the ISR or high-priority thread submits work items to a workqueue, and the dedicated workqueue thread pulls them out in first in, first out (FIFO) order. The workqueue thread always yields after it has processed one work item, so that other threads of equal priority are not blocked for a long time.

The advantage of delegating work as a work item instead of creating a dedicated thread is that all work items share one stack, the workqueue thread’s stack. A work item is therefore lighter than a thread, because no separate stack needs to be allocated for it.

We will cover work items and workqueue threads in-depth in Exercise 3.

Thread priorities

Threads are assigned an integer value to indicate their priority, which can be either negative or non-negative. Lower numerical values take precedence over higher values, meaning a thread with priority 4 will be given higher priority than a thread with priority 7. Similarly, a thread with priority -2 will have higher priority than both a thread with priority 4 and a thread with priority 7.

The scheduler distinguishes between two types of threads based on their priority: cooperative and preemptible. A thread with a negative priority is classified as a cooperative thread. Once a cooperative thread becomes the current thread, it will remain so until it performs an action that makes it unready.

On the other hand, a thread with a non-negative priority is classified as a preemptible thread. Once a preemptible thread becomes the current thread, it may be replaced at any time if a cooperative thread or a preemptible thread of higher or equal priority becomes ready.

The number of non-negative priorities, which are used by preemptible threads, is configurable through the Kconfig symbol CONFIG_NUM_PREEMPT_PRIORITIES and is, by default, equal to 15. The main thread has a priority of 0, while the idle thread has a priority of 15 by default.

Similarly, the number of negative priorities, which are used by cooperative threads, is configurable through the Kconfig symbol CONFIG_NUM_COOP_PRIORITIES and is, by default, equal to 16. We are not covering cooperative threads in the fundamentals course due to their limited usage.
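
As a brief illustration, Zephyr also provides the helper macros K_PRIO_PREEMPT() and K_PRIO_COOP() for expressing priorities. The sketch below defines two preemptible threads with illustrative names and priorities; the thread with the lower numerical value takes precedence whenever both are ready:

```c
#include <zephyr/kernel.h>

/* K_PRIO_PREEMPT(n) expands to the non-negative value n, marking a
 * preemptible priority; K_PRIO_COOP(n) produces a negative value for a
 * cooperative priority. */
void sensor_thread(void *a, void *b, void *c)  { /* higher-priority work */ }
void logging_thread(void *a, void *b, void *c) { /* lower-priority work */ }

/* Priority 4 takes precedence over priority 7 whenever both threads
 * are runnable. */
K_THREAD_DEFINE(sensor_id, 512, sensor_thread, NULL, NULL, NULL,
                K_PRIO_PREEMPT(4), 0, 0);
K_THREAD_DEFINE(logging_id, 512, logging_thread, NULL, NULL, NULL,
                K_PRIO_PREEMPT(7), 0, 0);
```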

Scheduler

CPU time is a limited resource, and when an application has multiple concurrent flows of execution, there is no guarantee that there will be enough CPU time for all of them to run simultaneously. This is where the scheduler comes in. The scheduler is the part of the RTOS responsible for deciding which thread is running, i.e. using CPU time, at any given time. It does this using a scheduling algorithm to determine which thread should be the next to run.

Note

The maximum number of threads in the “Running” state is equal to the number of CPU cores. For example, the nRF52840 has one application core, allowing for one running thread at a time.

Rescheduling point

The RTOS used by nRF Connect SDK is Zephyr, which is by default a tickless RTOS. A tickless RTOS is completely event-driven, which means that instead of relying on periodic timer interrupts to wake up the scheduler, it is woken only by events known as rescheduling points.

A rescheduling point is an instant in time when the scheduler gets called to select the thread to run next. Any time the state of the Ready threads changes, a rescheduling point is triggered. Some examples of rescheduling points are:

  • When a thread calls k_yield(), the thread’s state is changed from “Running” to “Ready”.
  • Unblocking a thread by giving or sending a kernel synchronization object, such as a semaphore or a mutex, changes that thread’s state from “Unready” to “Ready”.
  • When a receiving thread gets new data from other threads using data passing kernel objects, the data receiving thread’s state is changed from “Waiting” to “Ready”.
  • When time slicing is enabled (covered in Exercise 2) and the thread has run continuously for the maximum time slice time allowed, the thread’s state is changed from “Running” to “Ready”.
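
For instance, the following sketch (with illustrative names) shows the second case above: a thread blocked on a semaphore becomes ready again the moment another context gives the semaphore, which is a rescheduling point:

```c
#include <zephyr/kernel.h>
#include <zephyr/sys/printk.h>

K_SEM_DEFINE(my_sem, 0, 1);   /* initial count 0, maximum count 1 */

void consumer_thread(void *a, void *b, void *c)
{
        while (1) {
                /* The thread blocks here (becomes unready) until the
                 * semaphore is given. */
                k_sem_take(&my_sem, K_FOREVER);
                printk("Semaphore received, back in the ready state\n");
        }
}

K_THREAD_DEFINE(consumer_id, 512, consumer_thread, NULL, NULL, NULL, 7, 0, 0);

/* Giving the semaphore from another thread or an ISR unblocks
 * consumer_thread and triggers a rescheduling point. */
void producer(void)
{
        k_sem_give(&my_sem);
}
```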

ISRs

Interrupt Service Routines (ISRs) are generated asynchronously by device drivers and protocol stacks, and they are not scheduled. This includes callback functions, which are the application’s extension of ISRs. It is important to remember that ISRs preempt the execution of the current thread, allowing the response to occur with very low overhead. Thread execution resumes only once all ISR work has been completed. Therefore, it is important to make sure that ISRs, including callback functions, do not contain time-consuming work or blocking calls, as that would starve all other threads. Work that is time-consuming or involves blocking should be handed off to a thread using work items or other appropriate mechanisms, as we will see in Exercise 3.
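
As an illustrative sketch of this pattern, a GPIO button callback (which runs in interrupt context) can stay short and simply submit a work item. The names are hypothetical, and the callback registration (gpio_init_callback()/gpio_add_callback()) is omitted:

```c
#include <zephyr/kernel.h>
#include <zephyr/sys/printk.h>
#include <zephyr/drivers/gpio.h>

/* Runs later in the system workqueue thread, where time-consuming or
 * blocking work is acceptable. */
void button_work_handler(struct k_work *work)
{
        printk("Handling the button press outside interrupt context\n");
}

K_WORK_DEFINE(button_work, button_work_handler);

/* GPIO callback: runs in interrupt context, so it only submits the
 * work item and returns immediately. */
void button_pressed(const struct device *dev, struct gpio_callback *cb,
                    uint32_t pins)
{
        k_work_submit(&button_work);
}
```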
