Scheduler in-depth

The scheduler has a straightforward task: it picks a thread from the queue of Ready threads (the ready queue) to be the currently active Running thread and hands it the CPU.

Scheduler in Zephyr, nRF Connect SDK

Deciding which thread to run is fully deterministic: a set of rules based on the importance, known as priority, of each thread determines the choice. Since the scheduler is an RTOS scheduler, it pays no regard to fairness or to the execution history of the threads. This means that you, as the firmware developer, must decide how the threads will share a given CPU by setting the right priority for each one. This lesson covers how this is done.

Context switch

During a thread’s execution, the CPU registers are utilized, and RAM and ROM are accessed. These combined resources, including the processor registers and the stack, form the thread’s context.

A thread follows a sequential code flow without awareness of when it will be preempted (by the scheduler) or interrupted (by an ISR). Consider a scenario where a thread is preempted just before performing an instruction that subtracts the values stored (0x05 and 0x05) in two CPU registers (R0 and R1). While the thread is preempted, other threads will run and will very likely modify those CPU register values. When the thread is rescheduled, it is unaware of these alterations, which can lead to an incorrect result if it uses the modified register values in the subtraction.

To prevent such errors, the thread must resume with a context that is identical to the context before it was preempted. The RTOS ensures this by saving the context of a thread when it is preempted, and restoring it before it resumes its execution. This process of saving the context when preempting a thread and restoring it when resuming is known as context switching.

Context switching in Zephyr, nRF Connect SDK

Notice that context switching does consume a bit of time as it involves copying data. As a firmware developer, you should minimize the number of context switches in your firmware as much as possible. Also, keep in mind that context switching happens with interrupts as well.

Thread types

A thread represents a logical unit in your firmware. The main two types of threads in the nRF Connect SDK are preemptable threads and cooperative threads. There is also a special class of cooperative threads called Meta-IRQ threads.

Preemptable threads

As covered in the nRF Connect SDK Fundamentals course, preemptable threads are the most commonly used threads for a user application. They are called preemptable because the scheduler can preempt them if a higher-priority thread exists.

Cooperative threads

Cooperative threads are created the same way as preemptable threads, except that the priority passed to K_THREAD_DEFINE() is a negative number. Their main feature is that the scheduler cannot preempt them: a cooperative thread runs until it voluntarily gives up the CPU by sleeping, waiting, calling an API that makes it unready, or yielding.
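For example, a minimal sketch of a cooperative thread definition (the thread name, stack size, and priority value -2 used here are illustrative, not taken from the course):

#include <zephyr/kernel.h>

#define COOP_STACKSIZE 1024

/* Entry point: runs until it voluntarily gives up the CPU. */
void coop_thread_fn(void *p1, void *p2, void *p3)
{
    while (1) {
        /* Work that other threads cannot preempt goes here. */

        /* Sleeping makes the thread unready, so the scheduler is
           free to pick another thread. */
        k_msleep(500);
    }
}

/* The negative priority (-2) makes this a cooperative thread. */
K_THREAD_DEFINE(coop_thread_id, COOP_STACKSIZE, coop_thread_fn,
                NULL, NULL, NULL, -2, 0, 0);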

The main usage of cooperative threads is to enforce scheduler locking. When you implement a task as a cooperative thread, you know for sure that other threads will not be able to preempt your task, so you don’t have to worry about synchronization and locking.

Keep in mind that interrupts can still interrupt a cooperative thread. However, after the interrupt is served, execution is guaranteed to return to the cooperative thread that was interrupted. This guarantee does not exist for preemptable threads.

Cooperative threads are used in some subsystems, network stacks, and device drivers to implement mutual exclusion (scheduler locking). They can also be used in some cases of user applications with performance-critical work.

Meta-IRQ threads

Meta-IRQ threads are a special class of cooperative thread. While this type of thread is NOT intended for user applications, you need to be aware of it. A Meta-IRQ thread is intended for a device driver’s “bottom half” workload, triggered at the end of a hardware ISR.

Interrupts can happen asynchronously at any time and, if they happen while a cooperative thread is running, the execution is guaranteed to return to the cooperative thread that was interrupted. However, what if the ISR has especially urgent work that needs to be done in the thread context right after the interrupt? The solution is to use a Meta-IRQ thread.

If the “bottom half” of a driver (for example, the Bluetooth Low Energy stack) is assigned to a Meta-IRQ thread, the interrupt is guaranteed to trigger that thread right after the ISR completes.

Thread Priority

When threads are created, they are assigned an integer value to indicate their priority. The value can be either negative or non-negative, with lower numerical values taking precedence over higher values. This means that a thread with priority 4 will be given higher priority than a thread with priority 7. Similarly, a thread with priority -2 will have a higher priority than both a thread with priority 4 and a thread with priority 7.

The scheduler distinguishes between preemptible and cooperative threads based on their priority: a thread with a negative priority is classified as a cooperative thread, and a thread with a non-negative priority is classified as a preemptible thread.

The number of non-negative priorities is configurable through the Kconfig symbol CONFIG_NUM_PREEMPT_PRIORITIES and is 15 by default. The main thread has a priority of 0, while the idle thread has a default priority of 15. If the logger module is used in deferred mode, the logger thread will have a priority of 14.

Note

Since it’s dedicated to the idle thread, you should not use priority level 15 for user-defined threads. The idle thread should be the only thread at that priority. The lowest priority recommended for user-defined threads is one less than CONFIG_NUM_PREEMPT_PRIORITIES, so 14 when using the default values.

Similarly, the number of negative priorities is configurable through the Kconfig symbol CONFIG_NUM_COOP_PRIORITIES and is 16 by default. This means that priorities -1 to -16 are available for cooperative threads, with the system workqueue thread implemented as a cooperative thread with a priority of -1.

It is possible to dynamically change the initial priority of a thread after it has started. This also means that it is possible for a preemptible thread to become a cooperative thread if its priority changes from a non-negative to a negative priority, and vice versa.
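A minimal sketch of such a change using the kernel API k_thread_priority_set() (the thread name and the priority values 7 and -1 are illustrative):

#include <zephyr/kernel.h>

void worker_fn(void *p1, void *p2, void *p3)
{
    /* Starts as a preemptible thread (non-negative priority 7). */

    /* Raising the priority to -1 turns this thread into a cooperative
       thread: the scheduler can no longer preempt it. */
    k_thread_priority_set(k_current_get(), -1);

    /* ... work that must not be preempted by other threads ... */

    /* Dropping back to a non-negative priority makes it preemptible again. */
    k_thread_priority_set(k_current_get(), 7);
}

K_THREAD_DEFINE(worker_id, 1024, worker_fn, NULL, NULL, NULL, 7, 0, 0);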

Note

You can change the values of CONFIG_NUM_PREEMPT_PRIORITIES and CONFIG_NUM_COOP_PRIORITIES if you have a strong need for more or fewer levels. However, the default values set by the nRF Connect SDK are 15 and 16, respectively.

Scheduler locking and disabling interrupts

Scheduler locking is a mechanism in an RTOS where the scheduler is temporarily locked or disabled to prevent context switching between threads or processes. Scheduler locking ensures that a specific section of code, a critical region, executes atomically, without disturbance from other threads. This brings us to how to do it in the nRF Connect SDK:

  • For cooperative threads, it is done automatically. A cooperative thread has a scheduler-locking mechanism built into it.
  • For regular, preemptable threads, there are two functions related to scheduler locking: k_sched_lock() to lock the scheduler and k_sched_unlock() to unlock it. The k_sched_lock() function effectively elevates the current thread to a cooperative priority, even when no cooperative priorities are configured. The function is not a widely used mechanism for application code.

Keep in mind that scheduler locking does not prevent interrupts from interrupting your critical region. To protect a critical section of code from being preempted by the scheduler and from being interrupted by an ISR, you can use the irq_lock() and irq_unlock() functions.
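A minimal sketch of both mechanisms (the function name update_shared_state() is illustrative):

#include <zephyr/kernel.h>
#include <zephyr/irq.h>

void update_shared_state(void)
{
    /* Prevent context switches to other threads; ISRs can still run. */
    k_sched_lock();
    /* ... critical region protected from other threads ... */
    k_sched_unlock();

    /* Prevent both context switches and interrupts on this CPU. */
    unsigned int key = irq_lock();
    /* ... critical region protected from threads and ISRs ... */
    irq_unlock(key);
}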

Threads with equal priority

You can have several threads with the same priority level, with the exception of the priority dedicated to the idle thread (CONFIG_NUM_PREEMPT_PRIORITIES, 15 by default). But how does the scheduler decide which thread to pick when there are multiple threads with the same priority level? In addition to the default behavior, there are two other options that can be enabled if they are needed.

Default behavior: The scheduler will run the thread that was the first to have been made Ready in the ready queue.

Time slicing: Each thread of equal priority has a fixed amount of time to run. After that time has elapsed, the scheduler will preempt the current thread and allow other threads of equal priority to run. This exercise in the nRF Connect SDK Fundamentals course covers time slicing in depth. It’s important to remember that time slicing in the nRF Connect SDK only affects threads with equal priority. Time slicing is enabled with the CONFIG_TIMESLICING Kconfig symbol.
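The time slice parameters can also be set at run time with k_sched_time_slice_set(). A minimal sketch, assuming CONFIG_TIMESLICING=y is set in the project configuration (the 10 ms slice and the priority ceiling of 5 are illustrative values):

#include <zephyr/kernel.h>

int main(void)
{
    /* Give every thread with priority 5 or lower importance (numerically
       greater than or equal to 5) a 10 ms time slice. */
    k_sched_time_slice_set(10, 5);

    return 0;
}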

Earliest deadline first (EDF) scheduling: The firmware developer must provide an estimated deadline for each thread by calling k_thread_deadline_set(). When multiple threads exist with the same priority level, the scheduler will pick the thread with the earliest deadline (shortest period). The Kconfig symbol to enable this option is CONFIG_SCHED_DEADLINE. If you enable EDF, keep in mind that the responsibility to set the deadline for each thread is on you, not on the RTOS.
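A minimal sketch of setting a deadline from within a thread, assuming CONFIG_SCHED_DEADLINE=y (the 2 ms deadline and the thread structure are illustrative; remember that EDF only orders threads of the same priority):

#include <zephyr/kernel.h>

void periodic_work(void *p1, void *p2, void *p3)
{
    while (1) {
        /* Declare that the current job should complete within roughly
           2 ms, expressed in hardware cycles relative to now. */
        k_thread_deadline_set(k_current_get(), k_ms_to_cyc_ceil32(2));

        /* ... do the job ... */

        k_msleep(10);
    }
}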

Rescheduling Points

The Zephyr kernel used by the nRF Connect SDK is, by default, a tickless kernel. It removes the periodic timer interrupts, or “ticks”, generated by the system tick hardware, which gives it significant power-saving advantages. In traditional operating system kernels, a periodic timer interrupt, known as the system tick, is generated at a fixed interval (for example, every few milliseconds) regardless of the system’s workload. This tick serves as a reference point for various OS functions, including scheduling.

The tickless kernel approach, which is what is used in the nRF Connect SDK, recognizes that many computer systems spend a significant amount of time either idle or in low-power states, where there is no need for frequent timer interrupts. In such scenarios, the generation of unnecessary ticks can lead to increased power consumption and reduced efficiency.

Instead of relying on a fixed interval to check which thread should run next, the scheduler relies on something called rescheduling points. A rescheduling point is an instant in time when the scheduler gets called to select the thread to run next. Any time the state of the Ready threads changes, a rescheduling point is triggered. Some examples of rescheduling points are listed below, followed by a short sketch:

  • When a thread calls k_yield(), the thread’s state is changed from Running to Ready. Some other thread might run next.
  • If a thread goes to sleep by calling k_sleep(), some other thread will need to run next.
  • Unblocking a thread by giving or sending a kernel synchronization object (such as a semaphore, mutex or alert) causes the thread’s state to change from Unready to Ready.
  • When a receiving thread gets new data from other threads using data passing kernel objects, the receiving thread’s state changes from Waiting to Ready.
  • When time slicing is enabled and the thread has run continuously for the maximum time slice time allowed, the thread’s state is changed from Running to Ready. See lesson 7 – exercise 2 of the nRF Connect SDK Fundamentals course for more information.
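The following sketch brings several of these rescheduling points together using a semaphore shared between two threads (the names, priorities, and timing values are illustrative, not taken from the course):

#include <zephyr/kernel.h>

K_SEM_DEFINE(data_ready, 0, 1);

void producer_fn(void *p1, void *p2, void *p3)
{
    while (1) {
        /* Giving the semaphore unblocks the consumer (Unready -> Ready):
           a rescheduling point. */
        k_sem_give(&data_ready);

        /* Sleeping makes this thread unready: another rescheduling point. */
        k_msleep(100);
    }
}

void consumer_fn(void *p1, void *p2, void *p3)
{
    while (1) {
        /* Waiting on the semaphore makes this thread unready until the
           producer gives it: a rescheduling point. */
        k_sem_take(&data_ready, K_FOREVER);

        /* Voluntarily hand the CPU to other Ready threads of equal or
           higher priority: a rescheduling point. */
        k_yield();
    }
}

K_THREAD_DEFINE(producer_id, 1024, producer_fn, NULL, NULL, NULL, 7, 0, 0);
K_THREAD_DEFINE(consumer_id, 1024, consumer_fn, NULL, NULL, NULL, 7, 0, 0);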