
Kernel & Protocol

Process Operating Protocol

Kernel

The Kernel is the core component of a computer Operating System: it connects application software to the hardware, acting as a bridge between applications and the actual data processing done at the hardware level. The Kernel provides core system functionality and services to application software running on a processor (or set of processors), presenting an "abstraction layer" that hides from application software the low-level details of the hardware resources (especially processors and I/O devices) that the application must control to perform its function.

Task Management

To achieve concurrency in a real-time application programme, the application is decomposed into small, schedulable, sequential programme units known as "Tasks". In a real-time context, a Task is the basic unit of execution and is governed by three time-critical properties: release time, deadline, and execution time. The release time is the point in time from which the task may be executed; the deadline is the point in time by which the task must complete; and the execution time is the time the task takes to execute. A Task is an entity maintained and run by the Operating System as an independent thread of execution.
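As a minimal sketch (the structure and field names below are illustrative, not taken from any particular RTOS), the three time-critical properties can be captured in C alongside the task's entry point:

    #include <stdint.h>

    /* Illustrative task descriptor carrying the three time-critical
       properties described above; times are expressed in clock ticks. */
    typedef struct {
        void     (*entry)(void *);  /* the independent thread of execution */
        void      *arg;
        uint32_t   release_time;    /* earliest point the task may execute */
        uint32_t   deadline;        /* point by which it must complete     */
        uint32_t   execution_time;  /* time the task takes to execute      */
    } task_t;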

Task Management, the most basic category of Kernel services, allows application software developers to design their software as a number of separate "chunks" called Tasks, each handling a distinct topic, a distinct goal, and perhaps its own real-time deadline. It provides the ability to create task objects, launch and schedule tasks, and assign priorities to them while the embedded system is in operation. This service encompasses mechanisms such as the Task Scheduler and the Dispatcher, which control the execution of application software tasks and can make them run in a very timely and responsive fashion.
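As a concrete illustration, assuming the FreeRTOS API (other Kernels expose equivalent calls, and the task name, stack size and priority below are placeholders), launching a task and assigning it a priority looks roughly like this:

    #include "FreeRTOS.h"
    #include "task.h"

    /* One "chunk" of the application: a task handling a distinct topic. */
    static void vMotorTask(void *pvParameters)
    {
        for (;;) {
            /* motor-control work, with its own real-time deadline */
        }
    }

    int main(void)
    {
        /* Launch the task and assign its priority (in FreeRTOS a
           higher number means a higher priority). */
        xTaskCreate(vMotorTask,               /* task entry function */
                    "Motor",                  /* human-readable name */
                    configMINIMAL_STACK_SIZE, /* stack depth         */
                    NULL,                     /* no parameters       */
                    3,                        /* priority            */
                    NULL);                    /* handle not needed   */

        vTaskStartScheduler();  /* hand control to the Kernel's Scheduler */
        for (;;);               /* never reached */
    }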

Scheduler

Process scheduling involves three key concepts: the declaration of distinct process states, the specification of the state transition diagram, and the statement of a Scheduling Policy. When multiple tasks are running in a system, the Kernel records the state of each task in a data structure called the Task Control Block; the Kernel component that monitors, controls, and schedules the execution of these tasks is referred to as the Scheduler. The Scheduler is responsible for executing threads in accordance with a scheduling mechanism, and the priority-controlled (i.e. preemptive) scheduling algorithm is the most popular and prevalent thread-scheduling algorithm for embedded RTOSes. The Scheduler keeps a record of the state of each task, selects from among those that are ready to execute, and allocates the CPU to one of them. A Scheduler helps to maximize CPU utilization among different tasks in a multi-tasking programme and to minimize waiting time. Each task in a software application is assigned a priority, with higher priority values representing the need for quicker responsiveness.

If preemption is not allowed, a client is serviced for as long as it needs, which often results in intolerably long waiting times or in the blocking of important service requests from other clients. Therefore, preemptive scheduling is often used. The preemption rule may specify timesharing, which restricts continuous service for each client to the duration of a time slice, or it can be priority-based, interrupting the service of one client whenever a higher-priority client requests service.

A Scheduling Policy specifies rules for managing multiple competing processes; scheduling policies are overviewed here in connection with concurrent execution. (On finishing its execution, a process terminates, releasing all of its allocated resources.) The Scheduling Policy covers two aspects. The first responds to the question of whether servicing of a client can be interrupted, and on what occasions (the preemption rule). The second states how one of the competing clients is selected for service (the selection rule).
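The two components can be written down as a pair of hooks; the C sketch below is purely illustrative and not any real Kernel's interface:

    /* Hypothetical sketch: a Scheduling Policy as the pair
       (preemption rule, selection rule). */
    typedef struct task task_t;

    typedef struct {
        /* Preemption rule: may 'candidate' interrupt 'running', and on
           what occasion (time slice expiry, higher priority, ...)? */
        int      (*may_preempt)(const task_t *running,
                                const task_t *candidate);
        /* Selection rule: single out one of the competing clients. */
        task_t * (*select)(task_t *ready_queue);
    } sched_policy_t;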

Assigning priorities

It is very important to assign correct priorities to your different threads. A couple of rules are helpful:

1. Use as few priority levels as possible, and only assign different priorities where preemption is absolutely necessary. This reduces the number of context switches done in the system, and the fewer context switches done, the more time can be spent executing application code.

2. Make sure that all critical timing constraints are met in your application. An RTOS can also provide a technology called preemption-threshold, which can be used to reduce context switches as well as help guarantee the execution of application threads (see the sketch after this list).
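Preemption-threshold is a ThreadX feature; as a hedged sketch (the thread name, stack size and priorities below are placeholders), a thread created at priority 4 with a preemption-threshold of 2 can only be preempted by threads at priorities 0 and 1, which suppresses context switches from priorities 2 and 3 without changing the thread's own rank:

    #include "tx_api.h"

    #define MOTOR_STACK_SIZE 1024

    static TX_THREAD motor_thread;
    static UCHAR     motor_stack[MOTOR_STACK_SIZE];

    static void motor_entry(ULONG input)
    {
        for (;;) {
            /* time-critical work */
        }
    }

    void tx_application_define(void *first_unused_memory)
    {
        /* Priority 4, preemption-threshold 2: while this thread runs,
           only threads at priority 0 or 1 may preempt it. */
        tx_thread_create(&motor_thread, "motor", motor_entry, 0,
                         motor_stack, MOTOR_STACK_SIZE,
                         4,                    /* priority             */
                         2,                    /* preemption-threshold */
                         TX_NO_TIME_SLICE, TX_AUTO_START);
    }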


The selection rule is typically based on some chosen parameters, such as priority, time of arrival, etc. The rule specifies an algorithm that determines a numeric value, which we can call the rank, from the given parameters. During selection the ranks of all competing clients are computed, and the client with the highest rank is scheduled for service. If more than one client has the same rank, arbitration is needed to single one out (for example on a FIFO basis).
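A minimal C sketch of such a selection rule (the rank function and the queue layout are hypothetical) scans the competing clients, computes each rank, and breaks ties on a FIFO basis:

    #include <stddef.h>
    #include <stdint.h>

    typedef struct client {
        uint32_t       priority;      /* a chosen parameter feeding the rank */
        uint32_t       arrival_time;  /* used for FIFO arbitration           */
        struct client *next;
    } client_t;

    /* Hypothetical rank: here simply the priority, but it could be any
       numeric value computed from the chosen parameters. */
    static uint32_t rank(const client_t *c) { return c->priority; }

    /* Selection rule: the highest rank wins; among equal ranks the
       earliest arrival (FIFO) is singled out. */
    client_t *select_client(client_t *head)
    {
        client_t *best = head;
        for (client_t *c = head ? head->next : NULL; c; c = c->next) {
            if (rank(c) > rank(best) ||
                (rank(c) == rank(best) &&
                 c->arrival_time < best->arrival_time))
                best = c;
        }
        return best;
    }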

As far as process states are concerned, there are three basic states connected with scheduling: the ready-to-run state, the running state, and the wait (or blocked) state. In the ready-to-run state, processes are able to run as soon as a processor is allocated to them; in this state they are waiting for a processor on which to execute. In the running state, they are executing on the allocated processor. In the wait state, they are suspended or blocked, waiting for the occurrence of some event before becoming ready to run again.

When the Scheduler selects a process for execution, its state changes from ready-to-run to running. The process remains in this state until one of three events occurs. First, in compliance with the scheduling policy, the Scheduler may decide to cease the execution of the process (for instance because its allocated time slice is over) and put it back into the ready-to-run queue, changing its state accordingly. Second, the process in execution may issue an instruction that causes it to wait until an event takes place; in this case the process state changes to waiting and its PCB is placed in the queue of waiting (or blocked) processes. Third, if the process arrives at the end of its execution, it terminates. Finally, a process in the waiting state moves back to the ready-to-run state once the event it is waiting for has occurred. Real Operating Systems have a number of additional states, on the order of ten, introduced to cover features such as swapping and different execution modes.
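The three basic states and the transitions just described can be written down directly; the names in this C sketch are illustrative:

    /* Illustrative three-state model; real Operating Systems add
       further states for swapping, execution modes, and so on. */
    typedef enum { READY_TO_RUN, RUNNING, WAITING } proc_state_t;

    typedef struct {
        proc_state_t state;
        /* ... the rest of the PCB ... */
    } pcb_t;

    void on_dispatch(pcb_t *p)       { p->state = RUNNING; }      /* scheduler selects it     */
    void on_slice_expired(pcb_t *p)  { p->state = READY_TO_RUN; } /* back to the ready queue  */
    void on_wait_for_event(pcb_t *p) { p->state = WAITING; }      /* blocks awaiting an event */
    void on_event_occurred(pcb_t *p) { p->state = READY_TO_RUN; } /* event has taken place    */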

Priority-based preemptive scheduling requires that control of the processor be given to the highest-priority ready task at all times. When an event makes a higher-priority task ready to run, the current task is immediately suspended and control of the processor is given to the higher-priority task. Very quick responsiveness is made possible by the "preemptive" nature of the task scheduling: "preemptive" means that the Scheduler is allowed to stop any task at any point in its execution if it determines that another task needs to run immediately. Different threads have differing response requirements; for example, in an application that controls a motor, a keyboard and a display, the motor usually requires a faster reaction time than the keyboard and display.

The basic rule that governs priority-based preemptive scheduling is that at every moment in time, "the highest-priority task that is ready to run will be the task that must be running." In other words, if both a low-priority task and a higher-priority task are ready to run, the Scheduler will allow the higher-priority task to run first; the low-priority task only gets to run after the higher-priority task has finished its current work.

What if a low-priority task has already begun to run, and then a higher-priority task becomes ready, due to an external-world trigger such as a switch closing? A priority-based preemptive scheduler behaves as follows: it allows the low-priority task to complete the current assembly-language instruction it is executing (but not to complete an entire line of high-level-language code, nor to continue running until the next clock tick). It then immediately stops the execution of the low-priority task and allows the higher-priority task to run. After the higher-priority task has finished its current work, the low-priority task is allowed to continue running.

Of course, while that higher-priority task (call it the mid-priority task) is running, an even higher-priority task might become ready. In that case the running mid-priority task would be preempted to allow the high-priority task to run. When the high-priority task has finished its current work, the mid-priority task would be allowed to continue, and after both the high-priority task and the mid-priority task complete their work, the low-priority task would be allowed to continue running. This situation might be called "nested preemption."


However, only one task is in the running state (i.e. holding CPU control) at any point in the execution. In multitasking systems a mechanism is needed for switching the processing context from one task to another; this is achieved by storing the currently running task's state in Kernel data and then loading the ready task's state. When CPU control passes from one task to another, the context of the to-be-suspended task is saved while the context of the to-be-executed task is retrieved. This process of saving the context of a task being suspended and restoring the context of a task being resumed is called Context Switching. Consider a system with two task threads, A and B: while task B is running, the stack pointer and programme counter are set for task B's context. Upon a context switch from task B to task A, task B's state, which includes the contents of the registers, programme counter, and stack pointer, is saved, and that of task A is simply loaded from its Task Control Block by the Kernel.
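In pseudo-C (the register set, the TCB layout and the two helpers are illustrative; on real hardware the save and restore are a few lines of assembly), the switch from task B to task A looks like:

    #include <stdint.h>

    /* Illustrative CPU context: the registers, programme counter and
       stack pointer mentioned above. */
    typedef struct {
        uint32_t registers[13];
        uint32_t sp;    /* stack pointer     */
        uint32_t pc;    /* programme counter */
    } cpu_context_t;

    typedef struct {
        cpu_context_t context;
        /* ... the rest of the task control block ... */
    } tcb_t;

    /* Assumed to be implemented in assembly for the target CPU. */
    extern void save_cpu_context(cpu_context_t *ctx);
    extern void restore_cpu_context(const cpu_context_t *ctx);

    /* Hypothetical sketch of the Kernel's switch: save the context of
       the task being suspended, restore that of the task being resumed. */
    void context_switch(tcb_t *from /* task B */, tcb_t *to /* task A */)
    {
        save_cpu_context(&from->context);   /* B's state into its TCB  */
        restore_cpu_context(&to->context);  /* A's state from its TCB  */
        /* execution now resumes in task A where it last left off */
    }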


Continues...
