Real-Time Operating Systems

Real-Time System Concepts




Real-time System
  • A real-time system is one whose correctness depends both on the logical correctness of its outputs and on their timeliness.
  • Real-time means computing with a deadline.
  • In real-time computing, a late answer is a wrong answer.
  • Real-time does not mean fast; it means fast enough to meet the deadlines of the application.


Hard Real-time (HRT) Systems




  • Hard real-time (HRT) systems are a subclass of RT systems in which missing a deadline has catastrophic results for the system.
  • The cost of missing a deadline is infinitely high.
  • Late results are useless.
  • Examples:
    • Automobile air-bag deployment


Soft Real-Time (SRT) Systems




  •  Soft real-time (SRT) systems are a subclass of RT systems in which deadlines may occasionally be missed and the system can recover; the resulting reduction in quality is acceptable.
  •  Examples:
    •  Vending machine
    •  Real-time video
    •  Telecommunications


Real-Time Design Approaches




There are two primary techniques used in real-time designs




  •  Super-loop (grand-loop) technique
    •  Employed in dedicated applications
    •  Control code is an infinite loop that may poll several devices
    •  Used when the worst-case time around the main processing loop, combined with worst-case interrupt latencies, is shorter than the tightest system deadline
    •  Suitable development approach for a small programming team
    •  Extremely difficult to guarantee that the deadlines will be met
    •  CPU is 100% utilized even when the system is idle (see the sketch below)
  • Multitasking
    • Also called pseudo-parallelism
    • Several threads of control appear to run at once
    •  The threads of control share the same CPU and communicate with each other
    •  The CPU is shared between the threads of control using a scheduling policy implemented by the kernel
    •  Scheduling policies can be cooperative, priority-based, or round-robin
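
As a concrete illustration of the super-loop technique, the C sketch below polls two imaginary devices in an endless loop; all the device helpers are hypothetical stubs, not part of any real driver. A multitasking design replaces this single loop with one task per activity, each blocking on its own event.

    /* Minimal super-loop sketch; all device helpers are hypothetical stubs. */
    #include <stdbool.h>

    static bool sensor_ready(void)   { return false; }  /* stub: poll device 1   */
    static void process_sensor(void) { }                /* stub: handle device 1 */
    static bool uart_has_byte(void)  { return false; }  /* stub: poll device 2   */
    static void process_uart(void)   { }                /* stub: handle device 2 */

    int main(void)
    {
        for (;;) {                       /* never blocks: CPU is 100% utilized   */
            if (sensor_ready())
                process_sensor();
            if (uart_has_byte())
                process_uart();
            /* worst-case time around this loop, plus worst-case interrupt      */
            /* latency, must be shorter than the tightest deadline              */
        }
    }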






Real-Time Operating Systems






What is an RTOS?




  •  A Real-Time Operating System is software that allows a program to:
    •  React in a deterministic way to external events
    •  Communicate with peripherals and other tasks
    •  Share the CPU and resources between competing threads of execution in a predictable way
  • An RTOS is a building block for an RT system
  • It has bounded behavior under all system load scenarios


Multitasking Revisited
  •  Multitasking is the process of scheduling and switching the CPU between several tasks
  •  Maximizes the utilization of the CPU
  •  Facilitates modular construction of applications
  •  Simplifies the design of application programs


Task


  • A task (thread) is a simple program that thinks it has the CPU all to itself.
  • A Real-Time application consists of several tasks executing concurrently.
  • Each task is assigned a priority, its own set of CPU registers (context), and its own stack area.




Kernel


  • Definition
    • The kernel is the part of the multitasking system responsible for the management of tasks and for communication between tasks.
  • Services
    • Scheduling
    • Inter-task communication services (semaphore management, mailboxes, queues, time delays, etc.)


Scheduler


  • The scheduler is the part of the kernel responsible for determining which task will run next.
  • Most real-time kernels are priority-based.
  • Each task is assigned a priority based on its importance.
  • The priority of each task is application-specific.
  • Control is always given to the highest-priority task that is ready to run.


Context Switch


  • Occurs when a multitasking kernel decides to run a different task
  • Steps involved in a context switch (sketched below):
    1. Save the current task's context (CPU registers) in the current task's context storage area (its stack)
    2. Restore the new task's context from its storage area (its stack)
    3. Resume execution of the new task's code
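
The sketch below shows, in illustrative C, the kind of per-task record (task control block) a kernel keeps to make those three steps possible; the field names are made up, and the actual register save/restore is written in port-specific assembly.

    /* Illustrative task control block (TCB); names and layout are hypothetical. */
    typedef struct tcb {
        void         *sp;        /* saved stack pointer; the CPU registers are   */
                                 /* pushed onto the task's own stack at step 1   */
        unsigned int  priority;  /* used by the scheduler to pick the next task  */
        struct tcb   *next;      /* link in the kernel's ready list              */
    } tcb_t;

    /* Conceptual context switch (real kernels do this in assembly):
     *   1. push the current registers onto the running task's stack, save SP in its TCB
     *   2. load the new task's SP from its TCB and pop its registers from its stack
     *   3. the final return-from-switch instruction resumes the new task's code
     */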


Multiple Tasks





Reentrancy


  • A reentrant function can be interrupted at any time and resumed at a later time without loss of data.
  • It either uses local variables or protects its data when global variables are used.
  • It can be used by more than one task without fear of data corruption, as the example below illustrates.
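
The contrast below is a small illustration (the function names are made up): the first version shares a static buffer between all callers and is therefore not reentrant; the second keeps its data on the caller's stack.

    #include <stdio.h>

    static char shared_buf[16];              /* shared by every caller            */

    /* NOT reentrant: two tasks calling this can corrupt each other's result.    */
    char *u32_to_hex_bad(unsigned int v)
    {
        sprintf(shared_buf, "%08X", v);
        return shared_buf;
    }

    /* Reentrant: uses only its arguments and local (stack) storage.             */
    void u32_to_hex_ok(unsigned int v, char *out, size_t len)
    {
        snprintf(out, len, "%08X", v);
    }
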
Critical Section & Mutual Exclusion


  • A critical section is code that must execute indivisibly.
  • Mutual exclusion is used to ensure exclusive access to a shared resource without data corruption.
  • Common methods are (see the sketch below):
    • Disabling interrupts
    • Disabling scheduling
    • Using mutexes (semaphores)
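
A sketch of the first method, with hypothetical disable/enable calls standing in for whatever the kernel or CPU port provides; mutexes and scheduler locking follow the same bracket-the-code pattern.

    /* Hypothetical primitives, stubbed so the sketch is self-contained;         */
    /* a real port maps them to CPU instructions or kernel calls.                */
    static void disable_interrupts(void) { /* e.g. mask interrupts here   */ }
    static void enable_interrupts(void)  { /* e.g. unmask interrupts here */ }

    static volatile long shared_counter;     /* resource shared with an ISR      */

    void increment_from_task(void)
    {
        disable_interrupts();                /* enter the critical section       */
        shared_counter++;                    /* read-modify-write, indivisible   */
        enable_interrupts();                 /* leave the critical section       */
    }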


Non-Preemptive Kernel


  • Requires that each task explicitly does something to give up control of the CPU
  • Also called cooperative multitasking
  • An ISR always returns to the interrupted task
  • Advantages
    • Low interrupt latency
    • Non-reentrant functions can be used
  • Disadvantages
    • Responsiveness is very low, since a higher-priority task may have to wait for the running task to give up the CPU
  • Very few real-time kernels are non-preemptive






Preemptive Kernel


  • It is used when system responsiveness is important.
  • The highest-priority task ready to run is always given control of the CPU.
  • Most real-time kernels are preemptive.
  • Application code using a preemptive kernel should not use non-reentrant functions, or an appropriate mutual exclusion method should be applied to prevent data corruption.

 Task States


Synchronization


Commonly used synchronization primitives are
  • Mutexes
    • The operations are
      • MutexLock()
      • MutexUnlock()
  • Semaphores
    • The operations are
      • Signal()
      • Wait()
Semaphore


  • The term is borrowed from railways.
  • A semaphore is a flag that is set or reset by the RTOS at the request of a task.
  • If a task requests to set a semaphore that is already set, the task is blocked.
  • When the semaphore is later reset, the blocked task is resumed.
  • Question: why can't we use a plain global flag for the same purpose?


Reentrancy and semaphores


  • Routines that use global resources can be made reentrant by taking a semaphore while they use them (see the sketch below)
  • The segment of code during which the semaphore is held is called the critical section
  • The discipline of taking the semaphore around every critical section is self-imposed...
  • ...and is therefore an area of potential danger
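
A sketch of that discipline, using POSIX semaphore calls as a stand-in for whatever wait/signal pair the kernel offers; the buffer and function names are hypothetical. Forgetting the sem_wait() in even one caller defeats the protection, which is exactly the danger noted above.

    #include <semaphore.h>
    #include <string.h>

    static sem_t buf_sem;                    /* binary semaphore guarding print_buf */
    static char  print_buf[64];              /* global resource shared by tasks     */

    void shared_print_init(void) { sem_init(&buf_sem, 0, 1); }

    void shared_print(const char *msg)
    {
        sem_wait(&buf_sem);                              /* --- critical section -- */
        strncpy(print_buf, msg, sizeof print_buf - 1);   /* exclusive use of the    */
        print_buf[sizeof print_buf - 1] = '\0';          /* shared buffer           */
        sem_post(&buf_sem);                              /* --- end of section ---- */
    }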


Types of semaphores


  • BINARY – used as a flag
  • COUNTING – can take non-negative integer values
  • RESOURCE – only the owner (the task that took it) can release it
  • MUTEX – handles priority inversion automatically (typically via priority inheritance)
Calling conventions


  • Raise and lower
  • Give and take
  • Pend and post
  • P and V
  • Wait and signal
Problems


  • Forgetting to initialise
  • Forgetting to take
  • Taking the wrong semaphore
  • Forgetting to release
  • Holding on too long
  • Priority inversion
  • Deadly embrace (deadlock), as sketched below
  • Avoid semaphores if you can
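
The "deadly embrace" is the classic deadlock. The hypothetical tasks below take two semaphores in opposite orders, so each can end up waiting forever for the other; agreeing on a single locking order across all tasks avoids it.

    #include <semaphore.h>

    static sem_t sem_a, sem_b;          /* both initialised to 1 elsewhere        */

    void task_1(void)                   /* takes A, then B                        */
    {
        sem_wait(&sem_a);
        sem_wait(&sem_b);               /* blocks forever if task_2 holds B       */
        /* ... use both resources ... */
        sem_post(&sem_b);
        sem_post(&sem_a);
    }

    void task_2(void)                   /* takes B, then A: the opposite order    */
    {
        sem_wait(&sem_b);
        sem_wait(&sem_a);               /* blocks forever if task_1 holds A       */
        /* ... use both resources ... */
        sem_post(&sem_a);
        sem_post(&sem_b);
    }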


Inter-process Communication


Common Inter-process Communication methods are


  • Events
  • Mailboxes
  • Message queues
  • Pipes
  • Timer functions
  • Memory management
  • Interrupt routines






Events

  • A one-bit flag used for signalling
  • Set or reset through event-flag operations
  • Event flags are grouped together (into event groups)
  • Multiple tasks can wait on an event
  • or on a combination of events
  • May be considered an extension of the semaphore concept








Mailboxes


  • A mailbox is a software abstraction through which a task can leave a mail (message) meant for another task
  • A task can wait for a mail to arrive
  • If the mailbox is empty, the OS blocks the waiting task
  • and wakes it up when a mail arrives
  • Posting a mail is accomplished by placing a pointer into the mailbox (see the sketch below)
  • If the mailbox is full, the writing task is blocked
  • It is woken up when another task reads a mail, thereby making space in the mailbox
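
A sketch of the pointer-passing idea with a hypothetical mailbox API; real kernels provide equivalents (uC/OS-II, for example, has OSMboxPost() and OSMboxPend()).

    /* Hypothetical mailbox API; a real kernel supplies its own versions.        */
    typedef struct { void *msg; } mbox_t;
    int   mbox_post(mbox_t *mb, void *msg);      /* blocks if the mailbox is full  */
    void *mbox_pend(mbox_t *mb);                 /* blocks if the mailbox is empty */

    typedef struct { int sensor_id; int value; } reading_t;

    static mbox_t    sensor_mbox;
    static reading_t latest;                     /* storage owned by the sender    */

    void producer_task(void)
    {
        latest.sensor_id = 1;
        latest.value     = 42;
        mbox_post(&sensor_mbox, &latest);        /* only a pointer goes in the box */
    }

    void consumer_task(void)
    {
        reading_t *r = mbox_pend(&sensor_mbox);  /* blocks until mail arrives      */
        (void)r;                                 /* ... process *r here ...        */
    }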






Queue

  • Similar to a mailbox, but can hold many entries
  • The writing task adds entries at the head of the queue; reading tasks remove entries from its tail
  • If the queue is empty, the OS blocks the reading task
  • and wakes it up when an entry arrives
  • If the queue is full, the writing task gets an error condition, or may be blocked
  • Normally, a queue entry is a pointer to the queued item

Pipe

  • Similar to the Unix pipe concept (see the sketch below)
  • Multiple tasks can write to a pipe, and the writes go out as one concatenated stream of bytes
  • Unlike a mailbox or queue, the actual data is normally written, not pointers to it
  • The OS ensures that each write is an atomic operation
  • Reading is from the other end of the pipe; the reading task is blocked if there are not enough bytes to read
  • and unblocked when they arrive
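
Because the idea mirrors the Unix pipe, a plain POSIX pipe shows the byte-stream behaviour; an RTOS pipe API will differ in call names but not in spirit.

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int  fd[2];                               /* fd[0] = read end, fd[1] = write end */
        char buf[16];

        if (pipe(fd) != 0)
            return 1;

        write(fd[1], "abc", 3);                   /* writers' bytes are concatenated     */
        write(fd[1], "def", 3);                   /* into one stream; each write atomic  */

        ssize_t n = read(fd[0], buf, sizeof buf); /* a reader would block if empty       */
        printf("read %d bytes: %.*s\n", (int)n, (int)n, buf);
        return 0;
    }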

What to use ?


  • A semaphore is the fastest and simplest
  • Use event flags if you want to wait for multiple conditions
  • Use a mailbox if you want to send a pointer to an information-bearing structure to a task
  • or a queue when you have many such structures
  • and a pipe if you do not want shared-data problems on the memory structures you pass around
  • There is no hard and fast rule about what to use when





Timer functions


  • Delay or sleep
  • Normally based on system ticks
  • Conditional sleep gets embedded in other calls, such as waiting for a mail to arrive with a timeout
  • Another facility is a timer abstraction that calls a user-defined function periodically
  • Low-priority polling work, such as a man-machine interface, is best left to such a periodic task (see the sketch below)
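
A sketch of a low-priority polling task built on a tick-based delay; task_delay_ticks() and scan_keypad() are hypothetical stand-ins for the kernel's sleep call (OSTimeDly() in uC/OS-II, for instance) and the application's polling work.

    /* Hypothetical kernel delay and application poll routine.                   */
    void task_delay_ticks(unsigned int ticks);
    void scan_keypad(void);

    void keypad_poll_task(void)                /* low-priority man-machine task   */
    {
        for (;;) {
            scan_keypad();                     /* do the periodic polling work    */
            task_delay_ticks(5);               /* sleep 5 system ticks, yielding  */
                                               /* the CPU to higher-priority work */
        }
    }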



Memory management


  • malloc() and free() are an option...
  • but this is too fine-grained for a real-time system, and its timing is not deterministic
  • Fixed-size pools are easier to manage (see the sketch below)
  • The RTOS has to be told where memory is available and how much
  • Essentially, you have to manage your own memory;
  • the RTOS will only help you do it
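
The sketch below shows why a fixed-size pool is easy to manage: free blocks live on a singly linked list, so getting and returning a block is a couple of pointer moves (constant time). Sizes and names are illustrative, and a real pool would also guard the list with a critical section.

    #include <stddef.h>

    #define BLOCK_SIZE  32
    #define NUM_BLOCKS  16

    typedef union block {
        union block  *next;                   /* link while the block is free     */
        unsigned char data[BLOCK_SIZE];       /* payload while the block is used  */
    } block_t;

    static block_t  pool[NUM_BLOCKS];
    static block_t *free_list;

    void pool_init(void)                      /* thread every block onto the list */
    {
        free_list = NULL;
        for (int i = 0; i < NUM_BLOCKS; i++) {
            pool[i].next = free_list;
            free_list    = &pool[i];
        }
    }

    void *pool_get(void)                      /* O(1): pop the list head          */
    {
        block_t *blk = free_list;
        if (blk != NULL)
            free_list = blk->next;
        return blk;
    }

    void pool_put(void *p)                    /* O(1): push back onto the head    */
    {
        block_t *blk = p;
        blk->next = free_list;
        free_list = blk;
    }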

Interrupt routines


  • Normally, interrupt routines become part of the RTOS kernel
  • For this to work, the RTOS needs to be recompiled whenever an interrupt routine is modified or a new one is added
  • Alternatively, an RTOS can allow user-specified interrupt routines
  • This is more complex
  • and more difficult to debug,
  • but it removes the bother of recompiling the kernel

Interrupt Latency

  • Interrupt latency is the time between the reception of an interrupt and the execution of the first instruction in the ISR.
  • Interrupt latency is a metric of system response to an external event.
  • The longer interrupts are disabled, the higher the interrupt latency.
  • Interrupt latency = maximum amount of time interrupts are disabled + time to start executing the first instruction in the ISR

Interrupt Response and Recovery

  • Interrupt response is defined as the time between the reception of the interrupt and the start of the user code that handles the interrupt.
  • Interrupt response = interrupt latency + time to save the CPU context + execution time of the kernel ISR entry function
  • Interrupt recovery is defined as the time required for the processor to return to the interrupted code (or to a higher-priority task made ready by the ISR).
  • Interrupt recovery = time to determine whether a higher-priority task is ready + time to restore the CPU context of the highest-priority task + time to execute the return-from-interrupt instruction (see the worked example below)
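
As a purely illustrative example with made-up numbers: if interrupts are never disabled for more than 10 µs and the processor needs 2 µs to vector to the ISR, the latency is at most 10 + 2 = 12 µs. If saving the CPU context takes 3 µs and the kernel's ISR entry function runs for 5 µs, the response time is 12 + 3 + 5 = 20 µs. If deciding whether a higher-priority task is ready takes 4 µs, restoring a context 3 µs, and the return-from-interrupt instruction 1 µs, the recovery time is 4 + 3 + 1 = 8 µs.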





Interrupt Latency, Response and Recovery















 Application Programs

  • An application program consists of three components:
  • Tasks
  • I/O Device Drivers
  • Interrupt Service Routines











Decomposition Criteria


  • Given a task with actions A and B, if any of the following criteria are satisfied, actions A and B should be placed in separate tasks:
  • Time - actions A and B are dependent on cyclical conditions that have different frequencies or phases.
  • Asynchrony - actions A and B are dependent on conditions that have no temporal relationship to each other.
  • Priority - actions A and B are dependent on conditions that require a different priority of attention.
  • Clarity/Maintainability - actions A and B are either functionally or logically removed from each other.



Scheduling Algorithms





  • Prioritized preemptive
    • The highest-priority task ready to run is always given control of the CPU. When a task makes a higher-priority task ready to run, the current task is preempted and the higher-priority task is immediately given control of the CPU.
  • Round-robin scheduling
    • When two or more tasks have the same priority, the kernel allows one task to run for a predetermined amount of time, called a quantum, and then selects another task (see the sketch below).
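
A sketch of how the two policies combine in one scheduler: scan the priority levels from highest to lowest, and within the first non-empty level rotate a cursor so equal-priority tasks take turns. The data structures are illustrative, not taken from any particular kernel.

    #define NUM_PRIOS  8                          /* 0 = highest priority             */
    #define MAX_TASKS  4                          /* slots per priority level         */

    static int ready[NUM_PRIOS][MAX_TASKS];       /* task IDs; fill unused slots with */
                                                  /* -1 at initialisation             */
    static int rr_cursor[NUM_PRIOS];              /* round-robin cursor per level     */

    int pick_next_task(void)
    {
        for (int p = 0; p < NUM_PRIOS; p++) {              /* highest priority first  */
            for (int k = 0; k < MAX_TASKS; k++) {
                int slot = (rr_cursor[p] + k) % MAX_TASKS;
                if (ready[p][slot] >= 0) {
                    rr_cursor[p] = (slot + 1) % MAX_TASKS; /* next quantum rotates    */
                    return ready[p][slot];
                }
            }
        }
        return -1;                                         /* nothing ready: idle     */
    }
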
Task Priorities


  • A priority is assigned to each task: the more important the task, the higher the priority given to it.
  • Static priorities: task priorities are said to be static when the priority of each task does not change during the application's execution time.
  • Dynamic priorities: task priorities are said to be dynamic if the priority of a task can be changed during the application's execution time.


Priority Inversion


  • Priority inversion is a problem in real-time systems and occurs mostly when a real-time kernel is used.
  • A priority inversion occurs when a low-priority task causes the execution of a higher-priority task to be delayed.
  • If the task holding a lock has a lower priority than the task attempting to acquire that lock, the lower-priority task is delaying a higher-priority one, and a priority inversion has occurred.
  • Unbounded priority inversion occurs when the lower-priority task is itself preempted by a medium-priority task, so the high-priority task can be delayed for an arbitrarily long time.














Priority Inheritance


  • Priority inheritance offers a solution to unbounded priority inversion.
  • The basic idea of priority inheritance is to provide a dynamic calculation of the ceiling priority.
  • When a task blocks on a resource owned by a lower-priority task, the lower-priority task inherits the priority of the blocked task and continues (see the sketch below).
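
On kernels with a POSIX interface (QNX and RT-Linux among the systems discussed later), priority inheritance is typically requested when a mutex is created. The sketch below uses the standard pthread calls and assumes the platform defines the PTHREAD_PRIO_INHERIT protocol.

    #include <pthread.h>

    static pthread_mutex_t res_lock;

    int init_resource_lock(void)
    {
        pthread_mutexattr_t attr;

        pthread_mutexattr_init(&attr);
        /* While a task owns res_lock, it temporarily inherits the priority of   */
        /* any higher-priority task blocked on it, bounding the inversion.       */
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
        return pthread_mutex_init(&res_lock, &attr);
    }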




Advantages of RT Kernels

  • An RTOS allows real-time applications to be designed and expanded easily.
  • It simplifies the design process of RT systems.
  • Time-critical events are handled as quickly and efficiently as possible.
  • It provides valuable services such as semaphores, mailboxes, queues, time delays, timeouts, etc.



Disadvantages of RT Kernels


  • Extra cost for the kernel
  • More RAM/ROM
  • 2 to 4 percent additional CPU overhead





Overview of modern RTOSs





  1. MicroC/OS-II
  2. QNX
  3. pSOS
  4. RT-Linux






MicroC/OS-II

  • uC/OS-II is a portable, ROMable, scalable, preemptive, multitasking kernel for real-time systems.
  • The execution times of all uC/OS-II functions and services are deterministic.
  • uC/OS-II provides a number of services such as mailboxes, queues, semaphores, fixed-size memory partitions, and time-related functions (see the sketch below).
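
A minimal uC/OS-II-style sketch of creating one task and starting the kernel; exact header names, stack direction, and the tick constant depend on the port and on os_cfg.h, so treat the details as illustrative.

    #include "ucos_ii.h"                       /* uC/OS-II header (port-specific)   */

    #define TASK_STK_SIZE  128
    static OS_STK blink_stk[TASK_STK_SIZE];

    static void blink_task(void *pdata)
    {
        (void)pdata;
        for (;;) {
            /* toggle_led();  -- hypothetical board-specific call                  */
            OSTimeDly(OS_TICKS_PER_SEC / 2);   /* sleep for half a second of ticks  */
        }
    }

    int main(void)
    {
        OSInit();                              /* initialize the kernel             */
        /* Most ports grow the stack downward, so the top-of-stack is passed.      */
        OSTaskCreate(blink_task, (void *)0, &blink_stk[TASK_STK_SIZE - 1], 10);
        OSStart();                             /* start multitasking; never returns */
        return 0;
    }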


Task Handling Method







Memory Management

  • uC/OS-II supports only fixed-block memory management.
  • An application can request fixed-size memory blocks from a partition made of a contiguous memory area.
  • Allocation and deallocation of these memory blocks are done in constant time and are deterministic.

Inter-Process Communication

  • MicroC/OS-II supports the following IPC primitives:
  • Mailboxes
  • Message Queues
  • Semaphores
  • Event Flags

Interrupt Handling Method

  • Interrupts can suspend the execution of a task.
  • If a higher-priority task is awakened as a result of the interrupt, that task will run as soon as all nested interrupts complete.
  • Interrupts can be nested up to 255 levels deep.








QNX

  • QNX is a POSIX-compliant operating system for embedded and real-time applications.
  • QNX has a microkernel architecture.
  • QNX consists of the small Neutrino microkernel managing a group of cooperating processes.
  • QNX is a message-based operating system.
  • QNX uses a process-thread model for task handling.


QNX Architecture












Neutrino Microkernel

  • Neutrino is a microkernel implementation of core POSIX features along with fundamental QNX message-passing services.
  • Neutrino provides the following services:
  • Threads, message passing, signals, clocks, timers, interrupt handlers, semaphores, mutual-exclusion locks, condition variables, barriers.
  • Neutrino is fully preemptible, even while passing messages between processes.
  • QNX has Symmetric Multiprocessing (SMP) support.






Task handling model









Memory Management

  • Every process has its own virtual memory space
  • Virtual memory is supported by the paging mechanism of the processor
  • Virtual memory protects processes from each other, which enhances the robustness of the system
  • Swapping is supported but can be disabled

Inter-Process Communication

  • QNX supports the following IPC primitives (sketched below):
  • Semaphores
  • Mutexes
  • Condition variables
  • Barriers
  • Shared memory
  • FIFOs
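
QNX exposes these through the standard POSIX interfaces, so a mutex/condition-variable pair sketches typical usage; the have_data flag is a hypothetical piece of application state.

    #include <pthread.h>

    static pthread_mutex_t lock       = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  data_ready = PTHREAD_COND_INITIALIZER;
    static int             have_data;               /* hypothetical shared state  */

    void *consumer_thread(void *arg)
    {
        pthread_mutex_lock(&lock);
        while (!have_data)                           /* wait atomically releases   */
            pthread_cond_wait(&data_ready, &lock);   /* the mutex and re-takes it  */
        /* ... consume the data ... */
        have_data = 0;
        pthread_mutex_unlock(&lock);
        return arg;
    }

    void producer_signal(void)
    {
        pthread_mutex_lock(&lock);
        have_data = 1;
        pthread_cond_signal(&data_ready);            /* wake one waiting thread    */
        pthread_mutex_unlock(&lock);
    }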

Interrupt Handling Method

  • An interrupt redirector in the microkernel handles interrupts in their initial stage.
  • User-level threads (usually resource managers) can attach an ISR to a hardware interrupt number.
  • Interrupt sharing is supported by attaching more than one ISR to a hardware interrupt number.
  • ISRs can send pulses and signals to threads that are waiting on the interrupt.






pSOS


  • pSOS is a modular, high-performance real-time operating system designed specifically for embedded microprocessors.
  • pSOS is built around the pSOS+ multitasking kernel and a collection of companion software components.
  • pSOS System Environment









Task handling model

 









Memory Management


  • pSOS has a flat memory space
  • An MMU is supported, but it is optional
  • No virtual memory/swapping support
  • No memory protection for tasks
  • Provides two sets of memory-handling functions:
  • Regions, which are variable-size memory pools
  • Partitions, which are fixed-size memory pools


Inter-Process Communication

  • pSOS supports the following IPC primitives:
  • Message Queues
  • Events
  • Semaphores
  • Asynchronous Signals








Interrupt Handling Method

  • Interrupt handling is nested and prioritized.
  • The interrupt handler uses the kernel or interrupt stack (depending on the target).
  • Most synchronization and communication objects can be used for communication between ISRs and tasks.



Interrupt Latencies

















RT-Linux

  • RT-Linux is an operating system in which a small real-time kernel coexists with the POSIX-like Linux kernel.
  • A simple real-time executive (the RT kernel) runs the non-real-time kernel (Linux) as its lowest-priority task.
  • Real-Time Linux is a version of Linux that provides hard real-time capability.
  • It extends the standard UNIX programming environment to real-time problems.






RT-Linux Layer Architecture














Task Handling Method












Memory Management

  • RT-Linux doesn't support memory protection for real-time tasks
  • Virtual memory/swapping is supported on the Linux side
  • No dynamic memory allocation is available to real-time tasks



Inter-Process Communication

  • RT-Linux supports the following IPC primitives:
    • Mutexes
    • Shared Memory
    • FIFOs

