Unit - 6
Study of commercial RTOS
Q1. Write a short note on the architecture of RTLinux?
A 1)
1. RTLinux is a hard real-time operating system (RTOS) microkernel that runs the entire Linux operating system as a fully preemptive process.
2. The hard real-time property makes it possible to control robots, data acquisition systems, manufacturing plants, and other time-sensitive instruments and machines from RTLinux applications. Despite the similar name, it is not related to the Real-Time Linux project of the Linux Foundation.
3. RTLinux was developed by Victor Yodaiken, Michael Barabanov, Cort Dougan and others at the New Mexico Institute of Mining and Technology and then as a commercial product at FSMLabs.
4. Wind River Systems acquired FSMLabs embedded technology in February 2007 and made a version available as Wind River Real-Time Core for Wind River Linux. As of August 2011, Wind River has discontinued the Wind River Real-Time Core product line, effectively ending commercial support for the RTLinux product.
Q2. Explain the implementation and objectives of RTLinux?
A 2)
Implementation
1. RTLinux provides the capability of running special real-time tasks and interrupt handlers on the same machine as standard Linux. These tasks and handlers execute when they need to execute no matter what Linux is doing.
2. The worst case time between the moment a hardware interrupt is detected by the processor and the moment an interrupt handler starts to execute is under 15 microseconds on RTLinux running on a generic x86 (circa 2000).
3. An RTLinux periodic task runs within 35 microseconds of its scheduled time on the same hardware. These times are hardware-limited, and as hardware improves RTLinux will also improve.
4. Standard Linux has excellent average performance and can even provide millisecond level scheduling precision for tasks using the POSIX soft real-time capabilities. Standard Linux is not, however, designed to provide sub-millisecond precision and reliable timing guarantees.
5. RTLinux was based on a lightweight virtual machine where the Linux "guest" was given a virtualized interrupt controller and timer, and all other hardware access was direct.
Objective
1. The key RTLinux design objective is that the system should be transparent, modular, and extensible. Transparency means that there are no unopenable black boxes and the cost of any operation should be determinable.
2. Modularity means that it is possible to omit functionality and the expense of that functionality if it is not needed. And extensibility means that programmers should be able to add modules and tailor the system to their requirements.
3. The base RTLinux system supports high-speed interrupt handling and no more. It has a simple priority scheduler that can easily be replaced by schedulers more suited to the needs of a specific application.
Q3. Explain the functionality of RTLinux in short?
A 3)
Functionality
1. The majority of RTLinux functionality is in a collection of loadable kernel modules that provide optional services and levels of abstraction. These modules include:
1.1 rtl_sched - a priority scheduler that supports both a "lite POSIX" interface and the original V1 RTLinux API.
1.2 rtl_time - controls the processor clocks and exports an abstract interface for connecting handlers to clocks.
1.3 rtl_posixio - supports a POSIX-style read/write/open interface to device drivers.
1.4 rtl_fifo - connects RT tasks and interrupt handlers to Linux processes through a device layer so that Linux processes can read/write to RT components.
1.5 semaphore - a contributed package by Jerry Epplin which gives RT tasks blocking semaphores.
1.6 POSIX mutex support is planned to be available in the next minor version update of RTLinux.
1.7 mbuff - a contributed package written by Tomasz Motylewski that provides shared memory between RT components and Linux processes.
Q4. What is scheduling? Explain in short.
A 4)
Schedulers
1. As we know, the illusion that all the tasks are running concurrently is achieved by allowing each to have a share of the processor time. This is the core functionality of a kernel.
2. The way that time is allocated between tasks is termed “scheduling”. The scheduler is the software that determines which task should be run next.
3. The logic of the scheduler and the mechanism that determines when it should be run is the scheduling algorithm.
4. We will look at a number of scheduling algorithms in this section. Task scheduling is actually a vast subject, with many whole books devoted to it.
Q5. State and explain the types of schedulers in detail?
A 5)
1. Run to Completion (RTC) Scheduler
RTC scheduling is very simplistic and uses minimal resources. It is, therefore, an ideal choice if it fulfils the application's needs.
The scheduler simply calls the top level function of each task in turn. That task has control of the CPU (interrupts aside) until the top level function executes a return statement. If the RTOS supports task suspension, then any tasks that are currently suspended are not run.
The big advantages of an RTC scheduler, aside from its simplicity, are the need for just a single stack and the portability of the code (as no assembly language is generally required).
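As an illustration, a minimal RTC scheduler can be nothing more than an endless loop calling each task's top level function in turn (the task names here are hypothetical placeholders):

```c
#include <stdbool.h>

/* Each task runs to completion and returns control to the scheduler. */
static void task_read_sensors(void)   { /* poll inputs */ }
static void task_update_control(void) { /* compute outputs */ }
static void task_log_status(void)     { /* write diagnostics */ }

int main(void)
{
    /* The RTC "scheduler" is simply an endless super-loop. */
    while (true) {
        task_read_sensors();
        task_update_control();
        task_log_status();
    }
}
```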
2. Round Robin (RR) Scheduler
1. An RR scheduler is similar to RTC, but more flexible and, hence, more complex. In the same way, each task is run in turn (allowing for task suspension).
2. However, with the RR scheduler, the task does not need to execute a return in the top level function. It can relinquish the CPU at any time by making a call to the RTOS. This call results in the kernel saving the context (all the registers – including stack pointer and program counter) and loading the context of the next task to be run. With some RTOSes, the processor may be relinquished – and the task suspended – pending the availability of a kernel resource.
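To make "saving the context" concrete, here is a user-space sketch of cooperative round-robin switching using the POSIX ucontext API as a stand-in for the kernel's context-switch code (the task bodies, counts, and stack sizes are arbitrary assumptions):

```c
#include <stdio.h>
#include <ucontext.h>

#define NUM_TASKS  2
#define STACK_SIZE (64 * 1024)

static ucontext_t main_ctx, task_ctx[NUM_TASKS];
static char stacks[NUM_TASKS][STACK_SIZE];
static int current;

/* Relinquish the CPU: save this task's context, resume the scheduler. */
static void task_yield(void)
{
    swapcontext(&task_ctx[current], &main_ctx);
}

static void task_body(void)
{
    for (int i = 0; i < 3; i++) {
        printf("task %d, step %d\n", current, i);
        task_yield();               /* voluntary context switch */
    }
}

int main(void)
{
    for (int t = 0; t < NUM_TASKS; t++) {
        getcontext(&task_ctx[t]);
        task_ctx[t].uc_stack.ss_sp   = stacks[t];
        task_ctx[t].uc_stack.ss_size = STACK_SIZE;
        task_ctx[t].uc_link          = &main_ctx; /* return here on exit */
        makecontext(&task_ctx[t], task_body, 0);
    }
    /* Round-robin loop: give each task the CPU in turn. */
    for (int round = 0; round < 3; round++)
        for (current = 0; current < NUM_TASKS; current++)
            swapcontext(&main_ctx, &task_ctx[current]);
    return 0;
}
```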
3. Time Slice (TS) Scheduler
1. A TS scheduler is the next step in complexity from RR. Time is divided into "slots", with each task being allowed to execute for the duration of its slot.
2. In addition to being able to relinquish the CPU voluntarily, a task is preempted by a scheduler call made from a clock tick interrupt service routine. The idea of simply allocating each task a fixed time slice is very appealing – for applications where it fits the requirements – as it is easy to understand and very predictable.
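As a rough sketch of preemption at a slice boundary, with a stub standing in for the kernel's context switch (all names here are illustrative, not from any real RTOS API):

```c
#include <stdio.h>

#define SLICE_TICKS 10              /* slot length: 10 clock ticks */

static volatile int ticks_left = SLICE_TICKS;

/* Stub for the kernel's context switch: a real RTOS would save the
   current task's registers and dispatch the next ready task here. */
static void sched_switch(void)
{
    puts("slice expired: switching task");
}

/* Called on every clock tick (from the timer ISR in a real system). */
static void clock_tick_isr(void)
{
    if (--ticks_left <= 0) {
        ticks_left = SLICE_TICKS;   /* start the next slot  */
        sched_switch();             /* preempt current task */
    }
}

int main(void)
{
    for (int tick = 0; tick < 25; tick++)  /* simulate 25 timer ticks */
        clock_tick_isr();
    return 0;
}
```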
4. Priority Scheduler
1. Most RTOSes support Priority scheduling. The idea is simple: each task is allocated a priority and, at any particular time, whichever task has the highest priority and is “ready” is allocated the CPU, thus:
2. The scheduler is run when any "event" occurs (e.g. an interrupt or certain kernel service calls) that may cause a higher priority task to be made "ready". There are broadly three circumstances that might result in the scheduler being run:
3. The task suspends itself; clearly the scheduler is required to determine which task to run next.
4. The task readies another task (by means of an API call) of higher priority.
5. An interrupt service routine (ISR) readies another task of higher priority. This could be an input/output device ISR or it may be the result of the expiration of a timer (timers are supported by many RTOSes).
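As a minimal illustration of "highest-priority ready task wins", here is a sketch in C, assuming one task per priority level and a simple ready flag (the data structures are hypothetical):

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_PRIORITIES 8        /* 0 = highest priority */

static bool ready[NUM_PRIORITIES];

/* Return the highest-priority ready task, or -1 for the idle task. */
static int pick_next_task(void)
{
    for (int p = 0; p < NUM_PRIORITIES; p++)
        if (ready[p])
            return p;
    return -1;
}

int main(void)
{
    ready[5] = true;            /* a low-priority task becomes ready */
    ready[2] = true;            /* then a higher-priority one        */
    printf("run task at priority %d\n", pick_next_task()); /* 2 */
    return 0;
}
```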
5. Composite Scheduler
1. We have looked at RTC, RR, TS and Priority schedulers, but many commercial RTOS products offer more sophisticated schedulers, which have characteristics of more than one of these algorithms.
2. For example, an RTOS may support multiple tasks at each priority level and then use time slicing to divide time between multiple ready tasks at the highest level.
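For example, such a composite scheme might keep a ready list per priority level and rotate round-robin within the highest non-empty level; a rough sketch with hypothetical structures:

```c
#include <stdio.h>

#define NUM_PRIORITIES 4
#define MAX_PER_LEVEL  4

typedef struct {
    int count;                        /* ready tasks at this level      */
    int next;                         /* round-robin index within level */
    void (*task[MAX_PER_LEVEL])(void);
} ready_level_t;

static ready_level_t levels[NUM_PRIORITIES];

static void task_a(void) { puts("task A"); }
static void task_b(void) { puts("task B"); }

/* Each tick: run the next ready task at the highest non-empty level,
   time-slicing round-robin between equal-priority tasks. */
static void composite_schedule(void)
{
    for (int p = 0; p < NUM_PRIORITIES; p++) {
        if (levels[p].count > 0) {
            levels[p].task[levels[p].next]();
            levels[p].next = (levels[p].next + 1) % levels[p].count;
            return;
        }
    }
}

int main(void)
{
    /* Two ready tasks sharing priority level 1. */
    levels[1].count = 2;
    levels[1].task[0] = task_a;
    levels[1].task[1] = task_b;

    for (int tick = 0; tick < 4; tick++)   /* A B A B */
        composite_schedule();
    return 0;
}
```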
Q6. Explain memory management in detail?
A 6)
1. Memory management is the process by which a computer control system allocates a limited amount of physical memory among its various processes (or tasks) in a way that optimizes performance.
2. Actually, each process has its own private address space, initially divided into three logical segments: text, data, and stack. The text segment is read-only and contains the machine instructions of a program; the data and stack segments are both readable and writable.
3. The data segment contains the initialized and non-initialized data portions of a program, whereas the stack segment holds the application’s run-time stack. On most machines, this is extended automatically by the kernel as the process executes.
4. A process can change the size of its data segment by making a system call, whereas the size of its text segment changes only when its contents are overlaid with data from the file system, or when debugging takes place. The initial contents of the segments of a child process are duplicates of the segments of its parent (a short code sketch of these segments follows at the end of this answer).
5. Memory management is one of the most important subsystems of any operating system for computer control systems, and is even more critical in an RTOS than in standard operating systems.
6. Firstly, the speed of memory allocation is important in an RTOS. A standard memory allocation scheme scans a linked list of indeterminate length to find a free memory block; in an RTOS, however, memory allocation has to occur in a fixed time.
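Returning to the text, data and stack segments described above, here is a small C sketch of where different objects typically end up (exact placement varies by compiler and linker):

```c
#include <stdio.h>

int table[100] = {1, 2, 3};   /* data segment: initialized globals   */
int buffer[100];              /* non-initialized data (BSS)          */
const char banner[] = "v1";   /* read-only data, often grouped with
                                 the text segment                    */

int sum(int n)                /* the machine code itself lives in the
                                 text segment                        */
{
    int total = 0;            /* locals live on the run-time stack   */
    for (int i = 0; i < n; i++)
        total += table[i];
    return total;
}

int main(void)
{
    printf("%s: %d\n", banner, sum(3));   /* prints "v1: 6" */
    return 0;
}
```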
Q7. Write a short note on Virtual memory?
A 7)
1. The operating system uses virtual memory to manage the memory requirements of its processes by combining physical memory with secondary memory (swap space), usually located on a hard disk drive.
2. Diskless systems use a page server to maintain their swap areas on the server's disk.
3. The translation from virtual to physical addresses is implemented by a memory management unit (MMU), which may be either a module of the CPU, or an auxiliary, closely coupled chip.
Q8. Explain the concepts of paging, swapping and segmentation?
A 8)
Paging
1. Almost all implementations of virtual memory divide the virtual address space of an application program into pages; a page is a block of contiguous virtual memory addresses.
2. Here, the low-order bits of the binary representation of the virtual address are preserved, and used directly as the low-order bits of the actual physical address; the high-order bits are treated as a key to one or more address translation tables, which provide the high-order bits of the actual physical address.
3. For this reason, a range of consecutive addresses in the virtual address space, whose size is a power of two, will be translated to a corresponding range of consecutive physical addresses.
4. The memory referenced by such a range is called a page. The page size is typically in the range 512-8192 bytes (with 4 kB currently being very common), though 4 MB or even larger pages may be used for special purposes.
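As a concrete illustration of this address split, assuming 4 kB pages (so the low 12 bits form the offset):

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                          /* 4 kB page: 2^12 bytes */
#define PAGE_SIZE  (1u << PAGE_SHIFT)

int main(void)
{
    uint32_t vaddr  = 0x00403A7C;              /* example virtual address */
    uint32_t vpn    = vaddr >> PAGE_SHIFT;     /* high bits: page number  */
    uint32_t offset = vaddr & (PAGE_SIZE - 1); /* low bits: kept as-is    */

    /* The page table maps vpn to a physical frame number; the offset
       is reused unchanged in the physical address. */
    printf("page number 0x%x, offset 0x%x\n", vpn, offset);
    return 0;
}
```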
Swapping
1. Swap space is a portion of hard disk used for virtual memory that is usually a dedicated partition (i.e., a logically independent section of a hard disk drive), created during the installation of the operating system.
2. Such a partition is also referred to as a swap partition. However, swap space can also be a special file.
3. Although it is generally preferable to use a swap partition rather than a file, sometimes it is not practical to add or expand a partition when the amount of RAM is being increased. In such a case, a new swap file can be created and registered with the operating system as swap space by a system call.
Segmentation
1. Some operating systems do not use paging to implement virtual memory, but use segmentation instead. For an application process, segmentation divides its virtual address space into variable-length segments, so a virtual address consists of a segment number and an offset within the segment.
2. Memory is always physically addressed with a single number (called an absolute or linear address).
3. To obtain it, the microprocessor looks up the segment number in a table to find a segment descriptor. This contains a flag indicating whether the segment is present in main memory and, if so, the address of its starting point (segment’s base address) and its length.
4. The microprocessor checks whether the offset within the segment is less than the length of the segment and, if not, generates an interrupt. If a segment is not present in main memory, a hardware interrupt is raised to the operating system, which may try to read the segment into main memory, or to swap it in.
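A sketch of this descriptor lookup in C, with a hypothetical in-memory segment table and error returns standing in for hardware interrupts:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    bool     present;  /* is the segment in main memory?    */
    uint32_t base;     /* segment's starting (base) address */
    uint32_t limit;    /* segment length in bytes           */
} seg_desc_t;

#define NUM_SEGS 16
static seg_desc_t seg_table[NUM_SEGS];

/* Translate (segment, offset) to a linear address. A false return
   stands in for the hardware interrupt raised on a fault. */
static bool translate(uint32_t seg, uint32_t offset, uint32_t *linear)
{
    if (seg >= NUM_SEGS || !seg_table[seg].present)
        return false;                 /* segment fault: swap it in     */
    if (offset >= seg_table[seg].limit)
        return false;                 /* offset exceeds segment length */
    *linear = seg_table[seg].base + offset;
    return true;
}

int main(void)
{
    seg_table[3] = (seg_desc_t){ .present = true,
                                 .base = 0x8000, .limit = 0x1000 };
    uint32_t addr;
    if (translate(3, 0x20, &addr))
        printf("linear address 0x%x\n", addr);   /* 0x8020 */
    return 0;
}
```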
Q9. State how memory allocation and deallocation can be categorized according to need?
A 9)
(1) Static memory allocation
1. Static memory allocation refers to the process of allocating memory at compile-time, before execution. One way to use this technique involves a program module (e.g., function or subroutine) declaring static data locally, such that these data are inaccessible to other modules unless references to them are passed as parameters or returned.
2. A single copy of this static data is retained and is accessible through many calls to the function in which it is declared. Static memory allocation therefore has the advantage of modularizing data within a program so that it can be used throughout run-time.
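For example, in C a static local variable is allocated before execution, and a single copy persists across calls:

```c
#include <stdio.h>

/* The counter is statically allocated: one copy exists for the whole
   run of the program, visible only inside this function. */
static int next_id(void)
{
    static int counter = 0;   /* initialized once, before execution */
    return ++counter;
}

int main(void)
{
    printf("%d\n", next_id());  /* 1 */
    printf("%d\n", next_id());  /* 2 */
    return 0;
}
```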
(2) Dynamic memory allocation
1. Dynamic memory allocation is the allocation of memory storage for use during the run-time of a program, and is a way of distributing ownership of limited memory resources among many pieces of data and code.
2. A dynamically allocated object remains allocated until it is deallocated explicitly, either by the programmer or by a garbage collector; this is notably different from automatic and static memory allocation. It is said that such an object has dynamic lifetime.
3. Memory pools allow dynamic memory allocation comparable to malloc, or the operator "new" in C++. As those general-purpose implementations suffer from fragmentation because of their variable block sizes, it can be impossible to use them in a real-time system due to performance problems; fixed-size-block memory pools avoid this, as sketched below.
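A minimal sketch of such a fixed-size-block pool, chaining free blocks into a free list so that allocation and deallocation both take constant time (block size and count are arbitrary assumptions):

```c
#include <stddef.h>

#define BLOCK_SIZE 64           /* every block is the same size */
#define NUM_BLOCKS 32

/* A free block stores the free-list link in its own storage. */
typedef union block {
    union block  *next;               /* link while the block is free */
    unsigned char data[BLOCK_SIZE];   /* payload while allocated      */
} block_t;

static block_t pool[NUM_BLOCKS];
static block_t *free_list;

void pool_init(void)
{
    free_list = NULL;
    for (int i = 0; i < NUM_BLOCKS; i++) {
        pool[i].next = free_list;     /* push block onto the free list */
        free_list = &pool[i];
    }
}

/* O(1) allocation: pop the head of the free list. */
void *pool_alloc(void)
{
    block_t *b = free_list;
    if (b)
        free_list = b->next;
    return b;                         /* NULL if the pool is exhausted */
}

/* O(1) deallocation: push the block back onto the free list. */
void pool_free(void *p)
{
    block_t *b = p;
    b->next = free_list;
    free_list = b;
}

int main(void)
{
    pool_init();
    void *a = pool_alloc();
    void *b = pool_alloc();
    pool_free(a);                     /* fixed time, no fragmentation */
    pool_free(b);
    return 0;
}
```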
Q10. What is task synchronization? Explain 3 synchronization and messaging techniques in detail.
A 10)
1. Synchronization and messaging provide the necessary communication between tasks in a system.
2. Event flags are used to synchronize internal activities, while message queues and mailboxes are used to send messages between tasks.
3. Access to common data areas is controlled with semaphores. Below are the top 3 messaging and synchronization techniques.
3.1 Semaphores
1. These are independent kernel objects that are designed to offer flagging mechanisms required to control access to resources.
2. There are two types of semaphores: counting semaphores, which feature an arbitrary number of states, and binary semaphores, which feature two states.
3. Binary semaphores can be classified as counting semaphores that have a count limit of 1. Tasks attempt to obtain semaphores in order to access their required resources.
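As an illustration using POSIX semaphores (a binary semaphore, i.e. a count limit of 1, guarding a shared resource; compile with -pthread on Linux):

```c
#include <semaphore.h>
#include <stdio.h>

static sem_t resource_sem;      /* binary semaphore guarding a resource */
static int shared_counter;

static void use_resource(void)
{
    sem_wait(&resource_sem);    /* obtain: blocks if already taken */
    shared_counter++;           /* critical section                */
    sem_post(&resource_sem);    /* release for other tasks         */
}

int main(void)
{
    sem_init(&resource_sem, 0, 1);  /* initial count 1 = available */
    use_resource();
    printf("counter = %d\n", shared_counter);
    sem_destroy(&resource_sem);
    return 0;
}
```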
3.2 Mailboxes
1. Mailboxes, also referred to as exchanges, are independent kernel objects that facilitate the transfer of messages between tasks.
2. The message size may be fixed or determined by the implementation. Pointers to complex data are sent through mailboxes.
3. As such, some kernels make use of mailboxes in order to store data in a regular variable where the kernel can access it with ease. If a task sends a message to a full mailbox, it receives an error message.
4. To prevent this, an RTOS supports blocking calls to ensure that senders are suspended until the mailbox is read by another task to create room for more messages.
5. Once a task reads from a mailbox, the mailbox becomes empty; if another task then tries to read from it, it gets an error message, or is suspended until the mailbox is filled again.
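A sketch of a one-slot mailbox built from two POSIX semaphores, so that a sender blocks while the slot is full and a reader blocks while it is empty (a user-space stand-in for the kernel object):

```c
#include <semaphore.h>
#include <stddef.h>

/* One-slot mailbox: "empty" starts at 1, "full" at 0. */
typedef struct {
    void *message;     /* pointer to (possibly complex) data */
    sem_t empty;       /* 1 while the slot is free           */
    sem_t full;        /* 1 while a message is waiting       */
} mailbox_t;

void mbox_init(mailbox_t *m)
{
    m->message = NULL;
    sem_init(&m->empty, 0, 1);
    sem_init(&m->full, 0, 0);
}

/* Blocks if the mailbox is full, rather than returning an error. */
void mbox_send(mailbox_t *m, void *msg)
{
    sem_wait(&m->empty);    /* wait for the slot to be free */
    m->message = msg;
    sem_post(&m->full);     /* wake a waiting reader        */
}

/* Blocks if the mailbox is empty; reading leaves it empty again. */
void *mbox_receive(mailbox_t *m)
{
    sem_wait(&m->full);     /* wait for a message           */
    void *msg = m->message;
    m->message = NULL;
    sem_post(&m->empty);    /* slot is free for a sender    */
    return msg;
}

int main(void)
{
    static int payload = 99;
    mailbox_t m;
    mbox_init(&m);
    mbox_send(&m, &payload);
    int *got = mbox_receive(&m);
    return *got == 99 ? 0 : 1;
}
```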
3.3 Queues
1. Queues are also independent kernel objects whose aim is to provide a means for tasks to transfer messages.
2. They are more complex but more flexible than mailboxes; their message size may be fixed or pointer-oriented, depending on the implementation. A task may send to a queue until the queue is full.
3. Queue depths are mainly user-specified during creation or configuration. An RTOS supports blocking calls, meaning that if a queue is full, a task is put on hold until the queue is read by another task; messages are read in the same order as they were sent: first in, first out.
4. If a task tries to read from an empty queue, it receives an error message, or is suspended until the queue is filled by another task.
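For illustration, POSIX message queues exhibit the same fixed-depth, blocking, first-in-first-out behaviour in user space (the queue name and sizes are arbitrary; link with -lrt on Linux):

```c
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct mq_attr attr = {
        .mq_maxmsg  = 4,        /* queue depth, fixed at creation */
        .mq_msgsize = 32        /* maximum message size in bytes  */
    };

    /* Arbitrary queue name; O_CREAT makes it if it doesn't exist. */
    mqd_t q = mq_open("/demo_queue", O_CREAT | O_RDWR, 0644, &attr);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

    /* mq_send blocks if the queue is full; mq_receive blocks if empty.
       Messages come out in FIFO order (within one priority level). */
    mq_send(q, "first", strlen("first") + 1, 0);
    mq_send(q, "second", strlen("second") + 1, 0);

    char buf[32];
    mq_receive(q, buf, sizeof(buf), NULL);
    printf("got: %s\n", buf);   /* prints "first" */

    mq_close(q);
    mq_unlink("/demo_queue");
    return 0;
}
```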