UNIT 3
Process Scheduling
CPU scheduling is the process of determining which process will own the CPU for execution while other processes are on hold. The main task of CPU scheduling is to make sure that whenever the CPU becomes idle, the OS selects one of the processes available in the ready queue for execution. The selection is carried out by the CPU scheduler, which picks one of the processes in memory that are ready to execute.
There are two types of scheduling methods: pre-emptive and non-pre-emptive.
In Pre-emptive Scheduling, tasks are usually assigned priorities. Sometimes it is necessary to run a task with a higher priority before a lower-priority task, even if the lower-priority task is still running. The lower-priority task is put on hold for some time and resumes when the higher-priority task finishes its execution.
In Non-pre-emptive Scheduling, once the CPU has been allocated to a specific process, the process keeps the CPU until it releases it, either by switching context or by terminating. It is the only method that can be used on all hardware platforms, because it does not need special hardware (for example, a timer) the way pre-emptive scheduling does.
When is scheduling Pre-emptive or Non-Pre-emptive?
To determine whether the scheduling is pre-emptive or non-pre-emptive, consider these four conditions:
1. A process switches from the running state to the waiting state.
2. A process switches from the running state to the ready state.
3. A process switches from the waiting state to the ready state.
4. A process finishes its execution and terminates.
If only conditions 1 and 4 apply, the scheduling is called non-pre-emptive.
All other scheduling is pre-emptive.
Important CPU scheduling Terminologies
- Burst Time/Execution Time: the time required by a process to complete its execution. It is also known as running time.
- Arrival Time: the time at which a process enters the ready state.
- Finish Time: the time at which a process completes and exits the system.
- Multiprogramming: the number of programs that can be present in memory at the same time.
- Jobs: programs that run without any user interaction.
- User: a program that involves user interaction.
- Process: a general term that covers both jobs and users.
- CPU/IO burst cycle: characterizes process execution, which alternates between CPU activity and I/O activity. CPU burst times are usually much shorter than I/O times.
A CPU scheduling algorithm tries to maximize or minimize the following criteria:
CPU utilization: the operating system needs to make sure that the CPU remains as busy as possible. Utilization can range from 0 to 100 percent. For an RTOS, however, it can range from 40 percent for a low-level system to 90 percent for a high-level system.
Throughput: the number of processes that finish their execution per unit time. When the CPU is busy executing processes, work is being done, and the work completed per unit time is called throughput.
Waiting time: the amount of time a specific process spends waiting in the ready queue.
Response time: the amount of time from when a request is submitted until the first response is produced.
Turnaround Time: the amount of time taken to execute a specific process. It is the total of the time spent waiting to get into memory, waiting in the ready queue, and executing on the CPU; in other words, the period from submission of the process to its completion.
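To make these definitions concrete, here is a minimal sketch (illustrative code, not from the original text; the process data are made up) that computes turnaround and waiting time from arrival, burst, and completion times using turnaround = completion - arrival and waiting = turnaround - burst.

```python
# Illustrative sketch: computing turnaround and waiting time for processes that
# have already finished. The process data below are hypothetical.

processes = [
    # (name, arrival_time, burst_time, completion_time)
    ("P1", 0, 5, 5),
    ("P2", 1, 3, 8),
    ("P3", 2, 4, 12),
]

for name, arrival, burst, completion in processes:
    turnaround = completion - arrival   # submission to completion
    waiting = turnaround - burst        # time spent in the ready queue
    print(f"{name}: turnaround = {turnaround}, waiting = {waiting}")
```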
Timer interruption is a method that is closely related to pre-emption. When a certain process gets the CPU allocation, a timer may be set to a specified interval. Both timer interruption and pre-emption force a process to return the CPU before its CPU burst is complete.
Most multi-programmed operating systems use some form of timer to prevent a process from tying up the system forever.
The dispatcher is the module that gives control of the CPU to the process selected by the scheduler. The dispatcher should be fast, because it runs on every context switch. Dispatch latency is the amount of time the dispatcher needs to stop one process and start another.
Functions performed by Dispatcher:
- Context Switching
- Switching to user mode
- Moving to the correct location in the newly loaded program.
Types of CPU scheduling Algorithm
There are mainly six types of process scheduling algorithms
- First Come First Serve (FCFS)
- Shortest-Job-First (SJF) Scheduling
- Shortest Remaining Time
- Priority Scheduling
- Round Robin Scheduling
- Multilevel Queue Scheduling
FCFS stands for First Come First Serve. It is the simplest CPU scheduling algorithm: the process that requests the CPU first is allocated the CPU first. This scheduling method can be managed with a FIFO queue.
As a process enters the ready queue, its PCB (Process Control Block) is linked to the tail of the queue. When the CPU becomes free, it is assigned to the process at the head of the queue.
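A minimal FCFS sketch follows (illustrative code, not part of the original text); it serves processes strictly in arrival order and reports each one's start and waiting time, assuming burst times are known.

```python
# Minimal FCFS (non-pre-emptive) simulation: processes are served in arrival order.
# The process data are hypothetical.

def fcfs(processes):
    """processes: list of (name, arrival_time, burst_time), in any order."""
    time = 0
    schedule = []
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        start = max(time, arrival)      # CPU may sit idle until the process arrives
        waiting = start - arrival       # time spent in the ready queue
        time = start + burst            # CPU is busy until this burst finishes
        schedule.append((name, start, waiting))
    return schedule

for name, start, waiting in fcfs([("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1)]):
    print(f"{name}: starts at {start}, waited {waiting}")
```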
Characteristics of FCFS method:
- It is a non-pre-emptive scheduling algorithm.
- Jobs are always executed on a first-come, first-serve basis
- It is easy to implement and use.
- However, this method is poor in performance, and the general wait time is quite high.
The full form of SRT is Shortest Remaining Time. It is also known as pre-emptive SJF scheduling. In this method, the CPU is allocated to the process that is closest to its completion. This prevents a newer ready-state process from holding up the completion of an older process.
Characteristics of SRT scheduling method:
- This method is mostly applied in batch environments where short jobs are required to be given preference.
- It is not an ideal method for a shared system where the required CPU time is unknown.
- Each process is associated with the length of its next CPU burst, and the operating system uses these lengths to schedule the process with the shortest remaining time.
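The sketch below (illustrative code with made-up process data) simulates Shortest Remaining Time by re-checking, at every time unit, which arrived process has the least work left.

```python
# Shortest Remaining Time (pre-emptive SJF) simulated one time unit at a time:
# at each tick, the arrived process with the smallest remaining burst runs.
# The process data are hypothetical.

def srt(processes):
    """processes: list of (name, arrival_time, burst_time); returns finish times."""
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: at for name, at, _ in processes}
    finish = {}
    time = 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:                   # CPU idle: nothing has arrived yet
            time += 1
            continue
        current = min(ready, key=lambda n: remaining[n])
        remaining[current] -= 1         # run the chosen process for one time unit
        time += 1
        if remaining[current] == 0:
            finish[current] = time
            del remaining[current]
    return finish

print(srt([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2)]))
```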
Priority scheduling is a method of scheduling processes based on priority. In this method, the scheduler selects the tasks to work as per the priority.
Priority scheduling lets the OS assign a priority to each process. Processes with higher priority are carried out first, whereas jobs with equal priorities are carried out on a round-robin or FCFS basis. Priority can be decided based on memory requirements, time requirements, and so on.
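A compact non-pre-emptive priority scheduler can be sketched with a min-heap, as below (illustrative code; the process names, priority values, and the convention that a smaller number means a higher priority are assumptions for this example).

```python
# Non-pre-emptive priority scheduling sketch using a min-heap.
# Convention assumed here: a smaller number means a higher priority.
import heapq

def priority_schedule(processes):
    """processes: list of (priority, name, burst_time); returns execution order."""
    heap = list(processes)
    heapq.heapify(heap)                 # orders entries by priority first
    order = []
    while heap:
        priority, name, burst = heapq.heappop(heap)
        order.append((name, priority, burst))
    return order

print(priority_schedule([(3, "P1", 5), (1, "P2", 2), (2, "P3", 4)]))
# -> P2 (priority 1) runs first, then P3, then P1
```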
Round Robin is the oldest and simplest scheduling algorithm. Its name comes from the round-robin principle, in which each person gets an equal share of something in turn. It is widely used for scheduling in multitasking systems and gives starvation-free execution of processes.
Characteristics of Round-Robin Scheduling
- Round Robin is a hybrid, clock-driven model.
- The time slice assigned to each task should be small, although it may vary for different processes.
- It suits real-time systems that must respond to an event within a specific time limit.
SJF (Shortest Job First) is a scheduling algorithm in which the process with the shortest execution time is selected for execution next. This scheduling method can be pre-emptive or non-pre-emptive. It significantly reduces the average waiting time for other processes awaiting execution.
Characteristics of SJF Scheduling
- Each job is associated with the unit of time it needs to complete.
- In this method, when the CPU is available, the next process or job with the shortest completion time will be executed first.
- It is typically implemented with a non-pre-emptive policy.
- This algorithm method is useful for batch-type processing, where waiting for jobs to complete is not critical.
- It improves job output by executing shorter jobs first, since they mostly have a shorter turnaround time.
Multiple-Level Queues Scheduling
This algorithm separates the ready queue into various separate queues. In this method, processes are assigned to a queue based on a specific property of the process, like the process priority, size of the memory, etc.
However, this is not an independent scheduling algorithm, as it needs to use other types of algorithms in order to schedule the jobs.
Characteristic of Multiple-Level Queues Scheduling:
- Multiple queues are maintained for processes with common characteristics.
- Every queue may have its separate scheduling algorithms.
- Priorities are given for each queue.
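A multilevel queue can be sketched as a fixed set of queues served in strict priority order, as below (illustrative code; the queue names and the rule that system processes outrank interactive and batch processes are assumptions for the example).

```python
# Multilevel queue sketch: separate FIFO queues, served in fixed priority order.
# Queue names and their relative priorities are assumptions for this example.
from collections import deque

queues = {
    "system":      deque(),   # highest priority
    "interactive": deque(),
    "batch":       deque(),   # lowest priority
}

def admit(process_name, queue_name):
    queues[queue_name].append(process_name)     # a process stays in one queue

def pick_next():
    """Return the next process: scan the queues from highest to lowest priority."""
    for name in ("system", "interactive", "batch"):
        if queues[name]:
            return queues[name].popleft()
    return None                                 # nothing is ready

admit("init", "system"); admit("editor", "interactive"); admit("report", "batch")
print(pick_next(), pick_next(), pick_next())    # init editor report
```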
The Purpose of a Scheduling algorithm
Here are the reasons for using a scheduling algorithm:
- The CPU uses scheduling to improve its efficiency.
- It helps you to allocate resources among competing processes.
- The maximum utilization of CPU can be obtained with multi-programming.
- The processes which are to be executed are in ready queue.
- CPU scheduling is a process of determining which process will own CPU for execution while another process is on hold.
- In Pre-emptive Scheduling, the tasks are mostly assigned with their priorities.
- In the non-pre-emptive scheduling method, the CPU is allocated to a specific process until that process releases it.
- Burst time is the time required for a process to complete execution. It is also called running time.
- CPU utilization is the main task in which the operating system needs to make sure that the CPU remains as busy as possible.
- The number of processes that finish their execution per unit time is known as throughput.
- Waiting time is the amount of time a specific process spends waiting in the ready queue.
- Response time is the amount of time from when a request is submitted until the first response is produced.
- Turnaround time is an amount of time to execute a specific process.
- Timer interruption is a method that is closely related to pre-emption.
- A dispatcher is a module that provides control of the CPU to the process.
- The six types of process scheduling algorithms are: 1) First Come First Serve (FCFS), 2) Shortest-Job-First (SJF) Scheduling, 3) Shortest Remaining Time, 4) Priority Scheduling, 5) Round Robin Scheduling, 6) Multilevel Queue Scheduling.
- In the First Come First Serve method, the process which requests the CPU gets the CPU allocation first.
- In Shortest Remaining Time, the CPU is allocated to the process that is closest to its completion.
- In Priority Scheduling, the scheduler selects tasks to work on according to their priority.
- Round Robin scheduling works on the principle that each process gets an equal share of the CPU in turn.
- In Shortest Job First, the process with the shortest execution time is selected for execution next.
- Multilevel Queue scheduling separates the ready queue into several separate queues; processes are assigned to a queue based on a specific property.
Key takeaway
CPU Scheduling is a process of determining which process will own CPU for execution while another process is on hold. The main task of CPU scheduling is to make sure that whenever the CPU remains idle, the OS at least select one of the processes available in the ready queue for execution. The selection process will be carried out by the CPU scheduler. It selects one of the processes in memory that are ready for execution.
What is Pre-emptive Scheduling?
Pre-emptive Scheduling is a scheduling method where the tasks are mostly assigned with their priorities. Sometimes it is important to run a task with a higher priority before another lower priority task, even if the lower priority task is still running.
At that time, the lower priority task holds for some time and resumes when the higher priority task finishes its execution.
What is Non- Pre-emptive Scheduling?
In this type of scheduling method, the CPU has been allocated to a specific process. The process that keeps the CPU busy will release the CPU either by switching context or terminating.
It is the only method that can be used for various hardware platforms. That's because it doesn't need specialized hardware (for example, a timer) like pre-emptive Scheduling.
Non-Pre-emptive Scheduling occurs when a process voluntarily enters the wait state or terminates.
Difference Between Pre-emptive and Non-Pre-emptive Scheduling in OS
Here are the differences between Pre-emptive and Non-Pre-emptive Scheduling in an OS:
Pre-emptive Scheduling | Non-pre-emptive Scheduling |
The processor can be pre-empted in the middle of executing the current process in order to run a different process. | Once the processor starts executing a process, it must finish it before executing another; it can't be paused in the middle. |
CPU utilization is more efficient compared to Non-Pre-emptive Scheduling. | CPU utilization is less efficient compared to pre-emptive Scheduling. |
Waiting and response times of pre-emptive Scheduling are lower. | Waiting and response times of non-pre-emptive Scheduling are higher. |
Pre-emptive Scheduling is priority-driven: the process currently using the CPU is the one with the highest priority. | Once a process enters the running state, it is not removed from the scheduler until it finishes its job. |
Pre-emptive Scheduling is flexible | Non-pre-emptive Scheduling is rigid. |
Examples: - Shortest Remaining Time First, Round Robin, etc. | Examples: First Come First Serve, Shortest Job First, Priority Scheduling, etc. |
In pre-emptive Scheduling, a running process can be pre-empted, that is, interrupted and rescheduled later. | In non-pre-emptive Scheduling, a running process cannot be interrupted and rescheduled. |
The CPU is allocated to a process for a specific time period. | The CPU is allocated to a process until it terminates or switches to the waiting state. |
Pre-emptive algorithm has the overhead of switching the process from the ready state to the running state and vice-versa | Non-pre-emptive Scheduling has no such overhead of switching the process from running into the ready state. |
Advantages of Pre-emptive Scheduling
Here are the pros/benefits of the Pre-emptive Scheduling method:
- The pre-emptive scheduling method is a more robust approach, since one process cannot monopolize the CPU.
- The choice of which task to run is reconsidered after each interruption.
- Each event causes an interruption of the running task.
- The OS makes sure that CPU usage is shared fairly, i.e., all running processes get to use the CPU equally.
- This scheduling method also improves the average response time.
- Pre-emptive Scheduling is beneficial when we use it for the multi-programming environment.
Advantages of Non-pre-emptive Scheduling
Here are the pros/benefits of the Non-pre-emptive Scheduling method:
- Offers low scheduling overhead
- Tends to offer high throughput
- It is a conceptually very simple method.
- It needs fewer computational resources for scheduling.
Disadvantages of Pre-emptive Scheduling
Here are the cons/drawbacks of the Pre-emptive Scheduling method:
- It needs more computational resources for scheduling.
- The scheduler takes more time to suspend the running task, switch the context, and dispatch the new incoming task.
- The process which has low priority needs to wait for a longer time if some high priority processes arrive continuously.
Disadvantages of Non-Pre-emptive Scheduling
Here are the cons/drawbacks of the Non-Pre-emptive Scheduling method:
- It can lead to starvation, especially for real-time tasks.
- Bugs can cause a machine to freeze up
- It can make real-time and priority Scheduling difficult
- Poor response time for processes
Example of Non-Pre-emptive Scheduling
In non-pre-emptive SJF scheduling, once the CPU is allocated to a process, the process holds it until it reaches a waiting state or terminates.
Consider the following five processes, each with its own burst time and arrival time.
Process Queue | Burst time | Arrival time |
P1 | 6 | 2 |
P2 | 2 | 5 |
P3 | 8 | 1 |
P4 | 3 | 0 |
P5 | 4 | 4 |
Step 0) At time=0, P4 arrives and starts execution.
Step 1) At time= 1, Process P3 arrives. But, P4 still needs 2 execution units to complete. It will continue execution.
Step 2) At time =2, process P1 arrives and is added to the waiting queue. P4 will continue execution.
Step 3) At time = 3, process P4 will finish its execution. The burst time of P3 and P1 is compared. Process P1 is executed because its burst time is less compared to P3.
Step 4) At time = 4, process P5 arrives and is added to the waiting queue. P1 will continue execution.
Step 5) At time = 5, process P2 arrives and is added to the waiting queue. P1 will continue execution.
Step 6) At time = 9, process P1 will finish its execution. The burst time of P3, P5, and P2 is compared. Process P2 is executed because its burst time is the lowest.
Step 7) At time=10, P2 is executing, and P3 and P5 are in the waiting queue.
Step 8) At time = 11, process P2 will finish its execution. The burst time of P3 and P5 is compared. Process P5 is executed because its burst time is lower.
Step 9) At time = 15, process P5 will finish its execution.
Step 10) At time = 23, process P3 will finish its execution.
Step 11) Let's calculate the average waiting time for the above example.
Wait time:
P4 = 0 - 0 = 0
P1 = 3 - 2 = 1
P2 = 9 - 5 = 4
P5 = 11 - 4 = 7
P3 = 15 - 1 = 14
Average Waiting Time = (0 + 1 + 4 + 7 + 14)/5 = 26/5 = 5.2
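The hand calculation above can be checked with a short non-pre-emptive SJF simulation, sketched below (illustrative code using the burst and arrival times from the table; whenever the CPU is free, the shortest ready job is chosen).

```python
# Non-pre-emptive SJF simulation using the process data from the example above.
# Waiting time = start time - arrival time; the average matches the 5.2 computed by hand.

def sjf_nonpreemptive(processes):
    """processes: list of (name, burst_time, arrival_time)."""
    pending = sorted(processes, key=lambda p: p[2])            # order by arrival time
    time, waits = 0, {}
    while pending:
        ready = [p for p in pending if p[2] <= time]
        if not ready:                                          # CPU idle until next arrival
            time = min(p[2] for p in pending)
            continue
        name, burst, arrival = min(ready, key=lambda p: p[1])  # shortest burst wins
        waits[name] = time - arrival
        time += burst
        pending.remove((name, burst, arrival))
    return waits

waits = sjf_nonpreemptive([("P1", 6, 2), ("P2", 2, 5), ("P3", 8, 1),
                           ("P4", 3, 0), ("P5", 4, 4)])
print(waits, "average =", sum(waits.values()) / len(waits))    # average = 5.2
```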
Example of Pre-emptive Scheduling
Consider the following three processes scheduled with Round Robin; each process runs for a time quantum of 2 at a time.
Process Queue | Burst time |
P1 | 4 |
P2 | 3 |
P3 | 5 |
Step 1) The execution begins with process P1, which has burst time 4. Here, every process executes for 2 seconds. P2 and P3 are still in the waiting queue.
Step 2) At time = 2, P1 is pre-empted and added to the end of the queue, and P2 starts executing.
Step 3) At time = 4, P2 is pre-empted and added to the end of the queue. P3 starts executing.
Step 4) At time = 6, P3 is pre-empted and added to the end of the queue. P1 starts executing.
Step 5) At time = 8, P1 has completed its burst time of 4 and finishes execution. P2 starts executing.
Step 6) P2 has a burst time of 3. It has already executed for 2 intervals. At time=9, P2 completes execution. Then, P3 starts execution till it completes.
Step 7) Let's calculate the average waiting time for the above example.
Wait time:
P1 = 0 + 4 = 4
P2 = 2 + 4 = 6
P3 = 4 + 3 = 7
Average Waiting Time = (4 + 6 + 7)/3 = 17/3 ≈ 5.67
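These waiting times can be reproduced with a small Round Robin simulation, sketched below (illustrative code; the time quantum of 2 matches the example and all three processes are assumed to arrive at time 0).

```python
# Round Robin simulation with time quantum 2, reproducing the example above.
# All processes are assumed to arrive at time 0.
from collections import deque

def round_robin(processes, quantum=2):
    """processes: list of (name, burst_time); returns the waiting time per process."""
    remaining = dict(processes)
    last_ready = {name: 0 for name, _ in processes}   # when the process last became ready
    waits = {name: 0 for name, _ in processes}
    queue = deque(name for name, _ in processes)
    time = 0
    while queue:
        name = queue.popleft()
        waits[name] += time - last_ready[name]        # time just spent waiting in the queue
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        if remaining[name] > 0:                       # pre-empted: back to the tail
            last_ready[name] = time
            queue.append(name)
    return waits

print(round_robin([("P1", 4), ("P2", 3), ("P3", 5)]))
# -> {'P1': 4, 'P2': 6, 'P3': 7}; average = 17/3 ≈ 5.67
```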
- In pre-emptive scheduling, the CPU is allocated to a process for a specific time period, whereas in non-pre-emptive scheduling the CPU is allocated to a process until it terminates.
- In pre-emptive scheduling, tasks are switched based on priority, while in non-pre-emptive scheduling no such switching takes place.
- Pre-emptive algorithm has the overhead of switching the process from the ready state to the running state while Non-pre-emptive Scheduling has no such overhead of switching.
- Pre-emptive Scheduling is flexible while Non-pre-emptive Scheduling is rigid.
Key takeaway
What is Pre-emptive Scheduling?
Pre-emptive Scheduling is a scheduling method where the tasks are mostly assigned with their priorities. Sometimes it is important to run a task with a higher priority before another lower priority task, even if the lower priority task is still running.
At that time, the lower priority task holds for some time and resumes when the higher priority task finishes its execution.
What is Non- Pre-emptive Scheduling?
In this type of scheduling method, the CPU has been allocated to a specific process. The process that keeps the CPU busy will release the CPU either by switching context or terminating.
It is the only method that can be used for various hardware platforms. That's because it doesn't need specialized hardware (for example, a timer) like pre-emptive Scheduling.
Non-Pre-emptive Scheduling occurs when a process voluntarily enters the wait state or terminates.
Process Scheduling is an OS task that schedules processes of different states like ready, waiting, and running.
Process scheduling allows OS to allocate a time interval of CPU execution for each process. Another important reason for using a process scheduling system is that it keeps the CPU busy all the time. This allows you to get the minimum response time for programs.
Process scheduling queues maintain a separate queue for each process state, holding the PCBs of the processes in that state. All processes in the same execution state are placed in the same queue. Therefore, whenever the state of a process changes, its PCB is unlinked from its current queue and moved to the queue of the new state.
Three types of operating system queues are:
- Job queue – It helps you to store all the processes in the system.
- Ready queue – This queue holds every process that resides in main memory and is ready and waiting to execute.
- Device queues – The processes that are blocked because they are waiting for an I/O device make up this queue.
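The sketch below (illustrative code; the state names and PCB fields are assumptions) shows how a PCB is unlinked from its current queue and linked onto the queue of its new state.

```python
# Sketch of per-state scheduling queues: a PCB is unlinked from its current
# queue and moved to the queue of its new state. State names are assumed.
from collections import deque

queues = {"ready": deque(), "running": deque(), "waiting": deque()}

def change_state(pcb, new_state):
    """Unlink the PCB from its current queue and link it onto the new one."""
    queues[pcb["state"]].remove(pcb)
    pcb["state"] = new_state
    queues[new_state].append(pcb)

pcb = {"pid": 42, "state": "ready"}
queues["ready"].append(pcb)        # a new process starts in the ready queue
change_state(pcb, "running")       # the dispatcher gives it the CPU
change_state(pcb, "waiting")       # it issues an I/O request
change_state(pcb, "ready")         # the I/O completes
print(pcb, {name: len(q) for name, q in queues.items()})
```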
Fig 1 shows the queueing diagram of process scheduling. In this diagram:
- Rectangle represents a queue.
- Circle denotes the resource
- Arrow indicates the flow of the process.
- Every new process is first put in the ready queue, where it waits until it is selected for execution, i.e., dispatched.
- One of the processes is allocated the CPU and executes.
- The process may issue an I/O request,
- in which case it is placed in an I/O (device) queue.
- The process may create a new subprocess
- and wait for the subprocess's termination.
- The process may be forcibly removed from the CPU as the result of an interrupt; once the interrupt is handled, it is put back in the ready queue.
Two-state process models are:
- Running
- Not Running
In the operating system, whenever a new process is created, it is admitted to the system and waits in the Not Running state until it is dispatched.
Processes that are not running are kept in a queue, waiting for their turn to execute. Each entry in the queue is a pointer to a specific process.
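A minimal sketch of the two-state model is shown below (illustrative code; the single Not Running queue and the dispatch/pause operations follow the description above).

```python
# Two-state process model sketch: a process is either Running or Not Running.
# Not Running processes wait in a single queue; the dispatcher picks the next one.
from collections import deque

not_running = deque()               # each entry points to a specific process
running = None

def admit(process):
    not_running.append(process)     # a new process enters the Not Running queue

def dispatch():
    """Pause the running process (if any) and run the next queued one."""
    global running
    if running is not None:
        not_running.append(running)
    running = not_running.popleft() if not_running else None

admit("P1"); admit("P2")
dispatch(); print(running)          # P1 runs
dispatch(); print(running)          # P1 is paused, P2 runs
```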
Here are the important objectives of process scheduling:
- Maximize the number of interactive users within acceptable response times.
- Achieve a balance between response and utilization.
- Avoid indefinite postponement and enforce priorities.
- It should also give preference to the processes holding key resources.
A scheduler is a type of system software that allows you to handle process scheduling.
There are mainly three types of Process Schedulers:
- Long Term
- Short Term
- Medium Term
The long-term scheduler is also known as the job scheduler. It selects processes from the queue and loads them into memory for execution. It also regulates the degree of multiprogramming.
The main goal of this type of scheduler is to offer a balanced mix of jobs, such as processor-bound and I/O-bound jobs, which makes multiprogramming manageable.
Medium-term scheduling is an important part of swapping; it handles the swapped-out processes. A running process can become suspended if it makes an I/O request, and a suspended process cannot make any progress towards completion. In order to remove such a process from memory and make space for other processes, the suspended process is moved to secondary storage.
Short-term scheduling is also known as the CPU scheduler. The main goal of this scheduler is to boost system performance according to a chosen set of criteria. It selects from among the processes that are ready to execute and allocates the CPU to one of them. The dispatcher gives control of the CPU to the process selected by the short-term scheduler.
Long-Term Vs. Short Term Vs. Medium-Term
Long-Term | Short-Term | Medium-Term |
Long term is also known as a job scheduler | Short term is also known as CPU scheduler | Medium-term is also called swapping scheduler. |
It is either absent or minimal in a time-sharing system. | It is insignificant in a time-sharing system. | This scheduler is an element of time-sharing systems. |
Speed is less compared to the short term scheduler. | Speed is the fastest compared to the short-term and medium-term scheduler | It offers medium speed. |
It selects processes from the job pool and loads them into memory. | It only selects processes that are in the ready state of execution. | It helps you to send processes back into memory. |
Offers full control | Offers less control | Reduce the level of multiprogramming |
Context switching is a method to store and restore the state of the CPU in the PCB, so that process execution can be resumed from the same point at a later time. Context switching is essential for a multitasking OS.
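A context switch can be sketched as saving the CPU registers of the outgoing process into its PCB and loading the registers of the incoming process, as below (illustrative code; the register names and PCB layout are assumptions).

```python
# Context switch sketch: save the CPU state of the old process into its PCB,
# then restore the CPU state of the new process. Register names are assumed.

cpu = {"pc": 0, "sp": 0, "registers": [0] * 8}

def context_switch(old_pcb, new_pcb):
    old_pcb["cpu_state"] = dict(cpu)            # save the outgoing process's state
    cpu.update(new_pcb["cpu_state"])            # restore the incoming process's state

pcb_a = {"pid": 1, "cpu_state": {"pc": 100, "sp": 5000, "registers": [1] * 8}}
pcb_b = {"pid": 2, "cpu_state": {"pc": 200, "sp": 6000, "registers": [2] * 8}}

cpu.update(pcb_a["cpu_state"])      # process A is currently running
context_switch(pcb_a, pcb_b)        # the scheduler switches to process B
print(cpu["pc"])                    # 200: execution resumes where B left off
```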
- Process scheduling is an OS task that schedules the processes of different states like ready, waiting, and running.
- Two-state process models are: 1) Running and 2) Not Running.
- Process scheduling maximizes the number of interactive users, within acceptable response times.
- A scheduler is a type of system software that allows you to handle process scheduling.
- Three types of the scheduler are 1) Long term 2) Short term 3) Medium-term
- The long-term scheduler selects processes from the queue and loads them into memory for execution.
- The medium-term scheduler enables you to handle the swapped out-processes.
- The main goal of short term scheduler is to boost the system performance according to set criteria
- Long term is also known as a job scheduler, whereas the short term is also known as CPU scheduler, and the medium-term is also called swapping scheduler.
Key takeaway
Process Scheduling is an OS task that schedules processes of different states like ready, waiting, and running.
Process scheduling allows OS to allocate a time interval of CPU execution for each process. Another important reason for using a process scheduling system is that it keeps the CPU busy all the time. This allows you to get the minimum response time for programs.