Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of UK Essays.
The CPU is one of a computer's primary resources, and the process is the core unit that CPU scheduling manages; scheduling it well is central to meeting an operating system's design goals. CPU scheduling is the basis of multiprogrammed operating systems: by switching the CPU among processes, the system keeps as many processes running as possible at all times and improves overall performance. The scheduler manages the ready queue, deciding which process executes next so as to achieve a high level of efficiency. A scheduling algorithm aims to minimize queuing delay and optimize the performance of the queuing environment, so CPU scheduling improves system performance according to the chosen algorithm. How effective CPU scheduling is depends on how well the algorithm's design matches the scheduling goals; a highly efficient algorithm is one whose design suits those goals.
In this paper, we present a variety of scheduling algorithms: FCFS, SJF, priority, round robin, multilevel queue, and multilevel feedback queue. We use state diagrams to visualize the differences among the scheduling algorithms and to show which algorithm works best in a particular situation; a state-diagram representation makes it much easier to understand what is going on inside the scheduling structure and why each algorithm orders processes differently. We examine the differences in their performance in terms of throughput and waiting time. The purpose of this study is to help select the appropriate scheduling algorithm for a particular situation.
Scheduling is one of the underlying concepts of an operating system. Almost all computer resources are scheduled before they are used, and scheduling is therefore central to operating system design.
The scheduler decides which process the operating system runs next. Scheduling can be defined as a mechanism or tool that controls the execution of the processes handled by a computer. The CPU is the most important of all the resources in a computer system that are scheduled before use. The basic idea is to keep the CPU as busy as possible by executing one process and then switching to another.
There are several criteria for selecting the best scheduling algorithm in a particular situation:
1. CPU utilization: keeping the CPU as busy as possible.
2. Throughput: the number of processes that complete their execution per unit time.
3. Turnaround Time: the total time from a process's submission to its completion.
4. Waiting Time: the amount of time a process spends waiting in the ready queue.
5. Response Time: the time an interactive program takes to start responding to a request.
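As a concrete illustration of the criteria above, turnaround and waiting times can be computed from each process's arrival, burst, and completion times. This is a minimal sketch; the function name and list-based representation are illustrative, not from any particular OS API.

```python
# Illustrative sketch: turnaround time = completion - arrival,
# waiting time = turnaround - burst.
def metrics(arrival, burst, completion):
    turnaround = [c - a for a, c in zip(arrival, completion)]
    waiting = [t - b for t, b in zip(turnaround, burst)]
    return turnaround, waiting

# Three processes arriving at time 0 with bursts 5, 3, 8,
# run back to back so they complete at times 5, 8, 16.
turnaround, waiting = metrics([0, 0, 0], [5, 3, 8], [5, 8, 16])
# turnaround -> [5, 8, 16], waiting -> [0, 5, 8]
```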
Scheduling falls into two categories.
- Nonpreemptive scheduling
Non-preemptive scheduling is used when a process terminates or switches from the running state to the waiting state. The scheduler chooses a process and lets it run until it blocks or releases the CPU voluntarily, that is, until it has finished its task or job, e.g. SJF, FCFS, non-preemptive priority, and so on.
- Preemptive scheduling
Preemptive scheduling is used when a process switches from the running state to the ready state, or from the waiting state to the ready state. Under this kind of policy, a process's execution can be preempted before it completes, so that another process with a higher priority can begin executing ahead of the process that arrived at the CPU first, for example round robin and preemptive priority scheduling.
A CPU scheduling decision may take place under the following circumstances:
1. When a process switches from the running state to the waiting state.
2. When a process switches from the running state to the ready state.
3. When a process switches from the waiting state to the ready state.
4. When a process terminates.
- First Come First Served Scheduling (FCFS)
The simplest scheduling algorithm grants the CPU to the process that requests it first; this is called first come first served, or FCFS, scheduling. Arriving processes are inserted at the tail of a queue, and when the running process finishes, the next process is taken from the head of the queue.
- No prioritization occurs, and every process is allowed to complete eventually, so no starvation occurs.
- Turnaround time, waiting time, and response time are high.
- A process with a long burst time can monopolize the CPU even when other processes have very short bursts, which makes throughput low.
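The FCFS behavior described above can be sketched as a small simulation. The tuple layout `(name, arrival, burst)` and the function name are illustrative choices, not a standard API.

```python
# Minimal FCFS sketch: serve processes strictly in arrival order.
def fcfs(processes):
    """processes: list of (name, arrival_time, burst_time).
    Returns a schedule of (name, start_time, finish_time)."""
    processes = sorted(processes, key=lambda p: p[1])  # by arrival
    time, schedule = 0, []
    for name, arrival, burst in processes:
        start = max(time, arrival)      # CPU may be idle until arrival
        time = start + burst            # run to completion (no preemption)
        schedule.append((name, start, time))
    return schedule

# A long first job delays everyone behind it (the convoy effect):
schedule = fcfs([("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)])
# -> [("P1", 0, 24), ("P2", 24, 27), ("P3", 27, 30)]
```

Note how P2 and P3 wait more than 20 time units behind P1 even though their own bursts are tiny, which is exactly the low-throughput behavior listed above.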
- Shortest Job First Scheduling (SJF)
Shortest Job First scheduling selects the process with the least execution time first. The scheduler orders the queue with the shortest burst time at the head and the longest burst time at the tail. This requires advance knowledge, or estimates, of the time each process needs to complete, and in most scenarios the algorithm is designed to maximize throughput.
- The major obstacle with the SJF algorithm is that burst times are not known in advance, so the OS may not be able to order the processes correctly.
- Starvation may occur, especially in a busy system where many short-burst processes are executed continually.
- SJF minimizes the average waiting time, but a serious liability is that a process with a long burst time may never be served: it tends to remain in the ready list while short-burst processes keep executing.
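A non-preemptive SJF sketch makes the selection rule concrete: at each decision point, the ready process with the shortest burst runs next. The representation is the same illustrative `(name, arrival, burst)` tuple as before; real systems would have to estimate burst times rather than know them.

```python
# Non-preemptive Shortest Job First sketch.
def sjf(processes):
    """processes: list of (name, arrival_time, burst_time).
    Returns a schedule of (name, start_time, finish_time)."""
    remaining = sorted(processes, key=lambda p: p[1])  # by arrival
    time, schedule = 0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                     # CPU idle: jump to next arrival
            time = remaining[0][1]
            continue
        job = min(ready, key=lambda p: p[2])  # shortest burst wins
        remaining.remove(job)
        name, arrival, burst = job
        schedule.append((name, time, time + burst))
        time += burst
    return schedule

schedule = sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
# -> [("P1", 0, 7), ("P3", 7, 8), ("P2", 8, 12), ("P4", 12, 16)]
```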
- Round Robin Scheduling
The round robin (RR) scheduling algorithm is designed for time-sharing systems. A fixed time quantum is selected; RR is similar to FCFS, but preemption is added when switching to the next process.
As the figure shows, ready processes are stored in a queue. The scheduler allocates the CPU to each process for one interval of the assigned quantum; a newly arriving process, or one whose quantum expires before it finishes, joins the tail of the queue.
- Short processes are slowed down because they must share time with other processes instead of finishing quickly, and the extra context switching adds overhead that lowers CPU efficiency.
- Response time may be poor if the quantum is too long.
- The starvation problem is solved, since every process eventually gets to execute.
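A minimal RR sketch, simplified to processes that all arrive at time 0 so the queue rotation is easy to follow; the `(name, burst)` tuples and function name are illustrative.

```python
from collections import deque

# Round robin sketch: each process runs for at most one quantum,
# then rejoins the tail of the queue if work remains.
def round_robin(processes, quantum):
    """processes: list of (name, burst_time), all arriving at time 0.
    Returns a timeline of (name, start_time, end_time) slices."""
    queue = deque(processes)
    time, timeline = 0, []
    while queue:
        name, burst = queue.popleft()
        slice_ = min(quantum, burst)
        timeline.append((name, time, time + slice_))
        time += slice_
        if burst > slice_:                     # not finished: preempt
            queue.append((name, burst - slice_))
    return timeline

timeline = round_robin([("P1", 5), ("P2", 3)], quantum=2)
# -> [("P1",0,2), ("P2",2,4), ("P1",4,6), ("P2",6,7), ("P1",7,8)]
```

The alternation shows both properties listed above: no process starves, but P1 takes 8 time units to finish a 5-unit burst because of the sharing.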
- Priority Scheduling
The priority scheduling algorithm is the most common scheduling algorithm in batch systems. It can be either preemptive or non-preemptive. Each process is assigned a priority: higher-priority processes execute first and lower-priority processes execute last. When processes have equal priority, they are scheduled with the FCFS algorithm.
- Lower-priority processes may never get to execute, causing starvation among them.
- Processes of equal priority add to each other's waiting time.
- Higher-priority processes are served as soon as they appear in the queue, so they have shorter waiting and response times.
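A non-preemptive priority sketch, again with all processes arriving at time 0. The convention that a lower number means higher priority is an illustrative assumption; the stable sort gives the FCFS tie-breaking described above.

```python
# Non-preemptive priority scheduling sketch.
def priority_schedule(processes):
    """processes: list of (name, priority, burst_time), all at time 0.
    Lower priority number = higher priority; ties are broken FCFS
    because Python's sort is stable (keeps original list order)."""
    ordered = sorted(processes, key=lambda p: p[1])
    time, schedule = 0, []
    for name, _priority, burst in ordered:
        schedule.append((name, time, time + burst))
        time += burst
    return schedule

schedule = priority_schedule([("P1", 3, 10), ("P2", 1, 1), ("P3", 3, 2)])
# -> [("P2", 0, 1), ("P1", 1, 11), ("P3", 11, 13)]
```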
- Multilevel Queue Scheduling
Multilevel queue scheduling classifies processes into different groups according to their characteristics. The common division is between foreground (interactive) processes and background (batch) processes. These types of processes have different response-time requirements and scheduling needs; in addition, foreground processes may have priority over background processes. The multilevel queue algorithm partitions the ready queue into several separate queues, each with its own scheduling algorithm: for example, the foreground queue uses RR scheduling while the background queue uses FCFS scheduling.
- Fixed-priority preemptive scheduling: each queue has absolute priority over lower-priority queues. Starvation is possible, since the high-priority queue always runs first and the low-priority queue may never execute at all.
- Time slicing: each queue receives a certain portion of CPU time, which it uses to schedule its own processes. For example, queue 1 may be given 80% of the CPU time for RR scheduling among its processes, while queue 2 receives 20% of the CPU time for FCFS scheduling among its processes.
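The time-slicing variant can be sketched as a dispatcher that divides one cycle of CPU ticks between the two queues in the 80/20 proportion from the example above. The cycle length, queue names, and function name are hypothetical simplifications.

```python
# Time-slicing sketch: within each cycle of `cycle` ticks, the
# foreground queue is offered the first fg_share of the ticks and the
# background queue the remainder.
def dispatch_cycle(foreground, background, cycle=10, fg_share=0.8):
    """foreground/background: lists of ready processes.
    Returns which queue is dispatched at each tick of one cycle."""
    fg_budget = int(cycle * fg_share)
    plan = []
    for tick in range(cycle):
        if tick < fg_budget and foreground:
            plan.append("foreground")   # RR runs among these processes
        elif background:
            plan.append("background")   # FCFS runs among these processes
    return plan

plan = dispatch_cycle(["editor"], ["batch_job"])
# -> 8 "foreground" ticks followed by 2 "background" ticks
```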
- Multilevel Feedback Scheduling
Unlike the multilevel queue algorithm, which fixes each process to an assigned queue, the multilevel feedback scheduling algorithm allows processes to move between queues. The idea is to separate processes according to their CPU burst characteristics: a process that has waited a long time in a lower-priority queue can be moved to a higher-priority queue. This aging technique is how the algorithm prevents starvation.
- Multilevel feedback scheduling is more flexible than a multilevel queue.
- Multilevel feedback scheduling constantly analyzes the behavior (execution time) of each process and changes its priority accordingly.
- Multilevel feedback scheduling also reduces response time.
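A two-level feedback sketch illustrates the movement between queues: new jobs start in the high-priority queue with a short quantum, and a job that uses up its quantum is demoted to the lower queue. The two-level structure and quantum values are illustrative assumptions (real MLFQ implementations use more levels plus aging to promote starved jobs).

```python
from collections import deque

# Two-level multilevel feedback queue sketch. Queue 0 is high priority
# with a short quantum; a job that exhausts its quantum is demoted to
# queue 1, which runs only when queue 0 is empty.
def mlfq(processes, quanta=(2, 4)):
    """processes: list of (name, burst_time), all arriving at time 0.
    Returns a timeline of (name, queue_level, start_time, end_time)."""
    queues = [deque(processes), deque()]
    time, timeline = 0, []
    while any(queues):
        level = 0 if queues[0] else 1          # highest non-empty queue
        name, burst = queues[level].popleft()
        slice_ = min(quanta[level], burst)
        timeline.append((name, level, time, time + slice_))
        time += slice_
        if burst > slice_:
            # Quantum exhausted: demote (or keep at the lowest level).
            queues[min(level + 1, 1)].append((name, burst - slice_))
    return timeline

timeline = mlfq([("P1", 5), ("P2", 2)])
# -> [("P1", 0, 0, 2), ("P2", 0, 2, 4), ("P1", 1, 4, 7)]
```

P2's short burst finishes entirely in the high-priority queue, while CPU-hungry P1 is pushed down, which is the burst-based separation the text describes.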
- Multi-Processor Scheduling
A multiprocessing system allows load sharing but is more complex than a single processor. In the first approach to multiprocessor scheduling, one master processor takes responsibility for I/O processing and system activities while the other processors execute only user code. This approach is known as asymmetric multiprocessing, and it reduces the need for data sharing. The second approach is symmetric multiprocessing (SMP), in which every processor schedules itself: each processor may have its own private queue of ready processes, or all processes may share a common ready queue. Each processor's scheduler examines the ready queue and selects a process to run.
Processor affinity is an option given to system users who want to control where their processes run. For example, a user may decide to run a process only on CPU 0 and no other CPU. Processor affinity is harder to implement in the private-queue scheme.
There are two types of processor affinity:
1. Soft Affinity: the operating system attempts to keep a process on the same processor but does not guarantee that it will.
2. Hard Affinity: certain systems, such as Linux, provide system calls that bind a process to a specified set of processors and prevent it from migrating elsewhere.
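On Linux, hard affinity is exposed to Python through `os.sched_setaffinity` and `os.sched_getaffinity` (wrappers over the `sched_setaffinity(2)` system call). A guarded sketch, since these functions exist only on Linux; pinning to CPU 0 follows the example in the text, and the mask is restored afterward.

```python
import os

# Hard-affinity sketch (Linux-only: the sched_* wrappers are not
# available on all platforms, hence the hasattr guard).
if hasattr(os, "sched_setaffinity"):
    original = os.sched_getaffinity(0)   # pid 0 = the calling process
    os.sched_setaffinity(0, {0})         # pin this process to CPU 0
    pinned = os.sched_getaffinity(0)     # now {0}
    os.sched_setaffinity(0, original)    # restore the original mask
```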
Load balancing is the mechanism that keeps the workload evenly distributed across an SMP system. It is only required on systems where each processor maintains its own private queue of ready processes.
There are two general approaches to load balancing:
1. Push Migration: a specific task routinely checks the load of each processor and, if it finds an imbalance, evens the load by pushing processes from overloaded processors to idle or less busy processors.
2. Pull Migration: an idle processor pulls a waiting task from a busy processor.
- Comparison of the various single-processor CPU scheduling algorithms
Figure from Somani and Chhatwani (2013).
First come first served is the simplest algorithm to understand, but it suits only batch systems. Shortest job first takes a different approach but increases the waiting time for long processes. The priority scheduling algorithm can create starvation: the highest-priority process can always execute while the lowest-priority process may never run. The round robin algorithm adds preemption to the FCFS policy using a time quantum, and is suitable for time-sharing systems. Multilevel queue scheduling selects processes from fixed priority queues and therefore suffers the same starvation problem as priority scheduling. Multilevel feedback queue scheduling solves the starvation problem of the multilevel queue by allowing processes to move between queues depending on the quantum they consume.
A multiprocessing system offers the best scheduling in terms of short response time and high throughput: it can run different processes at the same time on different processors within the same system. If its complexity can be managed, a multiprocessing system can further improve process-scheduling performance.
Goel, N., & Garg, R. B. (2012). A Comparative Study of CPU Scheduling Algorithms. International Journal of Graphics & Image Processing, 245-249.
Haris, M., Maqsood, N., Haq, U., Zaman, T., & Zubair, M. (2016). UNI Processor and Multi-processor Performance Comparison. International Journal of Advanced Research in Computer Science and Electronics Engineering (IJARCSEE), 77.
Somani, J., & Chhatwani, P. (2013). Comparative Study of Different CPU. Journal of Computer Science and Mobile Computing, 312-317.
Kishor, L., & Goyal, D. (2013). Comparative Analysis of Various Scheduling Algorithms. International Journal of Advanced Research in Computer Engineering & Technology (IJARCET), 1-3.
Noon, A., Kalakech, A., & Kadry, S. (2011). A New Round Robin Based Scheduling Algorithm for Operating. IJCSI International Journal of Computer Science, 225.
Silberschatz, A. (2013). Operating System Concepts (9th ed.). Hoboken.
Singh, P., Singh, V., & Pandey, A. (2014). Analysis and Comparison of CPU Scheduling Algorithms. International Journal of Emerging Technology and Advanced Engineering, 1-3.
Suranauwarat, S. (2007). A CPU Scheduling Algorithm Simulator. ASEE/IEEE Frontiers in Education Conference, F2H19-20.