Process Control Management And Cpu Scheduling Computer Science Essay


Application programs are typically divided into processes at execution time; each process is responsible for carrying out a specific function of the overall program.

A program is a passive entity stored in secondary storage until it is launched, whereas a process can be thought of as a program in action. A process is a program in execution; in other words, it is the part of the application program that is currently being executed on the CPU.

Process Control Management, or CPU scheduling, is achieved by the use of special and complex programs known as CPU scheduling algorithms.

These algorithms mainly deal with deciding the priority of the processes that are ready to be moved into the running state. They use many different criteria to decide which task should be executed first.


There are mainly two categories of CPU Scheduling Algorithms:

Pre-emptive Scheduling Algorithms

Pre-emptive scheduling algorithms are algorithms that can suspend a process that is currently running on the CPU in order to run another process that has joined the queue with a higher priority than the one being processed at the moment.

In short, the process with the highest priority is processed upon arrival even if another process is currently on the CPU. Once the highest-priority process has completed, the process with the second highest priority is processed. The process with the highest priority should always be the one currently using the processor.

If a process is currently using the CPU and a new process with a higher priority enters the ready list/queue, the process on the processor is removed and returned to the ready list until it is once again the highest-priority process in the system.

Non-Preemptive Scheduling Algorithms

Non-preemptive scheduling algorithms are the second category of scheduling algorithms; they work quite differently from pre-emptive scheduling algorithms.

Non-preemptive scheduling algorithms are designed so that once a process enters the running state on the CPU, it is not removed from the processor until it has completed its service time.

Once a process enters the CPU it cannot be pre-empted; it holds the CPU until its allocated time (its CPU burst time) is over, at which point it automatically exits the CPU.

Examples of Non-preemptive Scheduling Algorithms (a short illustrative sketch follows this list):

First Come First Served (FCFS)

SJF (Shortest Job First)
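
To make the difference between these two algorithms concrete, the following is a minimal sketch in Python; the process names and CPU burst times are invented for illustration only, and the sketch assumes that all jobs are ready at time zero.

    # Minimal sketch of two non-preemptive schedulers: FCFS and SJF.
    # The jobs below (names and burst times) are hypothetical examples.

    def fcfs(jobs):
        """Run jobs in arrival order; return the average waiting time."""
        waiting, elapsed = [], 0
        for name, burst in jobs:
            waiting.append(elapsed)   # a job waits until all earlier jobs finish
            elapsed += burst          # it then holds the CPU for its whole burst
        return sum(waiting) / len(waiting)

    def sjf(jobs):
        """Run the shortest job first (all jobs assumed ready at time 0)."""
        return fcfs(sorted(jobs, key=lambda job: job[1]))

    jobs = [("P1", 24), ("P2", 3), ("P3", 3)]       # (process, CPU burst in ms)
    print("FCFS average wait:", fcfs(jobs), "ms")   # 17.0 ms
    print("SJF  average wait:", sjf(jobs), "ms")    # 3.0 ms

Because neither function ever interrupts a running job, both are non-preemptive; the only difference is the order in which the ready queue is served.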

Kernel:

The kernel is a program that makes up the central core of a computer operating system. The Kernel has complete control over every activity that occurs within the system.

The kernel itself does not interact directly with the user, but rather interacts with other programs as well as with the hardware devices on the system, such as the processor, memory and disk drives.

The kernel is the first program of the operating system to be loaded into memory during booting (system startup), and it remains in memory for the entire time the computer is up, because its services are required continuously. It is therefore important for the kernel to be as small as possible while still providing all the essential services required by the other parts of the operating system and by the various application programs in the system.

Because of the very critical nature of the kernel, the kernel code is usually loaded into a protected area of memory, preventing it from being overwritten by other, less frequently used parts of the operating system or by any application program.

Common Parts of a Kernel:

Kernel space:

In most kernels, kernel space is where tasks such as process management, memory management, file management and I/O (input/output) management (i.e., accessing the peripheral devices) are carried out.

In this chapter on Process Control Management, we focus in a detailed and more specific manner on how the kernel (the Windows Vista kernel) performs process control management.


User space

User space deals with managing everything a user normally does, such as writing text in a text editor or running programs in a GUI (Graphical User Interface).

This separation into kernel space and user space is made mainly to prevent user data and kernel data from interfering with each other, which could diminish performance or cause the system to become unstable.

Contents of a Kernel:

The contents of a kernel vary considerably according to the operating system, but they typically include

A scheduler:

These are special and complex algorithms that determine how the various processes share the processor's time (including in what order).

A supervisor:

This grants use of the computer to each process once it has been scheduled by the scheduler.

An interrupt handler:

This handles all the requests from the various hardware devices (such as disk drives and the keyboard) that compete for the kernel's services.

A memory manager:

This allocates the system's address spaces (i.e., locations in memory) among all users of the kernel's services.

Types of Kernels:

There are four main types of kernels used by different operating systems; they are listed below:

Monolithic Kernel

Micro Kernel

Hybrid Kernel - the type used by Windows Vista

Nano Kernel

Process Control Management used by Windows Vista Kernel

The Windows Vista operating system is a member of the Windows NT family of operating systems; Windows Vista uses a hybrid kernel whose scheduler applies pre-emptive process control management techniques.

More details will be provided later on in this chapter.

What is a Hybrid Kernel?

The hybrid kernel is actually a combination of two different types of kernel: the microkernel and the monolithic kernel.

The Windows Vista Hybrid kernel's architecture actually consists of two main layers; User Mode and Kernel Mode.

Processes running in Kernel Mode have direct access to hardware resources, while those running in User Mode need to make system calls (ask for permission) in order to gain access to hardware such as main memory, secondary memory and the like.

User Mode deals with all the processes that are run by the user (application software). For example, when you want to save a document in a word processor, the program makes a system call to the part of the kernel that manages hard drive access, after which this access is granted or denied (in other words, the document is or is not stored on the hard drive). Because hardware can in fact be damaged by software, access to it is restricted in this manner.

Windows Vista Hybrid Kernel [1] Diagram:

(Diagram: Windows 2000 architecture.svg)

Windows Vista Hybrid Kernel: Scheduler

As discussed in the chapters above, schedulers/scheduling algorithms are special and complex programs that deal with all of the process control management of the system.

The Windows Vista operating system's hybrid kernel applies a pre-emptive scheduling algorithm.

This scheduler deals with determining which processes will run when there are multiple processes that can be run by the CPU.

Concept of Pre-emptive Scheduling:

STEP 1:

P1 to P5 are processes that are located within the Virtual memory and are ready to be processed by the processor.

P1 is the process that has the highest priority of all the processes that are currently in the ready state.

STEP 2:

Since the Windows Vista Hybrid Kernel uses the Pre-emptive scheduling algorithm, P1 is the process that will be sent to the CPU for processing first.

STEP 3:

Another process, known as P6, enters the ready state. P6 has a higher priority than all the other processes in the ready state waiting for the CPU, and it also has a higher priority than P1, which is currently using the CPU.


STEP 4:

According to the pre-emptive scheduling technique, the process P1 is temporarily removed from the CPU and P6 is then processed by the CPU until its processing is complete.

Once P6 has been processed, the process with the second highest priority is processed; in this example, P1 is sent back to the CPU to finish its processing.
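
The four steps above can be expressed as a small simulation. The following Python sketch is only an illustration of the general pre-emptive priority technique, not of the actual Windows Vista scheduler; the arrival times, burst lengths and the convention that a lower number means a higher priority are all assumptions made for this example.

    import heapq

    # Hypothetical pre-emptive priority scheduler mirroring the P1/P6 walk-through.
    # Each arrival is (arrival_time, priority, name, burst); a lower priority value wins.

    def preemptive_priority(arrivals):
        arrivals = sorted(arrivals)
        ready, time, i, timeline = [], 0, 0, []
        while i < len(arrivals) or ready:
            # admit every process that has arrived by the current time
            while i < len(arrivals) and arrivals[i][0] <= time:
                _, prio, name, burst = arrivals[i]
                heapq.heappush(ready, (prio, name, burst))
                i += 1
            if not ready:                        # CPU idle until the next arrival
                time = arrivals[i][0]
                continue
            prio, name, burst = heapq.heappop(ready)
            next_arrival = arrivals[i][0] if i < len(arrivals) else float("inf")
            run = min(burst, next_arrival - time)   # run until done or preempted
            timeline.append((name, time, time + run))
            time += run
            if run < burst:                      # preempted: remainder goes back
                heapq.heappush(ready, (prio, name, burst - run))
        return timeline

    # P1 (priority 2) starts first; P6 (priority 1) arrives at t=4 and preempts it.
    workload = [(0, 2, "P1", 10), (0, 3, "P2", 5), (4, 1, "P6", 3)]
    for name, start, end in preemptive_priority(workload):
        print(f"{name}: {start} -> {end}")   # P1: 0->4, P6: 4->7, P1: 7->13, P2: 13->18

The output shows P1 being suspended when P6 arrives and resumed once P6 completes, exactly as described in Step 4.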

Secondary Disk Scheduling Management

One of the main responsibilities of the operating system is to make sure that the hardware is used in the most efficient way possible.

When we talk about hardware we must also consider the hard disks (backing storage/secondary disks), which is where secondary disk scheduling management comes into consideration.

How the Secondary Disk/Hard Disk works:

A hard disk drive is a collection of plates called platters. The surface of each platter is divided into circular tracks. Furthermore, each track is divided into smaller pieces called sectors. Disk input and output is done sector by sector. A group of tracks that are positioned on top of each other form a cylinder. There is a head connected to an arm for each surface, which handles all input and output operations.

For each Input/output request, the read/write head of that specific platter is selected. It is then moved over to the specific track. The disk/platter is then rotated to position the desired sector exactly underneath the head and finally, the read/write operation is performed.

Main objectives of Disk Scheduling:

There are two objectives for any disk scheduling algorithm (a small worked example follows the two definitions below):

Maximize throughput:

Throughput is the average number of requests satisfied per time unit.

Minimize the response time:

Response time is the average time that a request must wait before it is satisfied.
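
As a quick worked example of these two metrics (using made-up numbers), consider five requests that are all issued at time zero and finish at 4, 9, 15, 22 and 30 ms:

    # Hypothetical completion times (ms) for five requests issued at t = 0.
    finish_times = [4, 9, 15, 22, 30]
    throughput = len(finish_times) / max(finish_times)        # requests per ms
    avg_response = sum(finish_times) / len(finish_times)      # mean time to be satisfied
    print(f"throughput = {throughput:.2f} requests/ms, average response = {avg_response:.1f} ms")

A good disk scheduling algorithm pushes the first number up and the second number down.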

Types of Disk Scheduling Algorithms:

FCFS Scheduling (First Come, First Served):

In short, this is the type of Disk Scheduling Algorithm that performs operations in the order requested; no reordering of the work queue is done by this algorithm.

Source: http://www.scribd.com/doc/6735087/Disk-Scheduling

SSTF Scheduling (Shortest Seek Time First):

This Disk Scheduling Algorithm's main concept is selecting the request with the minimum seek time from the current read/write head position.

After a request is completed, the algorithm moves the head to the closest remaining request in the work queue, regardless of direction.

Source: http://www.scribd.com/doc/6735087/Disk-Scheduling

SCAN Scheduling Algorithm:

The SCAN scheduling algorithm instructs the read/write head to move from the outside to the inside of the disk servicing requests, and then back from the inside to the outside servicing requests; this action is repeated over and over again.

The head starts moving from one end of the platter towards the other end servicing requests until it gets to the other end of the disk, where the movement of the head is reversed and servicing continues.

(Source: http://www.scribd.com/doc/6735087/Disk-Scheduling)

C-SCAN (Circular SCAN) Scheduling Algorithm:

C-SCAN is a scheduling algorithm that has a lot in common with the SCAN scheduling algorithm, but it also has one significant difference.

The head starts moving from one end of the platter towards the other, servicing requests as it moves. When it reaches the other end of the platter, it immediately returns to the beginning of the disk without servicing any requests on the return journey; that is the major difference between the C-SCAN and SCAN scheduling algorithms.

(Source: http://www.scribd.com/doc/6735087/Disk-Scheduling)

LOOK Scheduling Algorithm:

The LOOK scheduling algorithm is essentially a variant of the SCAN scheduling algorithm; the main difference is that the read/write head stops moving inwards (or outwards) when no more requests are located in that direction.

C-LOOK Scheduling Algorithm:

C-LOOK can be seen as a variant of the C-SCAN scheduling algorithm, but the read/write head only moves as far as the last request in its servicing direction; it then jumps back to the earliest pending request at the other end, without first going all the way to the end of the platter and without servicing any requests on the return journey.

(Source: http://www.scribd.com/doc/6735087/Disk-Scheduling)
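
The differences between these algorithms can be seen by counting total head movement on a sample workload. The sketch below is a simplified model: the request queue, starting cylinder and 200-cylinder disk size are invented for illustration, and only FCFS, SSTF and a basic SCAN are shown.

    # Compare total head movement (in cylinders) for FCFS, SSTF and SCAN
    # on a hypothetical queue of cylinder requests.

    def total_movement(start, order):
        moves, head = 0, start
        for cyl in order:
            moves += abs(cyl - head)     # distance the arm travels for this request
            head = cyl
        return moves

    def fcfs(start, queue):
        return list(queue)               # service strictly in the order requested

    def sstf(start, queue):
        pending, head, order = list(queue), start, []
        while pending:
            nearest = min(pending, key=lambda c: abs(c - head))   # closest request wins
            order.append(nearest)
            pending.remove(nearest)
            head = nearest
        return order

    def scan(start, queue, last_cyl=199):
        # sweep toward the high end first, then reverse and sweep back down
        up = sorted(c for c in queue if c >= start)
        down = sorted((c for c in queue if c < start), reverse=True)
        return up + ([last_cyl] if down else []) + down

    requests = [98, 183, 37, 122, 14, 124, 65, 67]   # pending cylinder requests
    head = 53                                        # current head position
    for algorithm in (fcfs, sstf, scan):
        order = algorithm(head, requests)
        print(f"{algorithm.__name__.upper():4s}: {total_movement(head, order)} cylinders")

On this hypothetical workload FCFS gives the largest total movement (640 cylinders) and SSTF the smallest (236 cylinders), which is exactly the effect the reordering algorithms are designed to achieve.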

Secondary Disk Scheduling Management in Windows Vista

The Windows Vista operating system supports many different types of secondary disks/hard disk drives (HDDs); it is important to note that the scheduling algorithm used depends mainly on the type/model of the hard disk drive.

The main types of Hard Disk Drives use the SCSI, Parallel ATA, and Serial ATA interfaces.

In this report, hard disk drives that use the SATA interface and its command queuing are the ones discussed in more detail.

What is Serial ATA (SATA)?

SATA is an interface standard for connecting storage devices such as hard disks, solid-state drives and CD-ROM drives to a computer; SATA is an evolutionary replacement for the Parallel ATA (PATA) physical storage interface.

Native Command Queuing (NCQ):

Command Queuing improves the performance of the hard disk drive when the PC sends a series of commands to read sectors which are distant from each other. The hard disk drive takes these commands and reorders them, in order to read the maximum possible data within one rotation of the disc.

Native Command Queuing allows the PC to issue multiple commands to the drive; the drive then optimally re-orders the read commands. The re-ordering is based on the scheduling algorithm concepts discussed above, i.e. FCFS, SSTF, SCAN, C-SCAN, LOOK and C-LOOK.

Only the device can optimally reorder the incoming commands since it is only the device that knows disk organization and angular positioning of the sectors within the platters.

For example, suppose the PC asks the hard disk drive to read positions A, B, C and D on the disc. Without any command queuing feature, the hard disk drive needs many more spins of the disc to read all the requested data. With command queuing, the hard disk drive re-orders the commands using the most suitable scheduling algorithm discussed above in order to minimize the number of disc spins, which in turn reduces the seek time and response time.
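
A rough way to picture this re-ordering is to count how far the disc must rotate to reach each requested sector. The sketch below uses invented angular positions for A, B, C and D and a deliberately simplified model of rotation (it ignores seek time entirely); it is only meant to show why servicing requests in the order they pass under the head needs fewer spins than servicing them strictly as A, B, C, D.

    # Simplified, hypothetical model of NCQ-style re-ordering on a single track.
    # Sector angles (degrees) and the starting head angle are invented values.

    def rotations_needed(head_angle, order, angles):
        total, pos = 0.0, head_angle
        for req in order:
            total += (angles[req] - pos) % 360   # the disc only rotates forward
            pos = angles[req]
        return total / 360

    angles = {"A": 300, "B": 90, "C": 200, "D": 10}
    head = 0
    fifo_order = ["A", "B", "C", "D"]                                   # no queuing
    ncq_order = sorted(angles, key=lambda r: (angles[r] - head) % 360)  # re-ordered

    print(f"FIFO order: {rotations_needed(head, fifo_order, angles):.2f} rotations")
    print(f"NCQ order:  {rotations_needed(head, ncq_order, angles):.2f} rotations")

With these made-up positions, the unqueued order needs about two full rotations while the re-ordered sequence is served in less than one.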

SATA hard disk drives are designed specifically for NCQ. NCQ is intended to improve performance and reliability as the transactional workload increases: when an application sends multiple commands to the drive, the drive can optimize the completion of those commands to reduce mechanical workload and improve performance.

The use of NCQ results in higher performance, especially under the heavy transactional workloads usually found in high-performance workstations, network servers, multimedia servers and editing workstations.

NCQ can also improve overall system performance, from booting the computer system to copying files.