Process Of The Operating System Computer Science Essay

An operating system is a program that manages the hardware and software resources of a computer; it is loaded into memory when the computer is switched on. Without an operating system, each programmer would have to write their own code to display text and graphics on the monitor, read data files, and interact with other programs.

Earlier computers were not as powerful as they are today: older systems could not run more than one program at a time, so multitasking was simply impossible. Current operating systems, on the other hand, can handle multiple applications at the same time; in fact, a computer without this capability would be considered nearly useless by most users. For a computer to handle multiple applications simultaneously with ease, there must be a more organised way of sharing the CPU.

When a process is created, its state is set to new; once the process is ready to use the CPU, its state changes to ready. It is then inserted into the ready queue, where it waits for its turn to be assigned CPU time so that its instructions can be executed. Once the CPU is available, the process at the front of the ready queue is set to running, which means its instructions are being executed.

While the process is executing, one of two things may happen:

1) The process' instructions are all executed, in which case its state is set to terminated.

2) While the process is running, an I/O request or event wait occurs, which stops the running process.

In the first case, the program finishes executing and then terminates. This means all of the process' instructions have been executed and the CPU is no longer needed. However, termination can also happen if an error occurs in the program that forces the process to terminate prematurely.

In the second case, the actions taken are more complex. For example, suppose the process currently occupying the CPU requires input from the user at the keyboard while its instructions are being executed. This causes the process to stop executing: it changes its state to waiting, loses control of the CPU, and is placed in a waiting queue. Once the input is received from the user, the process returns to the ready state. It must then wait in the ready queue until it is assigned the CPU again, as it cannot simply take hold of the processor.

Once the process is assigned the CPU, it will then continue executing its instructions. Once again, two things may happen: if more I/O is needed, the process will re-enter the waiting state; if not, the process will complete and become terminated once its final instructions are executed.
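
As a rough sketch, the lifecycle described above can be modelled as a small state machine in C. The state names follow this essay's description rather than any particular operating system, and the dispatch() helper is purely illustrative.

    #include <stdio.h>

    /* The five process states described above. */
    typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

    static const char *state_name[] =
        { "new", "ready", "running", "waiting", "terminated" };

    /* One possible transition rule: a running process either finishes,
       blocks on I/O, or (under preemption) returns to the ready queue. */
    proc_state dispatch(proc_state s, int needs_io, int finished) {
        switch (s) {
        case NEW:     return READY;                     /* admitted            */
        case READY:   return RUNNING;                   /* scheduler dispatch  */
        case RUNNING: return finished ? TERMINATED
                             : needs_io ? WAITING : READY;
        case WAITING: return READY;                     /* I/O completed       */
        default:      return TERMINATED;
        }
    }

    int main(void) {
        proc_state s = NEW;
        s = dispatch(s, 0, 0); printf("%s\n", state_name[s]); /* ready   */
        s = dispatch(s, 0, 0); printf("%s\n", state_name[s]); /* running */
        s = dispatch(s, 1, 0); printf("%s\n", state_name[s]); /* waiting */
        s = dispatch(s, 0, 0); printf("%s\n", state_name[s]); /* ready   */
        return 0;
    }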

As stated earlier, a process may enter several states in its lifetime, but where is all this information stored? The answer is the process control block (PCB). The process control block represents each process in the system. It contains information about the process state, program counter, CPU registers, CPU scheduling information, memory management information, accounting information, and I/O status information.
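
A simplified illustration of such a record in C might look like the following. The field names are hypothetical and chosen to mirror the list above; a real PCB (for example Linux's task_struct) holds far more detail.

    /* A simplified, hypothetical process control block (PCB). */
    typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

    #define MAX_OPEN_FILES 16

    struct pcb {
        int         pid;                 /* process identifier              */
        proc_state  state;               /* new/ready/running/waiting/...   */
        void       *program_counter;     /* address of the next instruction */
        long        registers[16];       /* saved CPU register contents     */

        /* CPU scheduling information */
        int         priority;            /* scheduling priority             */
        struct pcb *next_in_queue;       /* pointer into a scheduling queue */

        /* memory management information */
        void       *page_table;          /* or base/limit registers         */

        /* accounting information */
        long        cpu_time_used;       /* CPU time consumed so far        */

        /* I/O status information */
        int         open_files[MAX_OPEN_FILES];  /* open file descriptors   */
    };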

CPU scheduling information includes the process priority, pointers to scheduling queues, and any other scheduling parameters. This is the basis of multiprogrammed operating systems: because the CPU can be switched from process to process, the operating system can make running programs appear to execute simultaneously.

CPU cycles are wasted whenever the CPU has to wait for an I/O operation to complete. The idea behind CPU scheduling is to switch from process to process whenever the CPU would otherwise become idle. This ensures the CPU does not sit idle while a process waits for I/O; instead, it can start executing another process from the ready queue.

Non-preemptive scheduling is a scheme in which, once a process has control of the CPU, no other process can take the CPU away. The CPU is retained until the process either terminates or enters the waiting state. Two algorithms can be used for non-preemptive scheduling. The first is First Come, First Served (FCFS): the process that requests the CPU first is the one that is allocated the CPU first.

The First Come, First Served algorithm is very simple to implement and can be managed with a First In, First Out (FIFO) queue. When the CPU is free, it is allocated to the first process waiting in the FIFO queue. Once that process finishes, the scheduler returns to the queue and chooses the process now at the front. Any new process that requests the CPU must join the back of the queue.
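
The following minimal sketch shows FCFS behaviour with three made-up burst times; the "queue" is simply the order of the array, and each process waits for the sum of the bursts ahead of it.

    #include <stdio.h>

    /* First Come, First Served: processes run in arrival order.
       Burst times here are arbitrary example values (in ms). */
    int main(void) {
        int burst[] = { 24, 3, 3 };          /* CPU bursts in arrival order */
        int n = sizeof burst / sizeof burst[0];
        int wait = 0, total_wait = 0;

        for (int i = 0; i < n; i++) {
            printf("P%d waits %2d ms, runs %2d ms\n", i + 1, wait, burst[i]);
            total_wait += wait;
            wait += burst[i];                /* later processes wait for this one too */
        }
        printf("average waiting time = %.2f ms\n", (double)total_wait / n);
        return 0;
    }

With these example bursts the long first job pushes the average waiting time to 17 ms; running the two short jobs first, as SJF would, brings it down to 3 ms.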

The second non-preemptive scheme is the Shortest Job First (SJF) scheduling algorithm. In this scheme, the process with the shortest next CPU burst gets the CPU first. By moving short jobs ahead of longer ones, the average waiting time is decreased. The flaw is that it is impossible to know the length of the next CPU burst in advance; the only solution is to estimate it, typically from the lengths of previous bursts.
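
The sketch below combines both ideas: it picks the ready process with the shortest predicted burst, and it estimates the next burst with a simple exponential average of past bursts (a common textbook technique). The numbers and the predict_next_burst() helper are illustrative assumptions.

    #include <stdio.h>

    /* Predict the next CPU burst as an exponential average of past bursts:
       tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n), with alpha = 0.5 here. */
    double predict_next_burst(double last_actual, double last_predicted) {
        const double alpha = 0.5;
        return alpha * last_actual + (1.0 - alpha) * last_predicted;
    }

    /* Non-preemptive SJF: choose the ready process with the shortest
       predicted next burst.  Values below are made-up examples. */
    int main(void) {
        double predicted[] = { 8.0, 4.0, 9.0, 5.0 };  /* estimates for P1..P4 */
        int n = 4, shortest = 0;

        for (int i = 1; i < n; i++)
            if (predicted[i] < predicted[shortest])
                shortest = i;
        printf("dispatch P%d (predicted burst %.1f)\n",
               shortest + 1, predicted[shortest]);

        /* after the chosen process actually runs for 6 time units,
           update its estimate for the next round */
        predicted[shortest] = predict_next_burst(6.0, predicted[shortest]);
        printf("new estimate for P%d: %.1f\n", shortest + 1, predicted[shortest]);
        return 0;
    }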

SJF can be seen as a special case of the more general priority scheduling algorithm, in which a priority is associated with each process. The range of priorities depends on the implementation. The job with the highest priority is the one selected from the ready queue, while lower-priority jobs have to wait.

The major problem of priority scheduling (and of SJF) is starvation: a process that is ready to execute may never run because it keeps waiting for the CPU. This is easy to see with jobs of different priorities. High-priority jobs keep being executed as long as the jobs newly inserted into the ready queue are also high priority, with the result that a low-priority job never receives the CPU.

The solution to this problem is aging, which gradually increases the priority of a waiting process, thereby guaranteeing that it will eventually be executed.
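
One possible (and deliberately simplified) way to model aging is sketched below: a low-priority job gains one unit of priority for every scheduling round it spends waiting, so even though a fresh high-priority job arrives each round, the waiting job is eventually selected. The +1-per-round rule and the priority values are assumptions for illustration only.

    #include <stdio.h>

    /* Starvation vs. aging: a low-priority job waits while a fresh
       high-priority job (priority 9) arrives every round.  With aging,
       the waiting job's priority grows until it finally wins.
       Larger number = higher priority; all values are illustrative. */
    int main(void) {
        int low_priority = 1;                  /* the job at risk of starving */

        for (int round = 1; ; round++) {
            int newcomer = 9;                  /* another high-priority arrival */

            if (low_priority > newcomer) {     /* aging finally pays off */
                printf("round %d: low-priority job runs (priority %d)\n",
                       round, low_priority);
                break;
            }
            printf("round %d: newcomer runs, waiting job aged to %d\n",
                   round, low_priority + 1);
            low_priority++;                    /* aging: +1 per round waited */
        }
        return 0;
    }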

Preemptive scheduling is the second scheduling scheme. In preemptive scheduling, there is no guarantee that the process using the CPU will keep it until it is finished, as the running task may be interrupted and rescheduled by the arrival of a higher-priority process. Two preemptive scheduling algorithms are Round Robin (RR) and Shortest Remaining Time First (SRTF).

The Round Robin scheduling scheme is similar to FCFS except that preemption is added. In RR scheduling, the scheduler picks a process from the ready queue and sets a timer to interrupt after one time quantum. Two things may then happen:

1) The process may need less than one time quantum to execute.

2) The process needs more than one time quantum.

In the first case, the process executes once the CPU is allocated to it and gives up the CPU voluntarily, since it needs less than one time quantum. The scheduler then selects the next process from the ready queue.

In the second case, a process that requires more than one time quantum must wait for further turns. In the RR scheme, each process is given only one time quantum at a time; the only way a process can hold the CPU for longer is if it is the only process left. Otherwise, the process is interrupted by the timer after one time quantum and moved to the end of the ready queue, while the next process in line is allocated the CPU for one time quantum.
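
A compact Round Robin sketch with an assumed 4 ms quantum is shown below; the burst times are example values, and the ready queue is approximated by scanning the array in circular order.

    #include <stdio.h>

    #define QUANTUM 4   /* time quantum in ms (illustrative value) */

    /* Round Robin: each process runs for at most one quantum, then
       goes to the back of the ready queue if it still needs the CPU. */
    int main(void) {
        int remaining[] = { 24, 3, 3 };              /* example bursts (ms) */
        int n = sizeof remaining / sizeof remaining[0];
        int left = n, clock = 0;

        while (left > 0) {
            for (int i = 0; i < n; i++) {
                if (remaining[i] <= 0)
                    continue;                        /* already finished */
                int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
                clock += slice;
                remaining[i] -= slice;
                printf("t=%2d: P%d ran %d ms%s\n", clock, i + 1, slice,
                       remaining[i] == 0 ? " (done)" : ", back of queue");
                if (remaining[i] == 0)
                    left--;
            }
        }
        return 0;
    }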

In the Shortest Remaining Time First (SRTF) algorithm, the running process is compared to the processes in the ready queue. If a process in the ready queue has a shorter remaining time than the running process, the running task is preempted and the CPU is given to the shorter process. This algorithm is essentially SJF with preemption added.
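
Finally, a small SRTF sketch: at every time unit the scheduler re-checks which arrived process has the least remaining time, so a newly arrived short job preempts a longer one. The arrival and burst times are made-up example values.

    #include <stdio.h>

    /* Shortest Remaining Time First: at each time unit, run the arrived
       process with the least remaining time.  Example values only. */
    int main(void) {
        int arrival[]   = { 0, 1, 2 };      /* when each process arrives  */
        int remaining[] = { 8, 4, 2 };      /* CPU time still needed (ms) */
        int n = 3, left = n;

        for (int t = 0; left > 0; t++) {
            int pick = -1;
            for (int i = 0; i < n; i++)     /* shortest remaining, already arrived */
                if (arrival[i] <= t && remaining[i] > 0 &&
                    (pick < 0 || remaining[i] < remaining[pick]))
                    pick = i;

            if (pick < 0)
                continue;                   /* CPU idle: nothing has arrived yet */
            remaining[pick]--;              /* run the chosen process for 1 ms   */
            printf("t=%2d: P%d\n", t, pick + 1);
            if (remaining[pick] == 0)
                left--;
        }
        return 0;
    }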

The operating system has grown very powerful over the decades. It started out handling I/O operations and now manages all the resources of the computer. It has become the middleman between the user and the hardware, providing a friendly interface and allowing us to run programs with a single click of a button.

Time sharing between processes would be impossible without the operating system. Because of operating systems like Linux and Windows XP, we can run multiple programs simultaneously without having to worry about conflicts. The operating system has become an essential part of our everyday lives, as more and more devices ship with an OS. In fact, I would not be surprised to see future electrical appliances equipped with one.