In general, software can be divided into two main categories: system software and application software. Application software includes programs that perform user tasks. In other words, application software is designed to be used by end users and does not interact directly with the hardware of a system. Examples of application software are databases, MS-Office, media players and games. In contrast, system software interacts directly with the hardware of a system in order to execute applications. It includes the various programs that are required for a computer to function, such as the Operating System (OS), drivers for hardware devices, compilers, debuggers and linkers. System software is designed to be used by the system itself, not by end users.
An OS is a program that acts as an intermediary between the user of a computer and the computer hardware. An OS shields the user from the low-level details of machine operation and provides frequently needed services. It directs the processor in the use of system resources and uses the computer hardware in an efficient manner.
An OS has the following main responsibilities:
It performs basic tasks such as input and output operations, keeping track of files and directories on the disk and controlling peripheral devices such as printers, scanners and disk drives.
It ensures that no interference occurs between different programs running at the same time.
It provides a platform for third-party application software.
The listed responsibilities address the requirement of managing the hardware and the application programs. They also cover providing an interface between the hardware and the application software for efficient execution of the software.
Definition and Function of OS
An OS integrates the end user and the hardware of the system and this enables the user to perform single or multiple tasks together. It acts as an intermediary between the hardware and the end user. An OS resides on top of the hardware layer as illustrated in Figure 1.1.
Figure 1.1: Overview of an OS
An OS has the following key elements:
A technical layer of software for driving the hardware of the computer, such as disk drives, keyboard and the screen.
A file system that helps organise files in a logical way.
A simple command language that enables users to run their own programs and manipulate their files in a simple way. Some OSs also provide text editors, compilers, debuggers and a variety of other tools.
Since the OS is in charge of a computer, all requests to use its resources and devices need to go through the OS. Therefore, an OS provides legal entry points into its code for performing basic operations, such as writing to devices.
OSs may be classified in two ways, based on the number of tasks they can perform simultaneously and based on the number of users using the system concurrently. Therefore, an OS can be single-user or multi-user and single-task or multi-tasking. An OS performs various functions to act as an intermediary between a user and the hardware of a system. Most of the OSs available today perform the following key functions to provide the required services to the users:
Management of the Processor: An OS is responsible for allocating various tasks of different programs to the processor.
Management of Memory: An OS also allocates the main memory and secondary storage areas to data, user programs and system programs.
Management of Input/Output (I/O): An OS allows unification and control of access of programs to different input and output devices while programs are being executed.
Execution of Applications: An OS is responsible for smooth execution of applications by allocating the resources required for them to operate.
Management of Security: An OS manages the security of a system by allowing only authorised programs to execute and only authorised users to access the system.
File Management: An OS maintains a file system so that users can perform file creation, modification and deletion tasks.
Information Management: An OS provides a number of indicators (such as a debugger) that can be used to diagnose the correct operation of the machine.
Priority System: An OS also specifies and maintains the sequence in which tasks need to be executed in a system.
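The priority system described above can be sketched as a simple priority queue. This is an illustrative model only, not any specific OS's algorithm; the function name and the task format are hypothetical, and it assumes lower numbers mean higher priority:

```python
import heapq

# Hypothetical sketch: tasks are (priority, name) pairs; lower numbers
# run first, as in many OS priority schemes.
def run_by_priority(tasks):
    """Return task names in the order the OS would execute them."""
    heap = list(tasks)
    heapq.heapify(heap)              # min-heap keyed on priority
    order = []
    while heap:
        _priority, name = heapq.heappop(heap)
        order.append(name)
    return order
```

For example, `run_by_priority([(2, "editor"), (1, "scheduler"), (3, "backup")])` returns the names ordered by their priority numbers.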
Types of Operating Systems
To understand the key features of an OS and its architecture, it is important to consider various types of OSs that were used earlier and are being used today. OSs can be categorised as follows:
Simple Batch Processing Systems
Multiprogrammed Batch Systems
Time Sharing Systems
Personal Computer OSs
Multiprocessor OSs
Distributed Systems
Real Time OSs
1.3.1 Simple Batch Processing System
Earlier, computers ran in console mode, through a character user interface. In these systems, the user was required to provide commands to the system to execute various tasks and programs. Also, the user was not able to directly access the different I/O devices. Users made up a job consisting of programs, data and control information, which was then submitted to an operator for execution on the computer system. Generating the output took time depending on the type of job submitted, and the end user had to collect the output from the operator. The OS was very simple and its main task was to transfer control from one job to another. The OS resided in the memory of the computer system as illustrated in Figure 1.2.
Figure 1.2: Memory Layout of a Simple Batch Processing System
To speed up the execution, jobs of similar kinds were batched together for group execution. For example, all the programs written in COBOL or FORTRAN were processed as a batch. All the jobs were collectively executed.
In these systems, the speed of processing jobs increased. However, the CPU was often idle because of the mismatch between the operating speeds of the CPU and the I/O devices: the CPU works in microseconds or nanoseconds, whereas I/O devices work in seconds or minutes. To solve this issue, which was slowing down processing, the spooling mechanism was devised.
Spooling means sending data or processes to a temporary storage area, such as a storage disk. While performing a job, the data is read directly from the disk. Spooling uses the disk as a large buffer for reading ahead from input devices and for storing output until the output devices are ready to process it.
For example, when a print command is given by a user, the output is first written in the buffer on a disk and then printed later. Especially, this applies to printing of large files. Spooling overlaps computation of other tasks with I/O of one job. Spooler may read the input from one job, print the output of second job and simultaneously process a third job as illustrated in Figure 1.3.
Figure 1.3: Spooling Mechanism
By allowing the fast CPU and the slow I/O devices to operate concurrently, each at its own rate, spooling speeds up the performance of a system.
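The spooling idea can be sketched as a toy model, with a queue standing in for the spool area on disk; the function names are hypothetical and the model ignores timing:

```python
from collections import deque

# Toy spooling model (all names are hypothetical): the CPU deposits
# print jobs in a disk buffer and immediately continues; the slow
# printer drains the buffer later, when it becomes available.
spool = deque()

def cpu_print(job):
    spool.append(job)        # fast: just a write to the disk buffer
    return "cpu free"        # the CPU moves straight on to other work

def printer_drain():
    printed = []
    while spool:             # the slow device empties the buffer later
        printed.append(spool.popleft())
    return printed
```

The point of the sketch is that `cpu_print` returns immediately, while the actual printing happens whenever `printer_drain` runs.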
1.3.2 Multiprogrammed Batched OS
With spooling, another aspect of processing was introduced, called job scheduling. If several jobs are ready to be allocated memory for execution, the system analyses the availability of space. If enough space is not available, it organises and schedules the jobs to be executed. This process is called job scheduling. The most important aspect of job scheduling is the ability to multiprogram. In general, it is not possible for a single user to keep either the I/O devices or the CPU busy at all times. Multiprogramming increases CPU utilisation by organising various jobs so that the CPU always has a job to execute.
Figure 1.4: Multiprogrammed System Memory Layout
As illustrated in Figure 1.4, the OS keeps several jobs in memory simultaneously. The OS selects and executes one of the jobs in memory; this job sometimes has to wait until an I/O operation completes. Instead of sitting idle, the CPU then switches to another job in memory. This is in contrast to non-multiprogrammed systems, where the CPU remains idle. Again, when this job needs to wait, the CPU switches to another job, and so on. Eventually, the first job finishes waiting and gets the CPU back for further processing. Choosing one out of the several jobs in memory ready for execution is called CPU scheduling.
In Multiprogramming, an OS is required to take decision for all the users. All jobs meant for processing are kept in a job pool. When the OS selects a job from the job pool, it loads that job into the memory for execution. With multiple jobs executing concurrently, it is also essential to limit their ability to affect one another.
1. A job pool is an area on disk where all the processes are queued for allocation to main memory for execution.
2. Multiprogrammed batch systems are suited for executing large jobs that require minimum or no user interaction.
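A minimal sketch of this switch-on-I/O-wait behaviour might look like the following. It is a simplified model, not a real scheduler: all names are hypothetical, and it assumes every I/O wait completes while the CPU runs other jobs.

```python
# Toy multiprogramming model: each job is a list of steps, where "cpu"
# is a compute burst and "io" means the job must wait for a device.
def run_multiprogrammed(jobs):
    """jobs: dict name -> list of steps ('cpu' or 'io'). Returns a trace
    of (job, step) pairs showing the CPU switching away on I/O waits."""
    trace = []
    ready = list(jobs)                     # jobs resident in memory
    progress = {name: 0 for name in jobs}  # next step index per job
    while ready:
        for name in list(ready):
            steps = jobs[name]
            # run CPU bursts until this job hits I/O or finishes
            while progress[name] < len(steps) and steps[progress[name]] == "cpu":
                trace.append((name, "cpu"))
                progress[name] += 1
            if progress[name] < len(steps):   # hit an "io" step: assume it
                trace.append((name, "io"))    # completes while the CPU
                progress[name] += 1           # serves other jobs
            if progress[name] == len(steps):
                ready.remove(name)
    return trace
```

Running two jobs, one of which blocks on I/O, shows the CPU jumping to the other job instead of idling.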
1.3.3 Time Sharing Systems
A multiprogrammed batch system provides an environment where various system resources are utilised effectively. However, a user cannot interact directly with the system, which means that instructions cannot be given directly to either the OS or a program. Time sharing or multitasking is a logical extension of multiprogramming, which provides interactive use of the system and users are able to give instructions to the OS or to the program directly by using input devices.
A time-shared OS allows many users to use and share a computer simultaneously. For example, the OS of a server may allow many users to use a particular program concurrently. In a time sharing system, only a small amount of CPU time is required by each user for executing actions or commands. The OS uses CPU scheduling and multiprogramming to provide each user with a time slot, also known as a time slice, for the execution of programs.
A program in execution is referred to as a process. A process may require more than one time slice before it either finishes or needs to perform interactive I/O operations. Usually these operations depend on the speed of the user; for example, the input may be bound by the typing speed of the user, which may seem fast to the user but is slow for the system. While this interactive input takes place, instead of letting the CPU sit idle, the OS rapidly switches the CPU to the program of another user. As the system switches rapidly from one user to another, each user is given the impression that the entire system is dedicated to their individual use.
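Time slicing can be illustrated with a toy round-robin scheduler. This is only a sketch: the names are hypothetical, the quantum is arbitrary, and real schedulers account for I/O, priorities and much more.

```python
from collections import deque

# Toy round-robin time sharing: each process runs for one fixed time
# slice (quantum), then goes to the back of the queue if unfinished.
def round_robin(burst_times, quantum):
    """burst_times: dict process -> CPU time still needed.
    Returns the order in which processes finish."""
    queue = deque(burst_times.items())
    finished = []
    while queue:
        name, remaining = queue.popleft()
        remaining -= quantum                 # run for one time slice
        if remaining > 0:
            queue.append((name, remaining))  # not done: requeue
        else:
            finished.append(name)
    return finished
```

With a quantum of 2, a short process finishes before a long one that arrived earlier, which is exactly the responsiveness time sharing is after.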
Time sharing OSs are more complex than multiprogrammed OSs because several jobs are kept in memory simultaneously. The jobs are swapped in and out of memory to the disk to obtain a reasonable response time; the disk serves as backing storage for the main memory. However, there could be memory storage limitations. To overcome this, a technique called virtual memory is used. This technique allows the execution of a job that may not be completely in memory. The main idea behind this technique is that programs can be larger than the physical memory. This arrangement frees the programmer from concerns over memory-storage limitations. Multiprogramming and time sharing are the central concepts of all modern OSs.
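Demand paging, a common way virtual memory is realised, can be sketched as follows. This is a toy model under stated assumptions: pages are loaded only when referenced, and when physical memory is full the oldest resident page is evicted (FIFO replacement, one of several possible policies).

```python
from collections import deque

# Toy demand-paging model: the program's pages may outnumber the
# physical frames; a reference to a non-resident page is a page fault.
def demand_paging(references, num_frames):
    """references: sequence of page numbers. Returns the fault count."""
    resident = deque()       # pages currently in physical memory
    faults = 0
    for page in references:
        if page not in resident:
            faults += 1                      # page fault: load from disk
            if len(resident) == num_frames:
                resident.popleft()           # evict the oldest page
            resident.append(page)
    return faults
```

A program touching four distinct pages can thus run in three frames; it simply pays an extra fault when it reaches the page that is not resident.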
1.3.4 Personal Computers OSs
Earlier, Personal Computers (PCs), also referred to as desktop systems, had CPUs which lacked the features needed to protect an OS from user programs. PCs were designed for a single-user, single-task OS, assuming that only one user uses the machine and runs only a single program at a time.
The OS of such a desktop system consists of two parts: the Basic Input Output System (BIOS), which is stored in Read Only Memory (ROM), and the Disk Operating System (DOS). An example of such a system is MS-DOS.
The BIOS performs a power-on self test when the power is turned on. This test checks whether the memory is working and whether all other relevant units are functioning. After this, it reads a small portion of the OS, known as the boot program, from the disk and loads it into main memory. This boot program then loads the rest of the OS into main memory. This process is known as booting.
However, the goals of these OSs have changed over time. Along with maximising peripheral and CPU utilisation, the OS now maximises user convenience and responsiveness. Some examples of these systems are Microsoft Windows and the Apple Macintosh.
PCs have benefitted from adopting technologies developed for mainframe systems such as multiprogramming, multitasking and time sharing.
1.3.5 Multiprocessor Operating System
Most systems are generally single processor system, which means that they have only one CPU. However, multiprocessor OSs have more than one processor. The processors share the same computer bus, the clock and sometimes memory and peripheral devices. These systems are also known as parallel systems or tightly coupled systems. Multiprocessor systems have the following main advantages:
Increased Throughput: Increasing the number of processors increases the amount of work done in a given amount of time.
Economy of Scale: Multiprocessor systems save cost because they share peripherals, mass storage and power supplies.
Increased Reliability: With multiprocessor system, the functions are distributed among several processors. Therefore, failure of one of the processors does not halt the system but only slows it down. Thus, the entire system runs with slower speed rather than failing altogether. This ability to provide service even if there is failure in one of the processors is known as fault tolerance.
The most widely used multiprocessor systems use Symmetric Multiprocessing (SMP). In SMP, each processor runs an identical copy of the OS and these copies communicate with one another as needed. Some systems use asymmetric multiprocessing, in which each processor is assigned a specific task. A main processor controls the system, and all the other processors either take instructions from the main processor or have predefined tasks. This defines a master-slave relationship: the master processor schedules and allocates work to the slave processors. This is achieved by designing hardware or software to support the master-slave technique; either special hardware differentiates the processors, or the software is written to allow only one master and many slaves. For example, SunOS version 4 uses asymmetric multiprocessing.
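The master-slave idea can be illustrated with a single-threaded simulation in which a master dispatches jobs to the slave processors. Everything here is a hypothetical sketch; the round-robin dispatch policy is just one simple choice a master might make.

```python
# Toy asymmetric multiprocessing model: one master processor assigns
# jobs to slave processors; the slaves never schedule work themselves.
def master_schedule(jobs, num_slaves):
    """Returns a dict mapping each slave index to its assigned jobs."""
    assignments = {slave: [] for slave in range(num_slaves)}
    for i, job in enumerate(jobs):
        assignments[i % num_slaves].append(job)   # round-robin dispatch
    return assignments
```

With three jobs and two slaves, the master spreads the work so that no slave sits idle while another is overloaded.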
1.3.6 Distributed Systems
In distributed systems, computation is distributed among several processors. A distributed system depends on networking for its functionality. Networking provides the channel to communicate, to share computational tasks and to provide various features to the users. A network, in simple terms, is a communication path between several systems.
The processors in a distributed system vary in size and function and are referred to as sites, nodes or computers. The following are the main reasons for designing a distributed system:
Computation: A particular computation is partitioned into a number of sub computations that can execute concurrently. In addition to this, a distributed system allows us to distribute the computations among various nodes. This is known as load sharing.
Resource Sharing: Various users can share the resources.
Reliability: These systems offer reliability because if one site fails, the remaining nodes share its work.
Communication: When many sites are connected to one another by a communication network, the processes at various sites can exchange information among themselves. For example, a user may send an email to another user at the same site or on a different network.
Distributed systems are also referred to as loosely coupled systems because the processors do not share memory or a clock; instead, the processors communicate with each other through various communication lines, such as buses, network cables or even telephone lines.
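The loosely coupled model can be sketched as nodes that share no state and interact only through messages. The `Network` class below is a hypothetical stand-in for real communication lines, not any actual networking API:

```python
from collections import defaultdict, deque

# Toy loosely coupled system: nodes share no memory and interact only
# by sending messages over a simulated network.
class Network:
    def __init__(self):
        self.mailboxes = defaultdict(deque)   # one inbox per node

    def send(self, dest, message):
        self.mailboxes[dest].append(message)  # travels over the "wire"

    def receive(self, node):
        inbox = self.mailboxes[node]
        return inbox.popleft() if inbox else None
```

Node A influences node B only by placing a message in B's inbox; there is no shared variable either node could read directly, which is precisely what "loosely coupled" means.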
1.3.7 Real Time OSs
Real time systems are considered to be special purpose OSs as they are used when rigid time requirements are placed on the operation of a processor or flow of data. Therefore, these are often used as a control device in dedicated applications. For example, sensors send data to a computer which then analyses the data and accordingly instructs the system to take appropriate action. Systems that control scientific experiments, industrial control systems like smoke and fire sensors, medical imaging system use real time systems.
Real time systems are of two kinds: hard and soft. A hard real time system ensures that critical tasks are completed on time. Existing general-purpose OSs do not support hard real time functionality. A less restrictive type is the soft real time system. In this system, critical real time tasks get priority over other, less important tasks and retain that priority until completion. Areas where soft real time systems are used include multimedia, virtual reality and advanced scientific projects such as exploration. Most current OSs, such as the major versions of UNIX and the various versions of the Windows Server OS, provide soft real time functionality.
1.4 Operating System Structure
An OS can be designed as a massive, jumbled collection of programs and processes without any structure, in which any process can call any other process to request a service from it and a series of processes can be activated by the execution of user commands. However, this kind of implementation may be suitable only for small OSs. It is not suitable for a large OS because the lack of structure makes it extremely difficult to specify, code, test and debug the system.
The design of a new large OS is a very critical task. The goal of the system has to be defined properly before the design begins. It is very important to consider the services that an OS provides along with the interface that is made available to the users and programmers. It is equally important to consider its components and interconnections. In this section, we will learn various design approaches needed to handle intricacies of an OS.
1.4.1 Layered Approach
As per the layered approach, an OS is divided into several layers and the functions of the OS are divided among these layers. Each layer has a well defined functionality and well defined interfaces with its two adjoining layers. The modern Unix OS has a layered structure. This is illustrated in Figure 1.5.
Generally, the bottom layer is concerned with hardware and top layer is concerned with users. The overall functionality and features are determined and are separated into components that allow the programmers to hide information making the system more secure.
The modularisation or layered approach is designed so that each layer uses the functions and services of only the lower-level layers. This approach simplifies debugging and system verification: the first layer can be debugged without affecting the other layers, and each layer in the layered structure can be designed, coded and tested independently. Therefore, it simplifies the design, specification and implementation of an OS. However, it is critical that OS functions are assigned carefully to the various layers, because a layer can use only the functions provided by the layers beneath it.
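The rule that each layer calls only the layer beneath it can be sketched with three toy layers. All class and method names here are hypothetical illustrations, not a real OS interface:

```python
# Toy layered structure: each layer exposes services only to the layer
# directly above it, and calls only the layer directly below.
class Hardware:                       # bottom layer
    def read_sector(self, n):
        return f"raw-sector-{n}"

class FileSystem:                     # middle layer: uses only Hardware
    def __init__(self, hw):
        self.hw = hw
    def read_file(self, sector):
        return self.hw.read_sector(sector).upper()

class Shell:                          # top layer: uses only FileSystem
    def __init__(self, fs):
        self.fs = fs
    def cat(self, sector):
        return self.fs.read_file(sector)
```

The `Shell` never touches `Hardware` directly; if the disk interface changes, only `FileSystem` needs rework, which is the debugging and maintenance benefit the layered approach promises.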
1.4.2 Kernel-based Approach
Brinch Hansen suggested the design and structure for a kernel based system. A kernel is a collection of services over which the rest of the OS is built. This structure is illustrated in Figure 1.6.
Figure 1.6: Structure of a Kernel-based OS
The kernel provides an environment for building an OS in which designers have considerable flexibility. It is the core or central component of the system. In this approach, policy and optimisation decisions are not made at the kernel level; they are left to the outer layers. The OS is an orderly growth of software over the kernel, where all decisions regarding process scheduling, resource allocation and the execution environment are made. The kernel allows dynamic creation and control of processes, along with communication between them.
Note: Including too much functionality in a kernel can result in low flexibility at the higher levels; at the same time, too little functionality can result in low functional support at the higher levels.
Microkernel: Carnegie Mellon University developed an OS called Mach that consists of a modular kernel using the microkernel approach, as shown in Figure 1.7.
Figure 1.7: Microkernel Approach
This method structures the OS by removing all non-essential components from the kernel and implementing them as system-level and user-level programs. As a result, a small kernel is obtained. Microkernels allow typical services to run in user space. The kernel provides minimal services, along with communication among the various processes. The main task of the microkernel is to provide a communication facility between a client program and the various services that run at user level. The client program and the service interact indirectly by exchanging messages through the microkernel.
One of the benefits of this approach is the ease of extending the OS. All new services are added in user space and do not require any modification of the kernel. When the kernel does have to be modified, the changes tend to be few because the microkernel is small, and it is easier to port from one hardware design to another. The microkernel also provides more reliability and security, since most of the services run as user processes rather than kernel processes. If a service fails, the rest of the OS remains untouched. Some examples of OSs based on the microkernel approach are Minix, MERT and QNX.
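Message passing through a microkernel can be sketched as follows. This is a hypothetical toy (all names invented for illustration): the kernel only routes messages, while the service runs outside it and the client never calls the service directly.

```python
# Toy microkernel: its only job is to route messages between clients
# and user-level services that have registered with it.
class Microkernel:
    def __init__(self):
        self.services = {}

    def register(self, name, handler):
        self.services[name] = handler     # a user-level service signs up

    def send(self, service, message):
        # route the message to the named service and return its reply
        return self.services[service](message)

kernel = Microkernel()
# A "file service" running in user space, registered with the kernel:
kernel.register("fs", lambda msg: f"fs handled: {msg}")
```

Adding a new service means one more `register` call in user space; the `Microkernel` class itself never changes, mirroring the extensibility argument above.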
1.4.3 Virtual Machines
Conceptually, a computer system consists of layers, hardware being the lowest layer. The kernel, running at the next level, uses the hardware instructions to create a set of system calls for use by the outer layers. The system programs above the kernel can use either system calls or hardware instructions, and they do not differentiate between the two even though the two are accessed differently. Both provide functionality that the programs can use to create even more advanced functions. System programs therefore treat the hardware instructions and the system calls as though they were both at the same level.
Some systems carry this concept a step further by allowing the system programs to be called easily by the application programs. Although the system programs are at a higher level, the application programs may view everything below them in the hierarchy as though it were part of the machine itself. This layered approach is extended to its logical conclusion in the concept of a Virtual Machine (VM), as shown in Figure 1.8.
Figure 1.8: Structure of a VM System
A user can run a single OS on such a VM. The VM concept provides higher flexibility in that it allows different OSs to run on different VMs; the IBM VM/370 system is an example. In this system, the VM software provides a VM to each user for processing: when a user logs in, VM/370 creates a new VM for that user.
There are two primary advantages of VMs. First, a VM completely protects the system resources, providing a robust level of security. Second, it allows system development to be done without disrupting normal system operation, because each VM is isolated from the others. The disadvantage of a VM system is that there is no direct sharing of resources. Two approaches have been implemented for sharing resources. The first approach is to share a minidisk that is accessed by the VMs; with this technique, files can be shared. The other approach is a VM network, wherein a system can send information over a virtual communication network.
VMs are becoming increasingly popular as a means of solving compatibility problems. VMs now exist that allow Windows applications to run on Linux-based computers; the VM runs both the Windows application and the Windows OS. Java is one of the applications that runs on a VM, thereby allowing a Java program to run on any computer system that has a Java virtual machine.
An OS is a collection of programs that acts as an intermediary between the user and the computer hardware.
An OS performs various functions such as process management, I/O management, managing use of main memory and providing security to user jobs and files.
In a simple batch OS, to speed up processing, jobs with the same needs are batched together and executed as a group.
The primary objective of a multiprogrammed OS is to maximise the number of programs executed by a computer in a specified period and to keep the computer's CPU busy at all times.
A time-shared OS allows a number of users to simultaneously use a computer.
The primary objectives of a distributed system are to share resources and computation among several nodes and to increase reliability and throughput.
A real time system is used in the control of physical systems. The response time of such system should match the needs of the physical system.
An OS is large. Therefore, it is critical to consider modularity while designing a system.
The layered-approach considers design of the systems as a sequence of layers.
The virtual machine concept takes the layered approach and treats the kernel of the OS and hardware as though they were all hardware.
1.6 Chapter Review Questions
The OS of a computer serves as a software interface between the user and the ________.
OSs that are used for scientific experiments, industrial control systems and medical imaging systems are:
Real Time Systems
Batch Processing Systems
Time Sharing Systems
All of the above
An OS manages ________.
Disk and I/O devices
All of the above
Which technique was introduced because a single job could not keep both CPU and I/O devices busy?
Real time systems are ________.
Primarily used on mainframe computers
Used for monitoring events as they occur
Used for program development
Used for real time interactive users
Which of the following is not the layer of an OS?
A process is defined as a:
program in execution.
a concurrent program.
any sequential program.
something which prevents deadlock.
In ______ OS, the response time is very critical.
In which kind of system identical copy of OS runs?
Asymmetric multiprocessing system
Symmetric multiprocessing system
Time sharing systems
Which systems are referred to as loosely coupled systems?
Simple Batch Processing