An Operating System (OS) is the most important software on a computer: it manages the computer's memory, processes, software, and hardware. It allows the user to communicate with the computer without knowing a computer language. Users interact with the OS directly through a user interface such as a command line or a graphical user interface (GUI). The OS performs many tasks, such as:
- It determines which applications should run, in what order, and how much time each application is allowed before another application gets a turn to run.
- It manages the sharing of internal memory among multiple applications in the system.
- It handles the input and the output to and from attached hardware devices, such as hard disks, printers, and dial-up ports.
- It sends messages to each application or user or system operator about the status of operation and any errors that may occur.
- It can offload the management of batch jobs (for example, printing) so that the initiating application is freed from this work and no job blocks the others from running.
- On parallel processing computers, an OS can manage how to divide the program in such a way that it runs on more than one processor at a time.
There are many operating systems on the market, but the leading ones are macOS, Microsoft Windows, Linux, and Android.
A. Design principles underlying the operating system.
Symmetric Multiprocessor OS Design Considerations:
In this type of system, the kernel can execute on any processor, and each processor schedules itself from the pool of available processes or threads. The kernel manages the processors and other system resources, so the user sees the system much as they would a multiprogramming uniprocessor system. The OS must provide all the functionality of a multiprogramming system, with additional features to accommodate multiple processors. The design considerations include:
- Simultaneous Concurrent Processes or Threads: The kernel must allow several processors to execute the same kernel code simultaneously. Because multiple processors may be executing the same or different parts of the kernel, kernel tables and management structures must be managed properly to avoid data corruption or invalid operations.
- Scheduling: Any processor may perform scheduling, so corruption of the scheduler's data structures must be avoided. If kernel-level multithreading is used, threads of the same process may be scheduled simultaneously on multiple processors.
- Synchronization: Because multiple active processes may access shared address spaces or shared I/O resources, effective synchronization must be provided carefully.
- Memory Management: Memory management on a multiprocessor must deal with all the issues found on uniprocessor systems, and the OS must also exploit the available hardware parallelism to achieve the best performance. On page replacement, the paging mechanisms on different processors must be coordinated to enforce consistency when several processors share a page or segment.
- Reliability and Fault Tolerance: The OS should provide graceful degradation in the face of processor failure. The scheduler and other portions of the OS must recognize the loss of a processor and restructure management tables accordingly.
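The synchronization point above can be illustrated with a minimal sketch in Python, using threads to stand in for processors updating a shared kernel table; the table name and fields are hypothetical:

```python
import threading

# Hypothetical shared "kernel table": a dict mapping PIDs to states.
process_table = {}
table_lock = threading.Lock()

def register_process(pid, state):
    # Without the lock, two "processors" (threads here) updating the
    # table at once could interleave and corrupt its contents.
    with table_lock:
        process_table[pid] = state

threads = [threading.Thread(target=register_process, args=(i, "ready"))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(process_table))  # 8
```

The lock plays the role of the mutual exclusion a real SMP kernel would enforce around its management structures.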
Multicore OS Considerations:
A multicore OS requires all the design considerations of a symmetric multiprocessor OS: simultaneous concurrent processes or threads, scheduling, synchronization, memory management, and reliability and fault tolerance. Beyond these, some additional considerations apply:
- Parallelism within Applications: Most applications can be subdivided into multiple tasks that execute in parallel, typically implemented as multiple processes, each with multiple threads. The difficulty for the developer is deciding how to split the application's work into independently executable tasks, that is, which pieces should execute asynchronously or in parallel. The OS can then efficiently allocate resources among the parallel tasks as defined by the developer.
- Virtual Machine Approach: In a multicore system, the OS acts more like a hypervisor, and the programs themselves take on many of the duties of resource management. The OS assigns an application a processor and memory, and the program itself, using metadata generated by the compiler, knows best how to use these resources. In a well-protected multitasking environment, resources can be assigned solely to the executing task, with any non-executing task shifted to slower secondary memory and rolled back in when it is recalled at a task switch.
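The "parallelism within applications" idea can be sketched in Python: the developer splits the work into independent tasks and hands them to a pool, and the runtime/OS maps them onto available cores. The chunking scheme and function names here are illustrative only (for CPU-bound work a process pool would normally be used instead of threads):

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # One independently executable task, as decided by the developer.
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Split the work into roughly equal chunks, one per worker.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum(list(range(1000))))  # 499500
```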
B. Major elements of process management:
The processor is the central component of the computer and is involved in everything a computer does. A program consists of a series of machine code instructions which the processor executes one at a time. Processors may be single-core or multicore; multicore processors are now more popular and more widely used. Development of multicore processors has been emphasized so that they can execute multiple threads at the same time, increasing the speed and performance of the system. The major elements of process management include:
- Process Scheduling: This is a major element in process management. The efficiency with which processes are assigned to the processor affects the overall performance of the system. Scheduling is essentially about managing queues to minimize delay and make the most of the processor's time. The OS carries out four types of process scheduling:
Long-term or High-level scheduling: It determines which programs are admitted to the system for processing, and as such controls the degree of multiprogramming.
Medium-term scheduling: It performs the swapping function in which an inactive or blocked process may be swapped into virtual memory and placed in a suspend queue until it is needed again, or until space is available.
Short-term or Low-level scheduling: It determines which process should be executed next each time the currently running process is halted. It ensures efficient utilization of the processor and provides an acceptable response time to users.
I/O scheduling: It decides in which order the block I/O operations will be submitted to storage volumes.
- Process states: Process states are the states a process can be in. There are three basic states: Ready, Blocked, and Running. For a fuller model, two more states, Ready Suspended and Blocked Suspended, are added, giving the five-state model:
Ready: In the ready state, the process is ready to execute when a processor becomes available.
Blocked: In the blocked state, the process is waiting for a specific event to occur before it can proceed.
Running: In the running state, the process is currently being executed by a processor.
Ready Suspended: In this state, a ready process has been suspended (swapped out of main memory) because memory is unavailable.
Blocked Suspended: In this state, a blocked process has been suspended (swapped out of main memory) because memory is unavailable.
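The five states and their legal transitions can be sketched as a small Python state machine; the transition set below is a simplified version of the usual five-state model (creation and exit are omitted):

```python
from enum import Enum, auto

class State(Enum):
    READY = auto()
    RUNNING = auto()
    BLOCKED = auto()
    READY_SUSPENDED = auto()
    BLOCKED_SUSPENDED = auto()

# Legal transitions: dispatch, timeout, event wait/occurs, suspend/activate.
TRANSITIONS = {
    State.READY: {State.RUNNING, State.READY_SUSPENDED},
    State.RUNNING: {State.READY, State.BLOCKED},
    State.BLOCKED: {State.READY, State.BLOCKED_SUSPENDED},
    State.READY_SUSPENDED: {State.READY},
    State.BLOCKED_SUSPENDED: {State.READY_SUSPENDED, State.BLOCKED},
}

def can_move(src, dst):
    return dst in TRANSITIONS[src]

print(can_move(State.READY, State.RUNNING))    # True
print(can_move(State.BLOCKED, State.RUNNING))  # False: must become Ready first
```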
- Process Control Blocks:
A Process Control Block contains all the information the OS needs in order to manage a process: the process ID, the current state of the process, the program counter (the address of the next instruction to be executed), the starting address of the process in memory, and the contents of the processor registers. This information is saved whenever a process makes the transition from one state to another.
- Threads: A thread is a sub-process executed independently of the parent process. Threads execute independently but are managed by the parent process and share the same memory space. Most modern OSs support threads, which become the basic unit of scheduling and execution.
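A minimal Python sketch shows the key property above: threads share the parent process's memory space, so concurrent updates to a shared variable need synchronization:

```python
import threading

counter = 0                     # lives in the process's shared address space
lock = threading.Lock()

def work():
    global counter
    for _ in range(1000):
        with lock:              # all threads see the same `counter`,
            counter += 1        # so updates must be synchronized

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4000
```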
C. Methods for inter-process communication:
Inter-process communication (IPC) is a mechanism that allows processes to communicate with each other and synchronize their actions. Cooperating processes can communicate in two ways:
- Shared Memory: In this method the processes share some variables, and how the shared memory is used depends on how the programmer implements it.
- Message Passing: In this method the processes communicate without using any shared memory; instead a communication link is established between them, and they exchange messages using basic primitives such as send(message, destination) or send(message) and receive(message, host) or receive(message).
A link has a capacity that determines the number of messages that can reside in it temporarily, so every link has a queue associated with it, which may be of zero, bounded, or unbounded capacity. The implementation of the link depends on the situation: it can be either a direct communication link or an indirect communication link.
Direct Communication Links: These links are used when the processes communicate using a specific process identifier, but it is hard to identify the sender ahead of time.
For example: the print server.
Indirect Communication Links: Communication is done via a shared mailbox (port), which consists of a queue of messages. The sender places messages in the mailbox and the receiver picks them up.
Message passing can exchange messages in two ways:
- Direct message passing: The process that wants to communicate must explicitly name the recipient or the sender of the communication. Links are established automatically; a link is associated with exactly one pair of communicating processes, and between each pair there exists exactly one link, which may be unidirectional but is usually bidirectional.
- Indirect message passing: Messages transit through mailboxes, each of which has a unique ID, and processes can communicate only if they share a mailbox. Each pair of processes may share several communication links, which may be unidirectional or bidirectional.
D. Major elements of memory management:
Memory management manages primary memory and moves processes back and forth between main memory and disk during execution, keeping track of every memory location. The major elements are:
- Process Address Space: This is the set of logical addresses that a process references in its code. The OS maps logical addresses to physical addresses at the time of memory allocation. There are three types of addresses used: symbolic addresses, relative addresses, and physical addresses.
- Static and Dynamic Loading: In static loading, the absolute program is loaded into memory before execution starts. In dynamic loading, routines of the library are stored on disk in relocatable form and are loaded into memory only when the program needs them.
- Swapping: To help run multiple large processes in parallel, a process may be swapped temporarily out of main memory to secondary storage, making memory available for another process.
- Memory Allocation: The OS uses two types of allocation. In single-partition allocation, a relocation-register scheme protects user processes from each other and from changes to operating-system code and data. In multiple-partition allocation, main memory is divided into a number of fixed-size partitions, each of which contains exactly one process.
- Fragmentation: Fragmentation is a phenomenon in which processes cannot be allocated to memory blocks because the blocks are too small, so memory remains unused. There are two types: external fragmentation, where the total free memory is sufficient but not contiguous and so cannot be used, and internal fragmentation, where the assigned memory block is bigger than requested and the leftover portion cannot be used by another process.
- Paging: Paging is an important part of implementing virtual memory. It is a memory management technique in which the process address space is broken into fixed-size blocks called pages. When the system allocates a frame to a page, it translates the logical address into a physical address and creates an entry in the page table, which is used throughout the execution of the program. Paging avoids external fragmentation.
- Segmentation: This is a memory management technique in which each job is divided into several segments of different sizes, each segment being a different logical address space of the program. It works like paging, but segments are of variable length, whereas pages are of fixed size. The OS maintains a segment map table for every process, along with a list of free memory blocks, segment numbers, their sizes, and their corresponding memory locations in main memory.
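The paging translation described above (split a logical address into page number and offset, look up the frame, recombine) can be sketched in a few lines of Python; the page size and page-table contents are assumed values for illustration:

```python
PAGE_SIZE = 4096  # bytes; an assumed page size

# Hypothetical page table: page number -> frame number.
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_addr):
    # Split the logical address into (page number, offset within page).
    page, offset = divmod(logical_addr, PAGE_SIZE)
    if page not in page_table:
        raise MemoryError("page fault")  # page not resident in memory
    # Physical address = frame base + same offset.
    return page_table[page] * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 2 -> 2*4096 + 4 = 8196
```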
E. Major elements of scheduling:
An OS uses a scheduler to schedule the processes of the computer system. Scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process according to a strategy. It is an essential part of a multiprogramming operating system: such systems allow more than one process to be loaded into executable memory at a time, and the processes share the CPU using time multiplexing. The elements of scheduling are:
- The Processes: A process is any kind of program running in the system; it may be any request given to the system. Processes are responsible for all actions in the system.
- The Tasks: Tasks are the computing actions that the system needs to fulfil.
- The Processing Times: The time required by a process is referred to as its processing time. This time is not constant and is not the same for all processes.
- The Precedence Relations: The precedence relations define when processes start and stop executing relative to one another.
- Long-Term Scheduler: It determines which programs are admitted to the system for processing: processes are selected from the queue and loaded into memory for execution, where they become candidates for CPU scheduling. It is also called the job scheduler.
- Short-Term Scheduler: Its objective is to increase system performance in accordance with a chosen set of criteria. It moves a process from the ready state to the running state, and is also called the CPU scheduler.
- Medium-Term Scheduler: It removes processes from memory, reducing the degree of multiprogramming, and is in charge of handling the swapped-out processes.
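The short-term scheduler can be illustrated with a minimal round-robin sketch in Python (round robin is just one possible strategy, and the time quantum here is an assumed value):

```python
from collections import deque

def round_robin(bursts, quantum=2):
    """Simulate round-robin short-term scheduling.

    bursts: {pid: total CPU time needed}. Returns PIDs in completion order.
    """
    queue = deque(bursts)        # the ready queue
    remaining = dict(bursts)
    finished = []
    while queue:
        pid = queue.popleft()    # dispatch: ready -> running
        remaining[pid] -= quantum
        if remaining[pid] > 0:
            queue.append(pid)    # timeout: preempted, back to the ready queue
        else:
            finished.append(pid)
    return finished

print(round_robin({"A": 3, "B": 5, "C": 2}))  # ['C', 'A', 'B']
```

C finishes first because its burst fits in a single quantum; B, the longest job, finishes last.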
F. Major elements of file system handling:
A file system is the way files are named and logically stored and retrieved in a system; it controls how data is stored and retrieved on every device. The main components of a file system are:
- Space Management: File systems allocate space in multiple physical units. They are responsible for organizing files and directories, and they keep track of which areas of the media belong to which file and which are unused.
- Filenames: A filename identifies a storage location in the file system. Filenames usually carry restrictions on what names may be given, and they help users easily locate and remember files.
- Field: A field is a single piece of information.
- Record: A record can be an image or text, in electronic or physical format.
- File: It is the collection of records.
- Database: It is the collection of information that is organized so that it is easily accessed, managed and updated.
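The field/record/file hierarchy above can be sketched in Python, using a CSV-style layout where each row is a record and each column a field (the in-memory buffer stands in for a file on disk, and the field names are illustrative):

```python
import csv
import io

# Each dict is a record; each key/value pair is a field.
records = [
    {"id": "1", "name": "alice"},
    {"id": "2", "name": "bob"},
]

buf = io.StringIO()                        # stands in for a file on disk
writer = csv.DictWriter(buf, fieldnames=["id", "name"])
writer.writeheader()
writer.writerows(records)                  # the file: a collection of records

buf.seek(0)
read_back = list(csv.DictReader(buf))
print(read_back[1]["name"])  # bob
```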
G. Methods for handling I/O functions:
I/O software should be designed to be device independent, making it possible to write programs that can access any I/O device without having to specify the device in advance. The methods of handling I/O functions are:
- Programmed I/O: The I/O instructions are written in the computer program. Each data-item transfer is initiated by an instruction in the program, and the transfer is between a CPU register and memory. This method requires constant monitoring of all devices by the CPU.
- Interrupt-initiated I/O: The CPU can proceed with other program execution while still monitoring the devices. When a device is ready for data transfer, it initiates an interrupt request signal to the computer. On detecting the external interrupt signal, the CPU stops the task it was performing, branches to the service program to process the I/O transfer, and then returns to the task it was originally performing.
- Direct Memory Access (DMA): Data is transferred between a fast storage medium and the memory unit. During a DMA transfer the CPU is idle and has no control over the memory buses; the DMA controller takes over the buses to manage the transfer directly between the I/O devices and the memory unit.
- Bus Request: Used by the DMA controller to request that the CPU relinquish control of the buses.
- Bus Grant: Informs the external DMA controller that the buses are in a high-impedance state and that the requesting DMA controller may take control of them. Once DMA has taken control, it transfers the data.
- Burst Transfer: A block sequence is transferred in a continuous burst.
- Cycle Stealing: The DMA controller transfers one word at a time, after which it must return control of the buses to the CPU.
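The cost of programmed I/O, where the CPU must constantly monitor the device, can be made concrete with a toy simulation in Python; the `Device` class, its status register, and the delay value are all hypothetical:

```python
class Device:
    """Toy device whose status register the CPU must poll."""
    def __init__(self, delay):
        self.cycles_left = delay          # cycles until data is ready

    def ready(self):
        self.cycles_left -= 1
        return self.cycles_left <= 0      # the "status register"

    def read(self):
        return 0x42                       # the data item

def programmed_io(device):
    polls = 0
    while not device.ready():             # CPU busy-waits on the status bit,
        polls += 1                        # doing no useful work meanwhile
    return device.read(), polls

data, polls = programmed_io(Device(delay=5))
print(hex(data), polls)  # 0x42 4
```

Interrupt-initiated I/O removes exactly this busy-wait loop: the CPU runs other work until the device's interrupt arrives.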
H. Major elements of the programming interface (what the programmer needs to know to use the system).
The major elements in programming are:
- Input: Input refers to getting data and commands into the system; it is any information the user gives the system to work on.
- Output: Output refers to getting the results out of the system for a given input; it is the result computed from the input.
- Arithmetic: Performing mathematical calculations on the data in the system. Any mathematical calculation required on the input helps produce an output.
- Conditional: Testing to see whether a condition is true or false for any data in the system, for example checking whether the result of a calculation is correct or incorrect.
- Looping: Cycling through a set of instructions until some condition is met. While in a loop, the same instructions are performed repeatedly until the condition is satisfied.
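All five elements above appear together in even a tiny Python program; here the input is hard-coded rather than read interactively, and the task (summing even numbers up to n) is chosen only for illustration:

```python
# Input: in a real script this would come from input() or a file.
raw = "7"
n = int(raw)

# Arithmetic: an accumulator for the running sum.
total = 0

# Looping: cycle through 1..n.
for i in range(1, n + 1):
    # Conditional: keep only the even numbers.
    if i % 2 == 0:
        total += i

# Output: present the result.
print(total)  # 2 + 4 + 6 = 12
```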
I. Advantages and disadvantages of this operating system, including the environments in which it works best.
Linux Operating System:
Advantages:
- Low Cost: Linux is usually free for the public to download at any time, though some companies offer paid support for their Linux distributions.
- Performance: It provides high performance on workstations and networks.
- Stability: Periodic rebooting is not necessary to maintain performance. It handles large numbers of users and does not hang.
- Flexibility: It is flexible enough to be used for high-performance applications, desktop applications, and embedded applications.
- Security: It is very secure and less prone to viruses.
Disadvantages:
- Understanding: It is not easy to understand.
- Software: It has a limited selection of software available.
- Ease: It is not as easy to use as Windows.
- Hardware: It does not support many hardware devices.
Linux is used by corporate, scientific, and academic organizations. It powers development machines and servers at companies such as Google, Facebook, Twitter, and NASA. It is the best fit where system security and reliability are major considerations.
Windows Operating System:
Advantages:
- Ease: Windows has seen a lot of advancement in software and usage; it is very easy to understand and use.
- Software: As it is the most widely used OS, a greater number of software programs, games, and utilities are available for it.
- Hardware: Almost all hardware manufacturers support Windows.
- Development: The development of windows-based applications is easier.
Disadvantages:
- Price: It is costlier than Linux.
- Security: It is less secure than Linux OS.
- Reliability: It needs to be rebooted periodically, or the system may hang.
Windows is used by gamers, novices, and businesses, and is the OS most used by the general public. It is the best fit for gaming and for situations where ease of use matters.