To manage many processes effectively, the core of the operating system makes use of what is known as an interrupt. An interrupt is a mechanism used to implement the multitasking concept: it is a signal from hardware or software indicating that an event has occurred. When one or more processes are already running and another process demands attention at the same time, an interrupt occurs.
A hardware interrupt occurs when an I/O operation is completed, such as reading data into the computer from a tape drive.
In other terms, hardware interrupts are used by devices to signal that they require attention from the operating system. Common examples are a hard disk signaling that it has read a series of data blocks, or a network device signaling that it has processed a buffer containing network packets. Interrupts are also used for asynchronous events, such as the arrival of new data from an external network. Hardware interrupts are delivered directly to the CPU by a small network of interrupt management and routing devices.
Hardware interrupts are referred to by an interrupt number. These numbers are mapped back to the piece of hardware that created the interrupt, which enables the system to track which device created the interrupt and when it occurred. In most computer systems, interrupts are handled as quickly as possible: when an interrupt is received, any current activity is stopped and an interrupt handler is executed. The handler preempts any other running programs and system activities, which can slow the entire system down and create latencies. MRG Realtime modifies the way interrupts are handled in order to improve performance and decrease latency.
A software interrupt occurs when an application program terminates or requests certain services from the operating system.
A software interrupt is generated within a processor by executing an instruction. Software interrupts are often used to implement system calls, because they perform a subroutine call together with a CPU ring-level change.
The timer interrupt is used when a certain event must happen at a given frequency.
An interrupt vector is the memory address of an interrupt handler, or an index into an array called an interrupt vector table or dispatch table. Interrupt vector tables contain the memory addresses of interrupt handlers. When an interrupt is generated, the processor saves its execution state via a context switch, and begins execution of the interrupt handler at the interrupt vector.
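The dispatch-table idea described above can be sketched in C as an array of function pointers indexed by interrupt number. This is an illustrative user-space model, not real kernel code; the names (`vector_table`, `dispatch`, vector 32 for the timer) are assumptions for the example.

```c
#include <stddef.h>

#define NUM_VECTORS 256

/* An interrupt handler is modeled as a function taking the interrupt number. */
typedef void (*isr_t)(int irq);

static isr_t vector_table[NUM_VECTORS];   /* the dispatch table */
static int last_handled = -1;             /* recorded for demonstration only */

/* Install a handler at a given vector, as a kernel does at boot time. */
static void register_handler(int irq, isr_t handler) {
    if (irq >= 0 && irq < NUM_VECTORS)
        vector_table[irq] = handler;
}

/* Conceptually invoked by hardware when interrupt `irq` fires:
 * look up the vector and jump through it. */
static void dispatch(int irq) {
    if (irq >= 0 && irq < NUM_VECTORS && vector_table[irq] != NULL)
        vector_table[irq](irq);
}

static void timer_isr(int irq) { last_handled = irq; }
```

A real processor performs the lookup and the context save in hardware or microcode; the table indexing itself is exactly this simple.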
A microkernel tries to run most services, like networking and the file system, in user space. All that is left for the kernel itself are basic services, like memory allocation, scheduling, and messaging (inter-process communication).
In theory, this concept makes the kernel more responsive (since much functionality resides in preemptible user-space threads and processes, removing the need for context switching into the kernel proper) and improves the stability of the kernel by reducing the amount of code running in kernel space. There are also additional benefits for operating systems that support multi-CPU computers (much simpler reentrancy protection and better suitability for asynchronous functionality) and for distributed operating systems (code can use services without knowing whether the service provider is running on the same computer or not). A drawback is the amount of messaging and context switching involved, which makes microkernels conceptually slower than monolithic kernels.
A modular kernel is an attempt to merge the good points of kernel-level drivers and third-party drivers. In a modular kernel, some parts of the system core are located in independent files called modules that can be added to the system at run time. Depending on the content of those modules, the goal can vary: load a driver only if the device is actually found, load a file system only if it is actually requested, or load the code for a specific scheduling or security policy only when it needs to be evaluated.
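The run-time registration that module loading relies on can be sketched in C as a table of named entry points. This is a user-space analogy (a real kernel links object code with `insmod`/`dlopen`-style machinery); the names `load_module`, `module_loaded`, and `dummy_fs_init` are invented for the example.

```c
#include <string.h>

#define MAX_MODULES 8

/* A loadable "module": a name plus an init entry point. */
struct module {
    const char *name;
    int (*init)(void);    /* returns 0 on success */
};

static struct module loaded[MAX_MODULES];
static int nloaded = 0;

/* Analogous to insmod: add a module to the running system
 * and run its initialisation code. */
static int load_module(const char *name, int (*init)(void)) {
    if (nloaded >= MAX_MODULES) return -1;
    loaded[nloaded].name = name;
    loaded[nloaded].init = init;
    nloaded++;
    return init();
}

/* Check whether a module is present, as a driver lookup would. */
static int module_loaded(const char *name) {
    for (int i = 0; i < nloaded; i++)
        if (strcmp(loaded[i].name, name) == 0) return 1;
    return 0;
}

static int dummy_fs_init(void) { return 0; }
```

The point of the sketch is that nothing about the file system "module" is known at compile time; it appears in the table only once it is loaded and initialised.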
Modular and layered kernel compare and contrast
The modular kernel approach requires subsystems to interact with each other through carefully constructed interfaces that are typically narrow (in terms of the functionality that is exposed to external modules). The layered kernel approach is similar in that respect. However, the layered kernel imposes a strict ordering of subsystems such that subsystems at the lower layers are not allowed to invoke operations corresponding to the upper-layer subsystems. There are no such restrictions in the modular-kernel approach, wherein modules are free to invoke each other without any constraints.
What is a context switch?
Context switching occurs when one process temporarily discontinues execution and another process resumes execution in its place. Context switching is performed by the scheduler.
To give each process on a multiprogrammed machine a fair share of the CPU, a hardware clock generates interrupts periodically. This allows the operating system to schedule all processes in main memory (using a scheduling algorithm) to run on the CPU at equal intervals. Each time a clock interrupt occurs, the interrupt handler checks how much time the currently running process has used. If it has used up its entire time slice, the CPU scheduling algorithm (in the kernel) picks a different process to run. Each switch of the CPU from one process to another is called a context switch.
What actions are taken by a kernel to context switch?
Actions taken by a kernel to context switch among threads:
Threads share many resources with the peer threads belonging to the same process, so a context switch among threads of the same process is cheap. It involves switching the register set, the program counter, and the stack, and is relatively easy for the kernel to accomplish.
Actions taken by a kernel to context switch among processes:
Context switches among processes are expensive. Before a process can be switched out, its process control block (PCB) must be saved by the operating system. The PCB contains the following information: the process state, the program counter, the values of the different registers, the CPU scheduling information for the process, memory-management information, possible accounting information, and the I/O status information of the process.
Once the PCB of the currently executing process has been saved, the operating system loads the PCB of the next process to run on the CPU. This is a heavyweight operation and takes considerable time.
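The save-then-load step above can be sketched with a drastically simplified PCB. A real PCB holds far more state (memory maps, open files, accounting), and the register save is done in assembly; the struct layout here is an assumption for illustration.

```c
#include <string.h>

/* A toy PCB holding a few of the fields the text lists. */
struct pcb {
    int  pid;
    int  state;              /* e.g. RUNNING, READY, BLOCKED */
    unsigned long pc;        /* saved program counter */
    unsigned long regs[8];   /* saved general-purpose registers */
};

/* A toy "CPU" whose visible state is just a PC and registers. */
struct cpu {
    unsigned long pc;
    unsigned long regs[8];
};

/* Save the CPU state of the outgoing process into its PCB,
 * then load the incoming process's PCB onto the CPU. */
static void context_switch(struct cpu *cpu, struct pcb *out, struct pcb *in) {
    out->pc = cpu->pc;
    memcpy(out->regs, cpu->regs, sizeof cpu->regs);
    cpu->pc = in->pc;
    memcpy(cpu->regs, in->regs, sizeof in->regs);
}
```

Even in this stripped-down form, the cost is visible: every field copied here corresponds to real memory traffic on each switch.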
System calls are functions that a programmer can call to perform the services of the operating system.
Processes normally run in user mode, but processes and libraries can cause execution in kernel mode. The interface between these two modes is provided by system calls: function calls that make requests to the kernel, which the kernel then executes on behalf of the caller.
UNIX System Calls
System calls are implemented by the operating system. Users cannot execute privileged instructions; they must ask the OS to execute them via system calls. System calls are often implemented using traps: the OS gains control through the trap, switches to supervisor mode, performs the service, switches back to user mode, and gives control back to the user.
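On Linux, this trap mechanism can be exercised directly through the `syscall()` library wrapper, which issues the trap by system-call number; the usual libc wrappers (`getpid()`, `read()`, `write()`, ...) do the same internally. This sketch is Linux-specific and assumes glibc; `SYS_getpid` is the kernel's number for the getpid service.

```c
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

/* Ask the kernel for our process ID by trapping with the
 * raw system-call number, bypassing the libc convenience wrapper. */
static long my_getpid(void) {
    return syscall(SYS_getpid);
}
```

Because both paths end at the same kernel service, the raw call and the libc wrapper return identical results.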
Dual-mode operation provides the means for protecting the operating system from errant users, and users from one another. The two modes are user mode and monitor mode; monitor mode is also called supervisor mode, system mode, or privileged mode. A mode bit is added to the hardware of the computer to indicate the current mode: '0' for monitor mode and '1' for user mode.
Application programming interface
An application program interface (API) is the specific method prescribed by a computer operating system, or by an application program, by which a programmer writing an application can make requests of the operating system or of another application.
An application program interface can be contrasted with a graphical user interface or a command interface (both of which are direct user interfaces) as interfaces to an operating system or a program.
Network application programming interface (API)
These are the services that provide the interface between applications and the protocol software.
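The BSD sockets API is the most common concrete example of such an interface: an application asks the protocol software for an endpoint without touching the protocol implementation itself. The helper name below is invented; the `socket()`/`close()` calls are standard POSIX.

```c
#include <sys/socket.h>
#include <unistd.h>

/* Request a TCP endpoint from the protocol stack via the sockets API.
 * Returns 1 if the kernel handed back a valid descriptor, 0 otherwise. */
static int tcp_socket_works(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);  /* IPv4, stream = TCP */
    if (fd < 0) return 0;
    close(fd);
    return 1;
}
```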
Different methods of passing data to the operating system
In a computer system with different memory address spaces, for example user space and kernel space, data can be communicated as follows. A data structure is defined in kernel space to store the data, and is virtually mapped into an application's user space so that the application can access it through virtual memory addresses. By accessing the data structure directly, the number of data transfers between the address spaces via system calls and/or interrupts can be reduced.
What is Process Scheduling?
Process scheduling is a technique used when there are limited resources and many processes are competing for them. Multiprogramming tries to ensure that there is some process running at all times, in order to utilize the CPU as much as possible.
In a timesharing system, the CPU switches so frequently between jobs that users do not notice that the machine is being shared by many processes, or even many users.
What are the differences between short-term, medium-term, and long-term scheduling?
The long-term scheduler determines which programs are admitted to the system for processing, and so controls the degree of multiprogramming. Once admitted, a job becomes a process.
Medium-term scheduling is part of the swapping function. It relates to processes that are in a blocked or suspended state: they are swapped out of real memory until they are ready to execute, and the swapping-in decision is based on memory-management criteria.
The short-term scheduler, also known as the dispatcher, executes most frequently and makes the finest-grained decision of which process should execute next. It is invoked whenever an event occurs and may lead to the interruption of one process by preemption.
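The short-term scheduler's job, picking the next process from the ready queue, can be sketched as follows. The priority policy and the names (`make_ready`, `pick_next`) are illustrative; real dispatchers use more sophisticated queues.

```c
/* Ready queue for a toy short-term scheduler (dispatcher). */
#define MAX_READY 16

struct proc { int pid; int priority; };  /* higher number = higher priority */

static struct proc ready[MAX_READY];
static int nready = 0;

/* A process becomes ready to run (e.g. its I/O completed). */
static void make_ready(int pid, int priority) {
    if (nready < MAX_READY) {
        ready[nready].pid = pid;
        ready[nready].priority = priority;
        nready++;
    }
}

/* Dispatcher: choose the highest-priority ready process to run next,
 * removing it from the queue. Returns -1 when there is nothing to run. */
static int pick_next(void) {
    if (nready == 0) return -1;
    int best = 0;
    for (int i = 1; i < nready; i++)
        if (ready[i].priority > ready[best].priority) best = i;
    int pid = ready[best].pid;
    ready[best] = ready[--nready];   /* compact the queue */
    return pid;
}
```

Because this routine runs on every scheduling event, real kernels keep it as cheap as possible; the linear scan here is only for clarity.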