Software Component Of A Computer System Computer Science Essay


An operating system is a software component of a computer system that is responsible for managing the computer's various activities and for sharing its resources; without an operating system, a computer would be useless. An operating system hosts the several applications that run on a computer and handles the operation of the computer hardware. Users and applications access the services offered by the operating system by means of system calls and application programming interfaces. Users interact with operating systems through a command line interface (CLI) or a graphical user interface (GUI). In summary, the operating system enables users to interact with computer systems by acting as an interface between users or application programs and the computer hardware.
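As a small illustration of the system-call idea (a sketch in Python, whose `os` module exposes thin wrappers around the underlying system calls):

```python
import os

# os.getpid() and os.write() are thin wrappers around the getpid() and
# write() system calls: the program asks the kernel for a service rather
# than touching the hardware directly.
pid = os.getpid()
message = f"hello from process {pid}\n".encode()

# File descriptor 1 is standard output; the kernel routes the bytes to
# whatever device or file is attached to it.
os.write(1, message)
```

The program never addresses the terminal hardware itself; the kernel performs the actual device I/O on its behalf.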

The earliest computers had no operating systems. By the early 1960s, commercial computer services and commercial computer merchants had started supplying extensive apparatus for the development, execution, and scheduling of jobs on batch processing systems. Since this advancement of commercial computer services, many operating systems have appeared, and today there are several different types. In this document I will compare Windows Server 2008, a variation of the UNIX operating system (Sun Solaris), and a variation of the Linux operating system (Red Hat) in terms of:

Processor management

Memory management

File system management

I/O management

Application support.


Windows Server 2008 is one of Microsoft's Windows Server line of operating systems. It was released officially on February 27, 2008, and is the successor to Windows Server 2003, released five years earlier. I will document it under the following headings:

Process management

Memory management

File system management

I/O management

Application support.

Processor management: in Windows, processes are created in discrete steps that construct the container for a new program and its first thread; a fork()-like native API exists, but it is used only for compatibility. Processes are containers for the user-mode address space, a general handle mechanism for referencing kernel objects, and threads; threads run inside a process and are the schedulable entities.

Memory management: the Windows virtual memory manager controls how memory is allocated and how paging is performed. The memory manager is designed to operate over a variety of platforms and uses page sizes ranging from 4 Kbytes to 64 Kbytes. Intel x86 and AMD64 platforms have 4096 bytes per page, and Intel Itanium platforms have 8192 bytes per page.

File system management: the developers of Windows designed a new file system, the New Technology File System (NTFS), intended to meet high-end requirements for workstations and servers. Some key features of NTFS are recoverability, security, support for large disks and large files, multiple data streams, journaling, and compression and encryption.

I/O management: the Windows I/O manager works closely with four types of kernel components: the cache manager, file system drivers, network drivers, and hardware device drivers.

Cache manager: the cache manager handles file caching for all file systems. It can increase and decrease the size of the cache devoted to a particular file as the amount of available physical memory varies.

File system drivers: the I/O manager treats a file system as just another device driver and routes I/O requests for the file system volume to the appropriate software driver.

Network drivers: Windows includes integrated networking capabilities and support for remote files. These facilities are implemented as software drivers rather than as part of the Windows executive.

Hardware device drivers: these software drivers access the hardware registers of the peripheral devices using entry points in the kernel's hardware abstraction layer.


Solaris is a UNIX operating system originally developed by Sun Microsystems. Solaris is known for its scalability, especially on SPARC (Scalable Processor Architecture) systems. Solaris was first developed as proprietary (privately owned) software.

Solaris processes: the process is one of the fundamental abstractions of UNIX. Every object in UNIX is represented as either a file or a process. Processes are usually created with fork or a less intensive alternative such as fork1 or vfork. fork duplicates the entire process context, while fork1 duplicates only the context of the calling thread.
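A minimal sketch of fork's behavior (in Python, whose `os.fork` wraps the UNIX system call): both parent and child continue from the same point, distinguished only by fork's return value.

```python
import os
import sys

# fork() duplicates the calling process: the parent receives the child's
# PID, while the child receives 0 from the same call.
pid = os.fork()
if pid == 0:
    # Child process: fork returned 0 here.
    print(f"child  pid={os.getpid()} parent={os.getppid()}")
    sys.exit(0)
else:
    # Parent process: wait for the child, then report.
    os.waitpid(pid, 0)
    print(f"parent pid={os.getpid()} created child {pid}")
```

Because the entire process context is duplicated, both processes see the same variables at the moment of the fork, after which they diverge.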

Solaris, like other UNIX systems, provides two modes of operation: user mode and kernel (or system) mode. Kernel mode is the more privileged mode of operation. Processes can execute in either mode, but user processes usually operate in user mode.

Memory management: UNIX is intended to be machine independent, so its memory management scheme varies from one system to the next. Earlier versions of UNIX simply used variable partitioning with no virtual memory scheme, but current implementations of UNIX and Solaris make use of paged virtual memory.

There are actually two separate memory management schemes: the paging system and the kernel memory allocator. The paging system provides a virtual memory capability that allocates page frames in main memory to processes and also allocates page frames to disk blocks for disk I/O. Because a paged virtual memory scheme is less suited to managing memory allocation for the kernel itself, a separate kernel memory allocator is used for that purpose.
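The kernel allocator's job can be sketched in a few lines: carve page-sized chunks into small fixed-size blocks and serve requests from per-size free lists. This is a toy model only; the block sizes and the fake addresses are illustrative, not any real kernel's layout.

```python
# Toy sketch of a kernel-style small-object allocator.
PAGE_SIZE = 4096
free_lists = {32: [], 64: [], 128: []}   # supported block sizes (illustrative)

def refill(size):
    # Take one "page" and split it into blocks of the requested size.
    base = len(free_lists[size]) * PAGE_SIZE  # fake addresses, for the sketch
    free_lists[size].extend(range(base, base + PAGE_SIZE, size))

def kmalloc(size):
    # Round the request up to the smallest supported block size.
    for block in sorted(free_lists):
        if size <= block:
            if not free_lists[block]:
                refill(block)
            return block, free_lists[block].pop()
    raise ValueError("too large for the small-block allocator")

block, addr = kmalloc(40)
print(block)   # 64 -- a 40-byte request is served from the 64-byte list
```

The point of the design is that kernel requests are small and frequent, so whole 4-Kbyte pages would be wasteful; the allocator amortizes one page across many small blocks.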

File system management: in the UNIX file system, six types of files are distinguished:

Regular, or ordinary: contains arbitrary data in zero or more data blocks. Regular files contain information entered in them by a user, an application program, or a system utility program. The file system does not impose any internal structure on a regular file but treats it as a stream of bytes.

Directory: contains a list of file names plus pointers to the associated index nodes (an index node, or inode, is a control structure that contains the key information needed by the operating system for a particular file). Directories are hierarchically organized; they are actually ordinary files with special protection privileges so that only the file system can write into them, while read access is available to user programs.

Special: contains no data, but provides a mechanism to map physical devices to file names. The file names are used to access peripheral devices, such as terminals and printers.

Named pipes: a pipe is an interprocess communication facility. A pipe file buffers the data received at its input so that a process that reads from the pipe's output receives the data on a first-in, first-out basis.

Links: in essence, a link is an alternative file name for an existing file.

Symbolic links: a data file that contains the name of the file it is linked to.
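Several of these file types can be created and inspected directly. The sketch below (Python, using standard `os` and `stat` calls) makes a regular file, a named pipe, and a symbolic link in a scratch directory and asks the kernel what kind each one is:

```python
import os
import stat
import tempfile

d = tempfile.mkdtemp()
regular = os.path.join(d, "regular.txt")
fifo    = os.path.join(d, "pipe")
link    = os.path.join(d, "alias")

with open(regular, "w") as f:          # regular file: arbitrary byte stream
    f.write("arbitrary data\n")
os.mkfifo(fifo)                        # named pipe (FIFO)
os.symlink(regular, link)              # symbolic link: holds a file name

for path in (regular, fifo, link):
    mode = os.lstat(path).st_mode      # lstat: do not follow the link
    kind = ("regular" if stat.S_ISREG(mode)
            else "fifo" if stat.S_ISFIFO(mode)
            else "symlink" if stat.S_ISLNK(mode)
            else "other")
    print(os.path.basename(path), "->", kind)
```

The uniform treatment is the point: every one of these objects sits in the same directory hierarchy and is examined with the same call.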

I/O management: in UNIX, each individual I/O device is associated with a special file. These are managed by the file system and are read and written in the same manner as user data files, which provides a clean, uniform interface to users and processes: to read from or write to a device, read and write requests are made for the special file associated with the device.
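The device-as-file idea is easy to demonstrate with two well-known special files, /dev/zero and /dev/null, using the same open/read/write calls a program would use on an ordinary file:

```python
import os

# Reading the /dev/zero special file: the kernel routes the read to a
# device driver, which supplies zero bytes on demand.
fd = os.open("/dev/zero", os.O_RDONLY)
data = os.read(fd, 8)
os.close(fd)
print(data)        # b'\x00\x00\x00\x00\x00\x00\x00\x00'

# Writing to /dev/null: the driver accepts and silently discards the bytes.
fd = os.open("/dev/null", os.O_WRONLY)
written = os.write(fd, b"discarded")
os.close(fd)
print(written)     # 9
```

From the program's point of view nothing distinguishes these from regular files; the routing to a driver happens entirely inside the kernel.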

There are two types of I/O in UNIX: buffered and unbuffered. Buffered I/O passes through system buffers, while unbuffered I/O typically involves the direct memory access (DMA) facility, with the transfer taking place directly between the I/O module and the process I/O area.
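The buffered-versus-unbuffered distinction can be felt at user level as well. The sketch below is a user-space analogy only (Python's library buffering, not the kernel's system buffers): with the default buffered mode, bytes collect in a buffer and reach the kernel in fewer, larger write() calls; with `buffering=0`, every write goes straight through as a system call.

```python
import os
import tempfile

path = tempfile.mktemp()

# Buffered: bytes sit in a library buffer until it fills or is flushed.
with open(path, "wb") as f:
    f.write(b"hello")          # may not have reached the kernel yet
    f.flush()                  # force the buffered bytes out

# Unbuffered: each write is passed straight through as a write() call.
with open(path, "wb", buffering=0) as f:
    f.write(b"hello")

print(os.path.getsize(path))   # 5
```

The trade-off mirrors the kernel's: buffering reduces the number of expensive transfers at the cost of the copy through the buffer.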

A process that is performing unbuffered I/O is locked in main memory and cannot be swapped out. This reduces the opportunities for swapping by tying up part of main memory, thus reducing overall system performance. Also, the I/O device is tied up with the process for the duration of the transfer, making it unavailable to other processes. Among the categories of devices recognized by UNIX are:

Disk drives

Tape drives


Communication lines


Application support:


Red Hat Enterprise Linux is an enterprise platform well suited to a broad range of applications across the IT infrastructure. Red Hat offers greater flexibility, efficiency, and control, and works across a broad range of hardware architectures, hypervisors, and clouds.

Process management: a process is the basic context within which all user-requested activity is serviced by the operating system. Any application that runs on a Linux system is assigned a process ID, or PID, a numerical identifier that uniquely identifies the process. This number may be used as a parameter in various function calls that allow processes to be manipulated, such as adjusting a process's priority or killing it altogether.

In most situations this information is relevant only to the system administrator, who may have to debug or terminate processes by referencing the PID. Process management is the series of tasks a system administrator performs to monitor, manage, and maintain instances of running applications.
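The PID-as-handle idea can be shown in a few lines (a Python sketch over the standard `os` calls): a process can look up its own PID and use it in the same calls an administrator's tools would use.

```python
import os

pid = os.getpid()
print("my PID:", pid)

# kill() with signal 0 performs no action but checks that the PID exists
# and that we are allowed to signal it -- a common "is it alive?" probe.
os.kill(pid, 0)

# Lower our own priority by raising the nice value (the return value is
# the new nice level).
new_nice = os.nice(1)
print("new nice value:", new_nice)
```

`kill` with a real signal number (e.g. SIGTERM) is the same mechanism used to terminate a process by its PID.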

Types of processes: there are generally two types of processes that run on Linux: interactive processes and system processes, or daemons. Interactive processes are those invoked by the user that can interact with the user. Interactive processes can be further classified into foreground and background processes. A foreground process is the process the user is currently interacting with, while a background process is one that is not currently able to receive input from its controlling terminal.

Memory management: Linux shares many of the characteristics of the memory management schemes of other UNIX implementations but has its own unique features. There are two main aspects of Linux memory management: process virtual memory and kernel memory allocation.

Virtual memory addressing: Linux makes use of a three-level page table structure, which consists of the following types of tables:

Page directory: an active process has a single page directory that is the size of one page. Each entry in the page directory points to one page of the page middle directory. The page directory must be in the main memory for an active process.

Page middle directory: the page middle directory may span multiple pages. Each entry in the page middle directory points to one page in the page table.

Page table: the page table may also span multiple pages. Each page table entry refers to one virtual page of the process.

The Linux page table structure is platform independent and was designed to accommodate the 64-bit Alpha processor, which provides hardware support for three levels of paging.
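The mechanics of a multi-level lookup are just bit-field extraction: a virtual address is split into three table indices plus a byte offset within the page. The field widths below are illustrative only (4-Kbyte pages with 9-bit indices); real widths depend on the architecture.

```python
# Illustrative field widths -- not any particular architecture's layout.
OFFSET_BITS = 12   # 4 KB pages
TABLE_BITS  = 9
MIDDLE_BITS = 9
DIR_BITS    = 9

def split_vaddr(vaddr):
    """Split a virtual address into (directory, middle, table, offset)."""
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    table  = (vaddr >> OFFSET_BITS) & ((1 << TABLE_BITS) - 1)
    middle = (vaddr >> (OFFSET_BITS + TABLE_BITS)) & ((1 << MIDDLE_BITS) - 1)
    pgdir  = (vaddr >> (OFFSET_BITS + TABLE_BITS + MIDDLE_BITS)) & ((1 << DIR_BITS) - 1)
    return pgdir, middle, table, offset

print(split_vaddr(0x12345678))
```

The hardware (or the kernel, on architectures without three-level support) walks the tables in that order: directory entry, then middle directory entry, then page table entry, then adds the offset.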

Kernel memory allocation: the Linux kernel memory capability manages physical main memory page frames. Its primary function is to allocate and deallocate frames for particular uses.

The foundation of kernel memory allocation for Linux is the page allocation mechanism used for virtual memory management.

File system management: Linux includes a versatile and powerful file handling facility, designed to support a wide variety of file management systems and structures. The approach taken by Linux is to make use of a virtual file system, which presents a single, uniform file system interface to user processes.

I/O management: the Linux I/O kernel facility is very similar to that of other UNIX implementations; the Linux kernel associates a special file with each I/O device driver.

Linux I/O uses a plug-in model based on tables of routines to implement the standard device functions such as open, read, write, and close.
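The "table of routines" idea can be sketched as a dispatch table: each device type registers its own implementations of the standard operations, and generic code calls through the table. All names below are illustrative, not actual kernel symbols.

```python
# Per-device implementations of the standard "read" operation.
def null_read(n):
    return b""                 # reading a null-style device yields end-of-file

def zero_read(n):
    return b"\x00" * n         # a zero-style device supplies zeros on demand

# The plug-in table: device name -> its table of routines.
device_table = {
    "null": {"read": null_read},
    "zero": {"read": zero_read},
}

def dev_read(name, n):
    # Generic I/O layer: look up the driver's routine and call it.
    return device_table[name]["read"](n)

print(dev_read("zero", 4))   # b'\x00\x00\x00\x00'
print(dev_read("null", 4))   # b''
```

Adding a new device type means registering a new row in the table; the generic layer never changes, which is the point of the plug-in design.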

Application support:



Windows Server 2008

Linux operating system


A commercial OS with strong influences from VAX/VMS and requirements for compatibility with multiple OS personalities, such as DOS/Windows and POSIX.

An open-source implementation of UNIX, focused on simplicity and efficiency. Runs on a very large range of processor architectures.

Process management

Processes are containers for the user-mode address space, a general handle mechanism for referencing kernel objects, and threads; threads run in a process and are the schedulable entities.

Processes are both containers and schedulable entities; processes can share address space and system resources, making processes effectively usable as threads.

Memory management

Physical memory dynamically mapped into kernel address space as needed.

Up to 896 MB of physical memory statically mapped into the kernel address space (32-bit), with the rest dynamically mapped into a fixed 128 MB of kernel addresses, which can include non-contiguous use.

File system management

The most common file system used in Windows is NTFS, which has many advanced features related to security, encryption, compression, and so on.

The most common file systems are Ext2, Ext3 and IBM's JFS journaling file system.

I/O management

The I/O system is layered, using I/O request packets to represent each request, which is then passed through layers of drivers.

I/O uses a plug-in model, based on tables of routines to implement the standard device functions such as open, read, write, and close.

System structure

Modular core kernel, with explicit publishing of data structures and interfaces by components, organized in three layers:

The hardware abstraction layer manages processors, interrupts, DMA, and BIOS details.

The kernel layer manages CPU scheduling, interrupts, and synchronization.

The executive layer implements the major OS functions in a fully threaded, mostly preemptive environment.

Linux uses a monolithic kernel. Kernel code and data are statically allocated to non-pageable memory.




Virtualization can simply be defined as the simulation of a computer hardware environment by software. Virtualization is the creation of a virtual version of something such as an operating system, a server, a storage device, or network resources. Virtualization software runs between the computer hardware and the operating system (Windows, Mac OS, Linux), accepting input from the operating system (OS) and redirecting it to the appropriate hardware addresses.

Virtualization software also does the reverse, catching output from the hardware and redirecting it to the appropriate places in the operating system. The good thing about virtualization software is that the operating system doesn't know or care whether it is running on physical hardware or on a virtual machine that exists only in the computer's memory.

Virtualization enables users of personal computers to run Windows on Mac OS, or Linux or Mac OS on personal computers (PCs). It also enables application software written to run under Windows to run on a Mac. The advantages of virtualization include the ability to run older programs, under virtualized copies of the older operating systems they are compatible with, on newer versions of an operating system, and to host multiple web sites on one physical server under different operating systems.

Types of virtualization: there are three major types of virtualization: server virtualization, client (or desktop) virtualization, and storage virtualization. Virtualization represents an abstraction from physical resources, and all uses of virtualization are centered around this concept.

Server virtualization:

Server virtualization is the masking of server resources, including the number and identity of individual physical servers, processors, and operating systems, from server users. The server administrator uses a software application to divide one physical server into multiple isolated virtual environments. The virtual environments are sometimes called virtual private servers, but they are also known as emulations.

There are three popular approaches to server virtualization: the virtual machine model, the paravirtual machine model, and virtualization at the operating system layer.

Virtual machine models are based on the host/guest paradigm; each guest runs on a virtual imitation of the hardware layer. This approach allows guest operating systems to run without modification, and it also allows the administrator to create guests that use different operating systems. Virtual machines use a hypervisor, called a virtual machine monitor, to coordinate instructions to the CPU. It validates all the guest-issued CPU instructions and manages any executed code that requires additional privileges. VMware and Microsoft Virtual Server are examples of virtual machines that use the virtual machine model.

The paravirtual machine model is also based on the host/guest paradigm, and it too uses a virtual machine monitor. In the paravirtual machine model, the virtual machine monitor modifies the guest operating system's code. Like virtual machines, paravirtual machines are capable of running multiple operating systems. Xen and UML use the paravirtual model.

Virtualization at the operating system level works differently; it is not based on the host/guest paradigm. In the OS-level model, the host runs a single OS kernel as its core and exports operating system functionality to each of the guests. This architecture eliminates system calls between layers, which helps reduce CPU usage overhead. It also requires that each partition remain strictly isolated from its neighbors, so that a failure or security breach in one partition cannot affect any of the other partitions. Virtuozzo and Solaris Zones both use OS-level virtualization.


Client (or desktop) virtualization:

Client or desktop virtualization is the use of virtual machines to let multiple network subscribers maintain individualized desktops on a single, centrally located computer or server. The central machine may be at a residence, a business, or a data center. Users may be geographically scattered, but all are connected to the central machine by a local area network (LAN), a wide area network (WAN), or the Internet.

Client (desktop) virtualization retains the advantages of every computer operating as a completely self-contained unit with its own operating system, peripherals, and application programs. Expenses are reduced because resources can be shared and allocated to users as needed, and software conflicts are minimized by reducing the total number of programs stored on any given machine.

Despite the sharing of resources, users can customize and modify their own desktops to meet their specific needs. In this way, client (or desktop) virtualization offers improved flexibility compared with the similar client/server paradigm. The limitations of this type of virtualization include potential security risks if the network is not properly managed, and difficulty in setting up and running certain complex applications such as multimedia.


Storage virtualization: storage virtualization is a concept and term used in computer science. Specifically, storage systems may use virtualization concepts as a tool to enable better functionality and more advanced features within the storage system; a storage system is also known as a storage array or a filer. Storage systems use specialized software and hardware along with disk drives in order to provide very fast and reliable storage for computing and data processing.

A storage system can provide either block-accessed storage or file-accessed storage. Block access is typically delivered over Fibre Channel, SAS, FICON (fiber connection), or other protocols, while file access is often provided using the NFS (Network File System) or CIFS (Common Internet File System) protocols. Within the context of a storage system, there are two primary types of virtualization that can occur: block virtualization and file virtualization.

Block virtualization: in this context, block virtualization refers to the separation of logical storage from physical storage, so that storage may be accessed without regard to the physical storage or its heterogeneous structure. This separation allows the administrators of the storage system greater flexibility in how they manage storage for end users.
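A minimal sketch of that separation: a mapping table translates the logical block numbers seen by hosts into (physical device, block) pairs, so the physical layout can change without the host noticing. All device names and block numbers here are illustrative.

```python
# Logical block number -> (physical device, physical block).
mapping = {
    0: ("diskA", 100),
    1: ("diskA", 101),
    2: ("diskB", 7),     # the logical volume spans two physical disks
}

def resolve(logical_block):
    # Hosts only ever present logical block numbers; this lookup is the
    # virtualization layer's job.
    return mapping[logical_block]

# The administrator migrates logical block 2 to another disk; hosts still
# address it as block 2.
mapping[2] = ("diskC", 55)
print(resolve(2))        # ('diskC', 55)
```

Real block virtualization layers add caching, striping, and snapshots on top, but they all rest on this indirection.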

File virtualization: file virtualization addresses NAS (network-attached storage) challenges by eliminating the dependencies between the data accessed at the file level and the location where the files are physically stored. This provides opportunities to optimize storage utilization, consolidate servers, and perform nondisruptive file migrations.


Pros of virtualization: virtualization provides sustainable growth for companies and has a great number of benefits, some of which are as follows:

Virtualization provides more efficient use of computer processing power.

It consumes less energy, i.e., fewer power and backup resources are required, because it runs on one physical server instead of several.

Hardware upgrades (memory, processor, or any controller) happen at the software level, which ends the endless cycle of hardware purchases and upgrades.

It takes less time to recover a whole operating system, with safer, faster backups and restores.

Ability to use existing computers for remote desktop connections to virtual machines located on the main server.

Faster server/client connection speed through a virtual switch.

Cons of virtualization:

Performance might be slower than actual physical devices at peak hours.

Hardware compatibility issues: sometimes hardware device drivers and applications have compatibility issues; for example, VMware virtual machines do not support FireWire, and there have been a few known issues with 3D hardware acceleration.

Some protocols may not be supported; this is particularly relevant to storage virtualization.

It may face future threats, security threats among them.


This is a list of the major players in virtualization:

VMware: VMware is the big daddy of the virtualization software field; it provides hardware emulation virtualization products called VMware Server and ESX Server. Here are some of the features of VMware:

Easy installation: it installs like an application, with a simple wizard-driven installation and virtual machine creation process.

Hardware support: it runs on any standard x86 hardware, including Intel and AMD hardware-virtualization-assisted systems.

Operating system support: the broadest operating system support of any host-based virtualization platform currently available, including support for Windows Server 2008, Red Hat Enterprise Linux 5, and Ubuntu 8.04.

Independent virtual machine console.

Support for the virtual machine interface: this feature enables transparent paravirtualization, in which a single binary version of the operating system can run either on native hardware or in paravirtualized mode to improve performance in specific Linux environments.

Virtual machine communication interface: it supports fast and efficient communication between a virtual machine and the host operating system, and between two or more virtual machines on the same host.

Xen: Xen is a newer open source contender that provides a paravirtualization solution. Xen comes bundled with most Linux distributions. These are some of the features of Xen:

Better performance and scalability

VGA primary graphics pass-through support to a hardware virtual machine guest, for high-performance graphics using direct access to the graphics card's GPU from the guest.

Online resizing of guest disks without reboot or shutdown.

Memory page sharing and paging to disk for hardware virtual machine guests.


As with any technology, virtualization has gone through the usual hype cycle. In summary, it should be apparent that virtualization is not just a server-based concept; the technique can be applied across a broad range of computing, including entire machines on both the server and the desktop, applications, storage, and networking.

Beyond these core elements, the future of virtualization is still being written. For some companies, such as Red Hat and many of the storage vendors, virtualization is being pushed as a feature to complement their existing offerings.