Methodical Review of Xen and KVM

Over the past few years, researchers have been pushing toward a utility computing model. Cloud computing delivers resources on demand by means of virtualization. Virtualization has been around since the era of mainframe computing: the idea of using one computer system to emulate another, similar system was recognized early on as useful for testing and resource utilization. As with many computer technologies, IBM led the way with its VM system. In the last decade, VMware's software-only virtual machine monitor has been quite successful. More recently, open-source hypervisors such as Xen and KVM brought virtualization to the open source world, first with a variant termed paravirtualization and later using hardware-assisted full virtualization. This paper surveys two main virtualization technologies: Xen and KVM.

A virtual machine (VM) is an abstract entity between the hardware components and the end user. A "real" physical machine, sometimes described as "bare metal," consists of components such as memory, CPU, motherboard, and network interfaces. The real machine's operating system accesses hardware parts by making calls through a low-level program called the BIOS (basic input/output system). Virtual machines rest on top of the real machine's core parts. Abstraction layers called hypervisors or VMMs (virtual machine monitors) relay calls from the virtual machine to the real machine. Hypervisors available today use the real machine's hardware but allow different virtual machine operating systems and configurations. For example, a host system might run SuSE Linux, while guest virtual machines run Windows 2003 and Solaris 10.

Virtualization technology is the foundation of cloud computing, so an efficient, flexible, and trusted VMM is a basic requirement. Increasingly good solutions are available for CPU and memory virtualization: the performance of a virtual machine is now almost the same as that of the native system, and implementation complexity has also improved. We review two main virtualization techniques in this paper: Xen and KVM. Current approaches to virtualization can be classified into three types: full virtualization, paravirtualization, and software emulation. Each of them has its own pros and cons. Full virtualization works well but is hard to put into practice. Paravirtualization is more efficient after extensive optimization, but the guest OS must be modified. Software emulation is not complicated to implement, but its performance is poor.

Figure 1 illustrates the three approaches to virtualization. The shaded parts indicate what must be implemented in the VMM. This section introduces the three different virtualization approaches.

Figure 1: Three approaches to virtualization

Full Virtualization : In this model, developed by VMware, instead of emulating the processor, the virtual machine runs directly on the CPU. When a privileged instruction is encountered, the CPU issues a trap that can be handled and emulated by the hypervisor. However, there is a set of x86 instructions that do not trap, for example pushf/popf. To manage these cases, a technique called binary translation was developed. In this method the hypervisor scans the virtual machine's memory, intercepts such instructions before they are carried out, and dynamically rewrites the code in memory. The operating system kernel is unaware of the change and operates normally. This combination of trap-and-execute and binary translation allows any x86 operating system to run unmodified upon the hypervisor. While this approach is complex to implement, it yields significant performance gains compared to fully emulating the CPU.
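To make the idea concrete, the toy Python sketch below (purely illustrative, not VMware's actual translator) scans a pretend instruction stream and rewrites non-trapping sensitive instructions such as popf into explicit calls into the hypervisor; the instruction names and the vmm_emulate_* targets are assumptions for illustration only.

```python
# Toy illustration of binary translation: sensitive x86 instructions that do
# not trap are rewritten so the hypervisor regains control before they run.

SENSITIVE = {"pushf", "popf"}   # examples of instructions that silently misbehave outside ring 0

def translate(block):
    """Rewrite a basic block, replacing sensitive instructions with VMM calls."""
    translated = []
    for insn in block:
        if insn in SENSITIVE:
            translated.append(f"call vmm_emulate_{insn}")   # hand control to the hypervisor
        else:
            translated.append(insn)                          # safe instructions run natively
    return translated

if __name__ == "__main__":
    guest_block = ["mov eax, ebx", "popf", "add eax, 1"]
    print(translate(guest_block))
    # ['mov eax, ebx', 'call vmm_emulate_popf', 'add eax, 1']
```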

Para Virtualization : Paravirtualization uses split drivers to handle I/O requests. A backend driver is installed in a privileged VM (the driver domain) to access the physical devices, and it exposes special virtual interfaces to other VMs for I/O access. A frontend driver is installed in the guest OS; it handles the guest's I/O requests and passes them to the backend driver, which interprets the requests and maps them onto physical devices. The physical device drivers in the driver domain then drive the devices to service the requests.
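The split-driver idea can be pictured with the minimal Python sketch below: a frontend queues guest I/O requests onto a shared ring, and a backend in the driver domain dequeues them and performs the real device access. The class names and the queue are illustrative only, not the actual Xen interfaces.

```python
from collections import deque

shared_ring = deque()      # stands in for the shared-memory ring between domains

class FrontendDriver:
    """Runs inside the guest: forwards I/O requests instead of touching hardware."""
    def read_block(self, sector):
        shared_ring.append({"op": "read", "sector": sector})

class BackendDriver:
    """Runs in the privileged driver domain: maps requests onto the physical device."""
    def __init__(self, physical_disk):
        self.disk = physical_disk

    def process(self):
        while shared_ring:
            req = shared_ring.popleft()
            if req["op"] == "read":
                print("reading sector", req["sector"], "from", self.disk)

frontend = FrontendDriver()
frontend.read_block(42)                 # guest issues an I/O request
BackendDriver("/dev/sda").process()     # driver domain services it
```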

Software Emulation : Software emulation is often used in a host-based VMM. A host-based VMM is a normal application and cannot fully control the hardware, so I/O requests must be handled by the host OS. I/O requests raised in the guest OS are intercepted by the VMM and passed to an application in the host OS, which services them via system calls to the host OS. The main overhead in this approach is context switching: switches between the guest OS and the VMM, switches between kernel space (the VMM) and user space (the emulation application), and switches between the emulation application and the host OS kernel.

Overview of Virtualization Implementations

Before our evaluation we first provide a brief overview of the two virtualization technologies: Xen and KVM. Xen is the most widely adopted paravirtualization implementation in use today. Because of paravirtualization, guests exist as independent operating systems and typically exhibit minimal performance overhead, approaching near-native performance. Resource management exists primarily in the form of memory allocation and CPU allocation. Xen file storage can exist either as a single file on the host file system (file-backed storage) or in the form of partitions or logical volumes.

Kernel-based Virtual Machine (KVM) represents the latest generation of open source virtualization. KVM is implemented as a loadable kernel module that converts the Linux kernel into a bare metal hypervisor. KVM was designed after the advent of hardware assisted virtualization. So it did not have to implement features that were provided by hardware. The KVM hypervisor requires Intel VT-X or AMD-V enabled CPUs and leverages those features to virtualize the CPU. By requiring hardware support rather than optimizing with it if available, KVM was able to design an optimized hypervisor solution without requiring the supporting legacy hardware or requiring modifications to the guest operating system.

XEN: ARCHITECTURE

While the software emulation and full-virtualization approaches focus on how to handle a privileged instruction executed in a virtual machine, a different approach was taken by the open source Xen project. Instead of handling privileged instructions at run time, paravirtualization modifies the guest operating system running in the virtual machine and replaces all privileged instructions with direct calls into the hypervisor. In this model, the modified guest operating system is aware that it is running on a hypervisor and can cooperate with the hypervisor for improved scheduling and I/O, removing the need to emulate hardware devices such as network cards and disk controllers.

The Xen hypervisor platform is comprised of two components: the Xen hypervisor itself, which is responsible for the core hypervisor activities such as CPU and memory virtualization, power management and scheduling of virtual machines, and a special, privileged virtual machine called Domain0 or dom0, which the hypervisor loads at boot. This virtual machine has direct access to hardware and provides device drivers and I/O management for the other virtual machines. Each of those virtual machines, known as an unprivileged domain or domU, contains a modified Linux kernel that, instead of communicating directly with hardware, interfaces with the Xen hypervisor, which runs underneath everything and serves as the interface between the hardware and the VMs. CPU and memory access are handled directly by the Xen hypervisor, but I/O is directed to Domain0. The Linux kernel includes "front end" devices for network and block I/O. Requests for I/O are passed to the "back end" process in Domain0, which manages the I/O. The overall system structure is illustrated in Figure 2.

Figure 2 : Architecture of machine running Xen hypervisor [1]

In order to provide a secure operating environment, the x86 architecture provides a mechanism for isolating user applications from the operating system using the notion of privilege levels. In this model the processor provides four privilege levels, also known as rings, arranged hierarchically from ring 0 to ring 3. Ring 0 is the most privileged, with full access to the hardware and the ability to execute privileged instructions. The operating system runs in ring 0, with the kernel controlling access to the underlying hardware. Rings 1, 2 and 3 operate at lower privilege levels and are prevented from executing instructions reserved for the operating system. In commonly deployed operating systems such as Linux and Microsoft Windows, the operating system runs in ring 0 and the user applications run in ring 3; rings 1 and 2 are not used by modern commercial operating systems. This architecture ensures that an application running in ring 3 that is compromised cannot make privileged system calls; however, a compromise of the operating system running in ring 0 exposes the applications running at the less privileged levels.

In a virtualized environment the hypervisor must run at the most privileged level, controlling all hardware and system functions. In Xen, the virtual machines run in a lower privileged ring, typically in ring 1 while user space runs in ring 3.

Figure 3 : x86 processor privilege levels [17]


CPU Scheduling

Xen currently schedules domains according to the Borrowed Virtual Time (BVT) scheduling algorithm [2]. This algorithm is work-conserving and has a special mechanism for low-latency wake-up (or dispatch) of a domain when it receives an event. Fast dispatch is particularly important to minimize the effect of virtualization on OS subsystems that are designed to run in a timely fashion; for example, TCP relies on the timely delivery of acknowledgments to correctly estimate network round-trip times. BVT provides low-latency dispatch by using virtual-time warping, a mechanism which temporarily violates 'ideal' fair sharing to favor recently-woken domains. However, other scheduling algorithms could be trivially implemented over Xen's generic scheduler abstraction. Per-domain scheduling parameters can be adjusted by management software running in Domain0.
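A highly simplified Python sketch of the BVT selection rule (after [2]) follows: the scheduler always runs the domain with the smallest effective virtual time, where a recently woken, latency-sensitive domain may "warp" backwards in virtual time so that it is dispatched sooner. The field names and numbers are illustrative assumptions, not Xen's actual data structures.

```python
# Simplified BVT-style selection: pick the runnable domain with the smallest
# *effective* virtual time (actual virtual time minus warp, if warping is active).

class Domain:
    def __init__(self, name, avt, warp=0, warped=False):
        self.name = name
        self.avt = avt          # actual virtual time accumulated so far
        self.warp = warp        # how far a latency-sensitive domain may warp back
        self.warped = warped    # set briefly after a wake-up event

    def evt(self):
        return self.avt - self.warp if self.warped else self.avt

def pick_next(domains):
    return min(domains, key=lambda d: d.evt())

doms = [Domain("batch", avt=1000),
        Domain("interactive", avt=1100, warp=300, warped=True)]  # just received an event
print(pick_next(doms).name)   # "interactive": warping lets it preempt despite a higher AVT
```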

Xen provides guest OSes with notions of real time, virtual time and wall-clock time [1]. Real time is expressed in nanoseconds elapsed since machine boot, is maintained to the accuracy of the processor's cycle counter, and can be frequency-locked to an external time source (for example, via NTP). A domain's virtual time only advances while it is executing; this is typically used by the guest OS scheduler to ensure correct sharing of its timeslice between application processes. Finally, wall-clock time is specified as an offset to be added to the current real time. This allows the wall-clock time to be adjusted without affecting the forward progress of real time. Each guest OS can program a pair of alarm timers, one for real time and the other for virtual time. Guest OSes are expected to maintain internal timer queues and use the Xen-provided alarm timers to trigger the earliest timeout.

Memory Management

The initial memory allocation, or reservation, for each domain is specified at the time of its creation; memory is thus statically partitioned between domains, providing strong isolation. A maximum allowable reservation may also be specified: if memory pressure within a domain increases, it may then attempt to claim additional memory pages from Xen, up to this reservation limit. Conversely, if a domain wishes to save resources, perhaps to avoid incurring unnecessary costs, it can reduce its memory reservation by releasing memory pages back to Xen.

Xen domains use a balloon driver [1], which adjusts a domain's memory usage by passing memory pages back and forth between Xen and the domain's own page allocator. Although Linux's memory-management routines could be modified directly, the balloon driver makes adjustments using existing OS functions, thus simplifying the Linux porting effort. However, paravirtualization can be used to extend the capabilities of the balloon driver; for example, the out-of-memory handling mechanism in the guest OS can be modified to automatically alleviate memory pressure by requesting more memory from Xen.
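The balloon mechanism can be pictured with the following illustrative Python sketch: inflating the balloon allocates pages inside the guest and hands them back to Xen, shrinking the guest's usable memory without Xen ever touching the guest's memory manager directly. This is a conceptual model under assumed page counts, not the real driver.

```python
class BalloonDriver:
    """Conceptual model of a balloon driver inside a guest with a fixed reservation."""
    def __init__(self, reservation_pages):
        self.guest_pages = reservation_pages   # pages currently usable by the guest
        self.balloon_pages = 0                 # pages handed back to the hypervisor

    def inflate(self, n):
        """Give n pages back to Xen (the guest sees less memory)."""
        n = min(n, self.guest_pages)
        self.guest_pages -= n
        self.balloon_pages += n

    def deflate(self, n):
        """Reclaim up to n pages from Xen (bounded by the original reservation)."""
        n = min(n, self.balloon_pages)
        self.guest_pages += n
        self.balloon_pages -= n

b = BalloonDriver(reservation_pages=65536)   # 256 MB worth of 4 KB pages
b.inflate(16384)                             # release 64 MB back to Xen
print(b.guest_pages, b.balloon_pages)        # 49152 16384
```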

Most operating systems assume that memory comprises at most a few large contiguous extents. Because Xen does not guarantee to allocate contiguous regions of memory, guest OSes will typically create for themselves the illusion of contiguous physical memory, even though their underlying allocation of hardware memory is sparse. Mapping from physical to hardware addresses is entirely the responsibility of the guest OS, which can simply maintain an array indexed by physical page frame number. Xen supports efficient hardware-to-physical mapping by providing a shared translation array that is directly readable by all domains; updates to this array are validated by Xen to ensure that the OS concerned owns the relevant hardware page frames.
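The mapping described above reduces, in essence, to an array lookup. The hedged Python sketch below models a guest-maintained physical-to-machine (P2M) table and the shared machine-to-physical (M2P) table; the frame numbers are made-up examples.

```python
# Guest-maintained physical-to-machine table: index = guest "physical" frame
# number, value = real machine frame allocated by Xen (possibly non-contiguous).
p2m = [107, 42, 985, 13]            # guest physical frames 0..3

# Shared, Xen-validated machine-to-physical table (read-only to guests).
m2p = {107: 0, 42: 1, 985: 2, 13: 3}

def guest_phys_to_machine(pfn):
    return p2m[pfn]

def machine_to_guest_phys(mfn):
    return m2p[mfn]

print(guest_phys_to_machine(2))    # 985: guest sees contiguous memory, Xen's frames are sparse
print(machine_to_guest_phys(985))  # 2
```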

Device I/O

Rather than emulating existing hardware devices, as is typically done in fully-virtualized environments, Xen exposes a set of clean and simple device abstractions. This allows an interface that is both efficient and satisfies the requirements for protection and isolation. I/O data is transferred to and from each domain via Xen, using shared-memory, asynchronous buffer-descriptor rings. These provide a high-performance communication mechanism for passing buffer information vertically through the system, while allowing Xen to efficiently perform validation checks. Similar to hardware interrupts, Xen supports a lightweight event delivery mechanism which is used for sending asynchronous notifications to a domain. These notifications are made by updating a bitmap of pending event types and, optionally, by calling an event handler specified by the guest OS.
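A minimal producer/consumer sketch of such a descriptor ring is given below. It is illustrative only; the real Xen ring protocol also carries response entries and signals progress through event channels.

```python
class DescriptorRing:
    """Fixed-size ring of I/O descriptors shared between a guest and Xen (conceptual)."""
    def __init__(self, size=8):
        self.slots = [None] * size
        self.prod = 0     # producer index: guest places requests here
        self.cons = 0     # consumer index: backend removes requests here

    def push(self, descriptor):
        if self.prod - self.cons == len(self.slots):
            raise BufferError("ring full")
        self.slots[self.prod % len(self.slots)] = descriptor
        self.prod += 1

    def pop(self):
        if self.cons == self.prod:
            return None
        desc = self.slots[self.cons % len(self.slots)]
        self.cons += 1
        return desc

ring = DescriptorRing()
ring.push({"op": "write", "buffer": 0xBEEF, "length": 4096})
print(ring.pop())    # the backend consumes the request asynchronously
```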

Storage

In Xen, only Domain0 has direct unchecked access to physical (IDE and SCSI) disks. All other domains access persistent storage through the abstraction of virtual block devices (VBDs) [1], which are created and configured by management software running within Domain0. Allowing Domain0 to manage the VBDs keeps the mechanisms within Xen very simple. A VBD comprises a list of extents with associated ownership and access control information, and is accessed via the I/O ring mechanism. A typical guest OS disk scheduling algorithm will reorder requests prior to enqueuing them on the ring in an attempt to reduce response time, and to apply differentiated service. A VBD thus appears to the guest OS somewhat like a SCSI disk. Xen services batches of requests from competing domains in a simple round-robin fashion; these are then passed to a standard elevator scheduler before reaching the disk hardware.

Security

Xen ensures a high level of security via a variety of features: guest isolation, privileged access, a small code base, and operating system separation. Guest isolation ensures that every DomainU guest is isolated from the other DomainU guests, with no way to access each other's memory or network connections. Privileged access ensures that only Domain0, or single-purpose control guests, are given the ability to communicate with the hardware via the hypervisor. The Xen hypervisor has a tiny code base, which limits the attack surface. Finally, the hypervisor is separated from Domain0 and DomainU, so the hypervisor cannot be used as a path to attack the guest operating systems.

KVM: Architecture

The Kernel-based Virtual Machine (KVM) project represents the latest generation of open source virtualization. The goal of the project was to create a modern hypervisor that builds on the experience of previous generations of technologies and leverages the modern hardware available today. KVM is implemented as a loadable kernel module that converts the Linux kernel into a bare metal hypervisor. Two key design principles that the KVM project adopted have helped it mature rapidly into a stable, high-performance hypervisor and overtake other open source hypervisors.

The KVM hypervisor requires Intel VT-X or AMD-V enabled CPUs and leverages those features to virtualize the CPU.

Figure 4 : KVM Architecture [17]

In the KVM architecture the virtual machine is implemented as a regular Linux process, scheduled by the standard Linux scheduler. In fact, each virtual CPU appears as a regular Linux process. This allows KVM to benefit from all the features of the Linux kernel. Device emulation is handled by a modified version of QEMU that provides an emulated BIOS, PCI bus, USB bus, and a standard set of devices such as IDE and SCSI disk controllers, network cards, and so on.

CPU Scheduling

In the KVM model, a virtual machine is a Linux process, scheduled and managed by the standard Linux kernel. The Linux kernel includes an advanced process scheduler called the completely fair scheduler (CFS) [16], which provides advanced process scheduling facilities based on experience gained from large system deployments. The CFS scheduler has been extended with the cgroups (control groups) resource manager, which allows processes (and in the case of KVM, virtual machines) to be given shares of system resources such as memory, CPU, and I/O. Unlike other virtual machine schedulers that give proportions of resources to a virtual machine based only on weights, cgroups allow minimums to be set, not just maximums, guaranteeing resources to a virtual machine while still allowing it to use more resources if they are available.
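As an illustration, assuming a cgroup v1 hierarchy with the cpu controller mounted at /sys/fs/cgroup/cpu and a known QEMU/KVM process ID, a CPU weight for one guest could be configured roughly as in the hedged Python sketch below; the group name, PID and share value are placeholders.

```python
import os

CGROUP_ROOT = "/sys/fs/cgroup/cpu"        # assumes the cgroup v1 cpu controller is mounted here

def set_vm_cpu_shares(vm_name, qemu_pid, shares=2048):
    """Place one KVM guest's QEMU process in its own cgroup and set its CPU weight."""
    group = os.path.join(CGROUP_ROOT, vm_name)
    os.makedirs(group, exist_ok=True)
    with open(os.path.join(group, "cpu.shares"), "w") as f:
        f.write(str(shares))               # relative CPU weight (cgroup v1 default is 1024)
    with open(os.path.join(group, "tasks"), "w") as f:
        f.write(str(qemu_pid))             # move the VM's process into the group

# Example (requires root and a running guest):
# set_vm_cpu_shares("guest1", qemu_pid=12345, shares=2048)
```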

Memory Management

KVM inherits the powerful memory management features of Linux. The memory of a virtual machine is stored just as the memory of any other Linux process is, and can be swapped, backed by large pages for better performance, shared, or backed by a disk file. KVM supports the latest memory virtualization features from CPU vendors, with support for Intel's Extended Page Tables (EPT) and AMD's Rapid Virtualization Indexing (RVI), to deliver reduced CPU utilization and higher throughput.

Memory page sharing is supported through a kernel feature called Kernel Same-page Merging (KSM) [17]. KSM scans the memory of each virtual machine, and where virtual machines have identical memory pages it merges them into a single page shared between the virtual machines, storing only a single copy. If a guest attempts to change a shared page, it is given its own private copy.

When consolidating many virtual machines onto a host there are many situations in which memory pages may be shared: for example, unused memory within a Windows virtual machine, or common DLLs, libraries, kernels, and other objects shared between virtual machines. With KSM, more virtual machines can be consolidated on each host, reducing hardware costs and improving server utilization.
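On a Linux/KVM host, KSM is driven through sysfs. The sketch below uses the standard /sys/kernel/mm/ksm attributes from the mainline kernel to start the ksmd scanner and report how many pages are currently shared; the tuning values are illustrative and the script must run as root.

```python
KSM = "/sys/kernel/mm/ksm"      # standard sysfs location for KSM on mainline Linux

def enable_ksm(pages_to_scan=100, sleep_millisecs=200):
    """Start the KSM daemon and tune its scan rate (requires root)."""
    with open(f"{KSM}/pages_to_scan", "w") as f:
        f.write(str(pages_to_scan))
    with open(f"{KSM}/sleep_millisecs", "w") as f:
        f.write(str(sleep_millisecs))
    with open(f"{KSM}/run", "w") as f:
        f.write("1")             # 1 = run, 0 = stop

def ksm_savings():
    """Return the shared/sharing page counters exported by the kernel."""
    with open(f"{KSM}/pages_shared") as f:
        shared = int(f.read())
    with open(f"{KSM}/pages_sharing") as f:
        sharing = int(f.read())
    return shared, sharing

# enable_ksm(); print(ksm_savings())
```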

Device I/O

KVM supports hybrid virtualization, in which paravirtualized drivers are installed in the guest operating system to allow virtual machines to use an optimized I/O interface rather than emulated devices, delivering high-performance I/O for network and block devices. The KVM hypervisor uses the VirtIO [15] standard, developed by IBM and Red Hat in conjunction with the Linux community, for its paravirtualized drivers. VirtIO is a hypervisor-independent interface for building device drivers, allowing the same set of device drivers to be used with multiple hypervisors and improving guest interoperability. Today many hypervisors use proprietary interfaces for paravirtualized device drivers, which means that guest images are not portable between hypervisor platforms.

Storage

KVM is able to use any storage supported by Linux to store virtual machine images, including local disks with IDE, SCSI and SATA, Network Attached Storage (NAS) including NFS and SAMBA/CIFS, or SANs with support for iSCSI and Fibre Channel. Multipath I/O may be used to improve storage throughput and to provide redundancy. KVM also supports virtual machine images on shared file systems such as the Global File System (GFS2), allowing images to be shared between multiple hosts or shared using logical volumes. Disk images support thin provisioning, improving storage utilization by allocating storage only when it is required by the virtual machine rather than allocating the entire image up front. The native disk format for KVM is QCOW2 [19], which supports multiple levels of snapshots, as well as compression and encryption.
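Thin-provisioned QCOW2 images and their snapshots are typically managed with the qemu-img tool. The Python sketch below simply wraps two standard qemu-img invocations; the image path, size and snapshot name are illustrative.

```python
import subprocess

def create_thin_image(path="guest1.qcow2", size="20G"):
    """Create a thin-provisioned QCOW2 image: space is allocated only as the guest writes."""
    subprocess.run(["qemu-img", "create", "-f", "qcow2", path, size], check=True)

def snapshot_image(path="guest1.qcow2", name="before-upgrade"):
    """Take an internal QCOW2 snapshot of the image."""
    subprocess.run(["qemu-img", "snapshot", "-c", name, path], check=True)

if __name__ == "__main__":
    create_thin_image()
    snapshot_image()
```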

Security

Since a virtual machine is implemented as a Linux process, it leverages the standard Linux security model to provide isolation and resource controls. Security-Enhanced Linux (SELinux) [20] provides strict resource isolation and confinement for processes running on the Linux kernel. The sVirt [21] project builds on SELinux to provide an infrastructure that allows an administrator to define policies for virtual machine isolation. sVirt ensures that a virtual machine's resources cannot be accessed by any other process (or virtual machine), and this can be extended by the administrator to define fine-grained permissions. Together, SELinux and sVirt provide a high level of security and isolation.

Qualitative comparison

An ideal VMM runs the guest at native speed. Different VMMs make different trade-offs in their attempts to approach this ideal. The following tables briefly summarize the results of the survey. Todd Deshane et al. [13] compared the performance of KVM and Xen against base Linux; their results are depicted in the following table.

Performance    Xen      KVM
CPU            0.999    0.993
Disk Write     0.855    0.934
Disk Read      0.852    0.994

Table 1 : Overall performance of Xen and KVM relative to base Linux [13]

Feature                   Xen                                    KVM
Type of Virtualization    Paravirtualization                     Full virtualization
CPU Scheduling            Borrowed virtual time algorithm [2]    Completely fair scheduler [16]
Memory Management         Balloon driver [1]                     Kernel Same-page Merging [17]
I/O Operation             Buffer-descriptor rings [1]            VirtIO [15]
Disk Access               Virtual block devices [1]              QCOW2 [19]
Network                   Buffer-descriptor rings [1]            VirtIO [15]

Table 2 : Comparison of implementation details of Xen and KVM

Feature                                                    Xen                          KVM
Type of Hypervisor                                         Standalone thin hypervisor   Linux kernel as hypervisor
De-privileging                                             Yes                          No
Multiprocessor Guests                                      Yes                          No
Live Migration                                             Yes                          Yes
Hypervisor Complexity (Installation and Management)        High                         Low
Virtual Machine Complexity (Installation and Management)   Low                          Low
Security                                                   High                         High

Table 3 : Comparison of general features of Xen and KVM

In summary, full virtualization and paravirtualization VMMs suffer different overheads. While the paravirtualization used in Xen requires careful engineering to ensure efficient execution of guest kernel code, the full virtualization used in KVM delivers nearly native speed for anything that avoids an exit, but levies a higher cost for the remaining exits.

Experience in Xen Implementation

The Xen installation was conducted on a system with an Intel i7 processor and 4 GB of DDR2 RAM. We used Debian 5.0 (lenny) as the primary operating system and Xen 3.4.2 as the hypervisor to virtualize some additional operating systems. Xen provides very good performance when virtualizing Linux distributions thanks to paravirtualization. The version we used can also virtualize certain unmodified guest operating systems on processors that support hardware virtualization. In a Xen setup, the Xen hypervisor runs directly on the hardware (bare metal). The first guest operating system (dom0) runs on top of Xen and has full access to the underlying hardware. Additional guests (domU) also run on top of the Xen hypervisor, but with limited access to the underlying hardware.

Converting an existing Debian 5.0 installation to a Xen dom0 requires installing the Xen hypervisor, and the Debian kernel must be modified to support it. There are two main choices for a dom0 kernel: the Xen kernel or a dom0 pv-ops kernel. The pv-ops kernel can run on bare metal or under the Xen hypervisor, and is likely to be included in the mainline Linux kernel soon for Xen support. Unfortunately, the pv-ops kernel will not work with the binary graphics drivers provided by Nvidia. Since our test machine has an Nvidia graphics card, we had to use the standard Xen kernel. The standard Xen kernel is still version 2.6.18; however, Andrew Lyon maintains forward-ported patches for Gentoo that can also be used for a Debian install.

To build the Xen dom0 kernel, first obtain the kernel sources from www.kernel.org and the Xen patches from code.google.com. Then apply the Xen patches to the unmodified kernel. Once patching is done, configure the modified kernel source by executing make menuconfig and enabling Xen compatibility, then compile and install it. After kernel compilation and installation, modify Grub. Debian 5.0 ships with Grub 2, and Debian's update-grub command will not recognize the new Xen kernel, but the /etc/grub.d/40_custom script can easily be modified to insert it manually into the Grub configuration, as sketched below. Finally, a reboot is required to load the Xen hypervisor.
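A hedged sketch of the last step is shown below: a small Python helper appends a custom Xen boot stanza to /etc/grub.d/40_custom and regenerates the Grub configuration. The device name, kernel version and file paths in the stanza are placeholders that must match the actual build; this is an illustration of the workflow, not a verbatim recipe.

```python
import subprocess

# Hypothetical Xen boot stanza for GRUB 2; root device, Xen and kernel versions,
# and file names are placeholders that depend on the system being configured.
XEN_ENTRY = """
menuentry "Xen 3.4.2 / Linux 2.6.18-xen (custom)" {
    set root='(hd0,1)'
    multiboot /boot/xen-3.4.2.gz
    module /boot/vmlinuz-2.6.18-xen root=/dev/sda1 ro console=tty0
    module /boot/initrd.img-2.6.18-xen
}
"""

def add_xen_grub_entry():
    """Append the custom entry to 40_custom and regenerate grub.cfg (requires root)."""
    with open("/etc/grub.d/40_custom", "a") as f:
        f.write(XEN_ENTRY)
    subprocess.run(["update-grub"], check=True)

# add_xen_grub_entry()
```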

Performance Evaluation of Xen Implementation

We now present the results of our performance analysis of the Xen installation. We benchmarked the system on a base x86 Xen-modified Debian 5.0 operating system. Xen hypervisor tests were performed on an Intel(R) Core(TM) i7 CPU 930 @ 2.80GHz processor with gigabit Ethernet, 4 GB RAM, and a 1000 GB 7200 RPM SATA hard disk. Virtual machine tests were performed on a single virtual CPU of the same configuration, Intel(R) Core(TM) i7 CPU 930 @ 2.80GHz, with Ethernet, 128 MB RAM, 256 MB of swap memory, and 1 GB of virtual hard disk.

Each system was tested for network performance using Netperf [22] and system performance using UnixBench [23]. These tests served as microbenchmarks and proved useful in analyzing the scalability and performance of the virtualized system.

Network Performance

Using the Netperf [22] network benchmark tool, we tested the network throughput of different communication strategies and compared them against the native results using a fixed message size. All tests were performed multiple times.
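A hedged sketch of one such measurement is shown below; it wraps a standard netperf TCP_STREAM invocation with the socket and message sizes that appear in Table 4. The target address is a placeholder, and a netserver instance is assumed to be running on the destination domain.

```python
import subprocess

def run_netperf(host="192.168.1.10", seconds=10,
                send_size=16384, local_sock=16384, remote_sock=87380):
    """Run a single netperf TCP_STREAM test against a remote netserver."""
    cmd = ["netperf", "-H", host, "-l", str(seconds), "-t", "TCP_STREAM",
           "--",                     # test-specific options follow
           "-m", str(send_size),     # send message size in bytes
           "-s", str(local_sock),    # local socket buffer size
           "-S", str(remote_sock)]   # remote socket buffer size
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return out.stdout

# for run_no in range(1, 6):
#     print(run_netperf())
```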

Throughput is reported in 10^6 bits/sec for three scenarios: (A) DomU to DomU on the same host; (B) Dom0 to DomU on different hosts; (C) Dom0 to DomU on the same host.

Test  Recv Socket   Send Socket   Send Message   Elapsed    Throughput  Throughput  Throughput
No    Size (Bytes)  Size (Bytes)  Size (Bytes)   Time (s)   (A)         (B)         (C)
1     87380         16384         16384          10.00      11087.62    94.11       1943.84
2     87380         16384         16384          10.00      11080.05    94.11       22183.45
3     87380         16384         16384          10.00      11099.44    94.13       22322.10
4     87380         16384         16384          10.00      11050.47    94.14       22267.38
5     87380         16384         16384          10.00      11074.90    94.14       22331.33

Table 4 : Network throughput evaluation of Xen virtual machine

Examining Table 4, Xen achieves maximum network performance for bulk data transfer when the communicating domains reside on the same host machine. When the domains are on different host machines, throughput drops drastically.

System Performance

We tested the system performance of both the physical machine hosting the hypervisor and a virtual machine. The test was performed using UnixBench [23], a tool designed to assess the performance of a system when running a single task, its performance when running multiple tasks, and the gain from the system's implementation of parallel processing. The benchmark configuration for the physical machine was 8 cores of the Intel Core i7 CPU with 1 parallel process, and for the virtual machine it was 1 core of the Intel Core i7 CPU with 1 parallel process.

                                                                          Virtual Machine          Physical Machine (Hypervisor)
Test                                    Unit   Time    Iters.  Baseline  Score        Index       Score        Index
Dhrystone 2 using register variables    lps    10.0 s  7       116700.0  14746538.6   1263.6      14949106.8   1281.0
Double-Precision Whetstone              MWIPS  10.0 s  7       55.0      2882.6       524.1       2883.7       524.3
Execl Throughput                        lps    30.0 s  2       43.0      3162.3       735.4       2379.5       553.4
File Copy 1024 bufsize 2000 maxblocks   KBps   30.0 s  2       3960.0    725162.4     1831.2      346322.2     874.6
File Copy 256 bufsize 500 maxblocks     KBps   30.0 s  2       1655.0    189847.3     1147.1      89153.0      538.7
File Copy 4096 bufsize 8000 maxblocks   KBps   30.0 s  2       5800.0    935488.1     1612.9      993252.3     1712.5
Pipe Throughput                         lps    10.0 s  7       12440.0   1187283.6    954.4       461744.2     371.2
Pipe-based Context Switching            lps    10.0 s  7       4000.0    173809.8     434.5       113899.7     284.7
Process Creation                        lps    30.0 s  2       126.0     6679.3       530.1       5223.4       414.6
Shell Scripts (1 concurrent)            lpm    60.0 s  2       42.4      5770.8       1361.0      6308.8       1487.9
Shell Scripts (8 concurrent)            lpm    60.0 s  2       6.0       793.0        1321.6      1561.3       2602.2
System Call Overhead                    lps    10.0 s  7       15000.0   1232328.3    821.6       1169804.5    779.9
System Benchmarks Index Score                                                         949.4                    764.2

Table 5 : System Performance evaluation of Xen virtual machine and hypervisor

Examining Table 5, the Xen virtual machine outperforms the standalone physical machine in terms of overall system performance index.

Real World Tests

The synthetic tests above paint a fairly clear picture of Xen's performance. To investigate how these data correlate with real-world application performance, we conducted application testing using LAMP [24], a bundled package consisting of the Apache web server, the MySQL database, and the FileZilla FTP server. We hosted our application on the two types of guest operating system supported by Xen, a Xen-modified OS and an unmodified OS, and evaluated the performance.

Test                                                 Xen-modified Guest OS   Unmodified Guest OS
Webpage load time - with standard HTML tags          0 sec                   0 sec
Webpage load time - with "create table" SQL query    0.0048 sec              0.0117 sec
FTP - Average upload time (685 MB of data)           0.58 sec                1.01 min
FTP - Average upload speed (685 MB of data)          11.8 MB/s               10.7 MB/s
FTP - Average download time (685 MB of data)         0.58 sec                1.15 min
FTP - Average download speed (685 MB of data)        11.8 MB/s               8.5 MB/s

Table 6 : Evaluation of Real world application tests conducted on Xen virtual machines

Looking at Table 6, the Xen virtual machine achieves maximum performance when the guest operating system is modified to cooperate directly with the hypervisor.

Conclusion

Virtualization can be used for a wide range of applications. Since CPU manufacturers introduced facilities for building VMMs more efficiently, VMMs can now run on the popular and widespread x86 architecture. KVM is an open source virtualization solution that leverages these CPU facilities to operate VMs using full virtualization. It allows various unmodified operating systems to run in several isolated virtual machines on top of a host. KVM is designed as a kernel module; once loaded, it turns Linux into a VMM. Since the developers did not want to reinvent the wheel, KVM relies on the mechanisms of the kernel to schedule computing power and benefits from the out-of-the-box driver support. The memory management, however, has been extended so that it can manage memory assigned to the address space of a VM. In addition, the VirtIO device model supported by KVM greatly increases I/O performance.

Xen is an open source virtualization solution that uses paravirtualization to operate VMs. It allows various modified operating systems to run in several isolated virtual machines on top of a thin hypervisor. Xen uses Borrowed Virtual Time to schedule the CPU. In Xen, the modified guest operating system is aware that it is running on a hypervisor and can cooperate with the hypervisor for improved scheduling and I/O, removing the need to emulate hardware devices such as network cards and disk controllers.
