Implementation and investigation of virtualisation and load balancing


Introduction

In today's IT industry, virtualization is one of the hottest technologies affecting computing and driving server consolidation. Organisations with large datacentres are looking to virtualization technologies to reduce their carbon footprint. A well-designed virtualization solution can save organisations money and reduce the number of physical servers in the data centre.

With this technology, it is now much easier to create testing, training, development, and even production environments, and to turn them into malleable entities that respond to business needs as they appear.

Virtualization allows the simulation of hardware via software. For this to occur, some type of virtualization software is required on a physical machine. The best-known virtualization software in use today is VMware.

Among the leading business challenges confronting CIOs and IT managers today are cost-effective utilization of IT infrastructure, responsiveness in supporting new business initiatives, and flexibility in adapting to organizational changes.

Overview of Virtualization

Implementing virtualization in the datacentre can take place at several levels. One common solution works at the level of the guest operating system (OS): physical resources are exposed so that they are available to several different virtual machines at the same time.

VMware simulates the hardware resources of an x86-based computer to create a fully functional virtual machine. An operating system and associated applications can then be installed on this virtual machine, just as would be done on a physical machine. Multiple virtual machines can be installed on a single physical machine as separate entities; because each operates separately, interference between the machines is eliminated.

Guest OS virtualization technologies come in two flavours. The first is a software layer that is used to simulate a physical machine on top of an existing operating system running on a hardware host. The second is a hypervisor: a software engine that runs directly on top of the hardware, eliminating the overhead of a secondary operating system.

Virtualization is the emulation of one or more workstations or servers within a single physical computer; put simply, it is the emulation of hardware within a software platform, allowing a single computer to take on the role of multiple computers. This type of virtualization is often referred to as full virtualization, because one physical computer shares its resources across a multitude of environments.

There are many different types of virtualization, each for varying purposes.

Although virtualization technology has been around for many years, it is only now beginning to be fully deployed. One of the reasons for this is the increase in processing power and advances in hardware technology.

The History of Virtualization

Virtualization technology has been around for a very long time. It was first introduced in the 1960s to partition large mainframe hardware for better hardware utilization.

VMware invented virtualization for the x86 platform in the 1990s to address underutilization and other issues, overcoming many challenges in the process. Today, VMware is the global leader in x86 virtualization, with over 170,000 customers, including 100% of the Fortune 100.[1]

Virtualization is a proven concept that was first developed in the 1960s by IBM as a way to logically partition large, mainframe hardware into separate virtual machines. These partitions allowed mainframes to "multitask"; run multiple applications and processes at the same time. Since mainframes were expensive resources at the time, they were designed for partitioning as a way to fully leverage the investment.[3]

In the mid 1960s, the IBM Watson Research Center was home to the M44/44X Project, the goal being to evaluate the then emerging time sharing system concepts. The architecture was based on virtual machines: the main machine was an IBM 7044 (M44) and each virtual machine was an experimental image of the main machine (44X). The address space of a 44X was resident in the M44's memory hierarchy, implemented via virtual memory and multi-programming. [2]

IBM had provided an IBM 704 computer, a series of upgrades (such as to the 709, 7090, and 7094), and access to some of its system engineers to MIT in the 1950s. It was on IBM machines that the Compatible Time Sharing System (CTSS) was developed at MIT. The supervisor program of CTSS handled console I/O, scheduling of foreground and background (offline-initiated) jobs, temporary storage and recovery of programs during scheduled swapping, monitor of disk I/O, etc. The supervisor had direct control of all trap interrupts. [2]

Around the same time, IBM was building the 360 family of computers. MIT's Project MAC, founded in the fall of 1963, was a large and well-funded organization that later morphed into the MIT Laboratory for Computer Science. Project MAC's goals included the design and implementation of a better time sharing system based on ideas from CTSS. This research would lead to Multics, although IBM would lose the bid and General Electric's GE 645 would be used instead. [2]

Regardless of this "loss", IBM has been perhaps the most important force in this area. A number of IBM-based virtual machine systems were developed: the CP-40 (developed for a modified version of IBM 360/40), the CP-67 (developed for the IBM 360/67), the famous VM/370, and many more. Typically, IBM's virtual machines were identical "copies" of the underlying hardware. A component called the virtual machine monitor (VMM) ran directly on "real" hardware. Multiple virtual machines could then be created via the VMM, and each instance could run its own operating system. IBM's VM offerings of today are very respected and robust computing platforms. [2]

The concept of virtualization was first devised in the 1960s. It was then implemented by IBM to help split large mainframe machines into separate 'virtual machines', in order to maximise the efficiency of the available mainframe computers. Before virtualization was introduced, a mainframe could only work on one process at a time, wasting resources. Virtualization solved this problem by splitting up a mainframe machine's hardware resources into separate entities, so that a single physical mainframe machine could run multiple applications and processes at the same time.[4]

[1] http://www.vmware.com/virtualization/history.html

[2] http://www.kernelthread.com/publications/virtualization/

[3] http://www.aiosolutions.com/what_is_virtualization.php

[4] Cloud Computing Virtualization Specialist Complete Certification Kit: Study Guide Book and Online Course, April 2009.

Objectives of virtualisation

There are four main objectives to virtualization, demonstrating the value offered to organizations:

• Increased use of hardware resources;

• Reduced management and resource costs;

• Improved business flexibility; and

• Improved security and reduced downtime.

Increased use of Hardware Resources

With improvements in technology, typical server hardware resources are not being used to their full capacity. On average, only 5-15% of hardware resources are being utilized. One of the goals of virtualization is to resolve this problem. By allowing a physical server to run virtualization software, a server's resources are used much more efficiently. This can greatly reduce both management and operating costs. For example, if an organization used 5 different servers for 5 different services, instead of having 5 physical servers, these services could be run on a single physical server operating as virtual servers.
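
To make the five-into-one example concrete, the short Python sketch below estimates how many hosts are needed after consolidation. It is a back-of-the-envelope illustration only: the 10% average utilization and the 60% ceiling for a consolidated host are assumed values chosen for the example; only the 5-15% range comes from the paragraph above.

    import math

    # Illustrative estimate, not a sizing tool. The utilisation figures
    # are assumptions (the text above quotes 5-15% average use).
    servers = 5                  # dedicated physical servers, one per service
    avg_utilisation = 0.10       # assumed 10% average utilisation per server
    target_utilisation = 0.60    # assumed safe ceiling for a consolidated host

    total_load = servers * avg_utilisation                    # 0.5 host-equivalents
    hosts_needed = max(1, math.ceil(total_load / target_utilisation))

    print(f"Aggregate load: {total_load:.2f} host-equivalents")
    print(f"Hosts needed after consolidation: {hosts_needed}")  # -> 1
    print(f"Physical servers saved: {servers - hosts_needed}")  # -> 4

Under these assumptions, the aggregate load of five lightly used servers fits comfortably on a single host, eliminating four physical machines.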

Reduced Management and Resource Costs

Due to the sheer number of physical servers and workstations in use today, most organizations have to deal with issues such as space, power, and cooling. Not only is this bad for the environment but, due to the increase in power demands, the construction of more buildings and so on is also very costly for businesses. Using a virtualized infrastructure, businesses can save large amounts of money because they require far fewer physical machines.

Improved Business Flexibility

Whenever a business needs to expand its number of workstations or servers, it is often a lengthy and costly process. An organisation first has to make room for the physical location of the machines. The new machines then have to be ordered, set up, and so on. This is a time-consuming process and wastes a business's resources both directly and indirectly. Virtual machines, by contrast, can be set up easily: there are no additional hardware costs, no need for extra physical space, and no need to wait around. Virtual machine management software also makes it easier for administrators to set up virtual machines and control access to particular resources.
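
As a sketch of how such management software can be scripted, the snippet below uses the libvirt Python bindings to define and start a virtual machine. It assumes libvirt-python is installed, a local QEMU/KVM hypervisor is available, and a bootable disk image already exists at the stated path; the VM name, sizes, and paths are illustrative, not prescribed by any particular product.

    import libvirt

    # Minimal domain definition; the name, memory size, and disk path
    # are examples only.
    DOMAIN_XML = """
    <domain type='kvm'>
      <name>test-vm</name>
      <memory unit='MiB'>1024</memory>
      <vcpu>1</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2'/>
          <source file='/var/lib/libvirt/images/test-vm.qcow2'/>
          <target dev='vda' bus='virtio'/>
        </disk>
      </devices>
    </domain>
    """

    conn = libvirt.open('qemu:///system')   # connect to the local hypervisor
    dom = conn.defineXML(DOMAIN_XML)        # register the VM definition
    dom.create()                            # power the VM on
    print(f"Started VM '{dom.name()}' (id {dom.ID()})")
    conn.close()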

Improved Security and Reduced Downtime

When a physical machine fails, usually all of its software content becomes inaccessible, and there is often some downtime until the problem is fixed. Virtual machines are separate entities from one another, so if one of them fails or is infected by a virus, it is completely isolated from all the other software on that physical machine, including other virtual machines. This greatly increases security, because problems can be contained. Another great advantage of virtual machines is that they are not hardware dependent: if a server fails due to a hardware fault, the virtual machines stored on that server can be migrated to another server. Functionality can then resume as though nothing has happened, even though the original server may no longer be working.
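
As a minimal sketch, planned evacuation of a running guest looks like this with libvirt's live migration API; the hostnames and VM name are assumptions for illustration. Live migration requires the source host to still be running, so it suits a machine that is degrading or due for maintenance; after an outright hardware failure, the same virtual machine would instead be restarted on another host from shared storage.

    import libvirt

    src = libvirt.open('qemu+ssh://host-a/system')   # host being evacuated
    dst = libvirt.open('qemu+ssh://host-b/system')   # healthy target host

    dom = src.lookupByName('test-vm')
    # VIR_MIGRATE_LIVE keeps the guest running while its memory is copied.
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
    print("test-vm is now running on host-b; host-a can be serviced")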

Benefits of virtualisation technology

Virtualization provides the following benefits:

* Consolidation to reduce hardware cost:

o Virtualization enables you to efficiently access and manage resources to reduce operations and systems management costs while maintaining needed capacity.

o Virtualization enables you to have a single server function as multiple virtual servers.

* Optimization of workloads:

o Virtualization enables you to respond dynamically to the application needs of your users.

o Virtualization can increase the use of existing resources by enabling dynamic sharing of resource pools.

* IT flexibility and responsiveness:

o Virtualization enables you to have a single, consolidated view of, and easy access to, all available resources in the network, regardless of location.

o Virtualization enables you to reduce the management of your environment by providing emulation for compatibility, improved interoperability, and transparent change windows.

Virtualisation techniques

Virtualization can be performed in several ways, depending on the level at which it is applied and the needs that must be met. Hardware-level virtualization, operating system-level virtualization, para-virtualization, and full virtualization are the most common techniques used to achieve virtualization.

While there are various ways to virtualize computing resources using a true VMM, they all have the same goal: to allow operating systems to run independently and in an isolated manner, just as if each were running directly on top of the hardware platform.

Hardware-level virtualization

Virtualization software that presents a virtual set of hardware to a guest operating system makes up the majority of server virtualization products available. These virtualization products provide a VMM that either partially or fully virtualizes the underlying hardware, allowing both modified and unmodified guests to run in a safe and isolated fashion.

The most popular of these products, especially for the enterprise, are VMware, Microsoft, and Xen, in order of commercial market share. VMware and Xen are the most mature of the group, offering the richest set of features, such as live migration and bare-metal installation and execution of the hypervisor, as well as the widest array of supported guest operating systems.

Hardware-level virtualization runs the virtualized operating system on top of a software platform that itself runs directly on the hardware, without an existing operating system underneath. The engine used to run hardware virtualization is usually referred to as a hypervisor; the purpose of this engine is to expose hardware resources to the virtualized operating systems.

The concept of hardware virtualization allows the virtual machine monitor to run virtual machines in an isolated and protected environment. Because the virtual machine monitor is transparent to the software running in the virtual machine, the software thinks that it has exclusive control of the hardware. The concept was perfected over time, to the point that virtual machine monitors can now run guests efficiently while remaining invisible to them.

Operating system-level virtualization

Most applications running on a server can easily share a machine with others, if they could be isolated and secured. Further, in most situations, different operating systems are not required on the same server, merely multiple instances of a single operating system. OS-level virtualization systems have been designed to provide the required isolation and security to run multiple applications or copies of the same OS (but different distributions of the OS) on the same server. OpenVZ, Virtuozzo, Linux-VServer, Solaris Zones and FreeBSD Jails are examples of OS-level virtualization.

Operating system-level virtualisation means that the virtual environment is created within the host OS. There is no real guest OS, because it is identical to the host OS: they share the same functions, libraries, and applications. A specialized kernel is used to implement the isolated servers; its task is to prevent one isolated server from reading or writing memory or peripheral data that it is not supposed to access. The most prominent examples of this are the Linux-VServer project and Virtuozzo/OpenVZ by SWsoft. An even simpler solution is to use FreeBSD jails, with which a standard user can be 'jailed' to a certain directory or path. Although those jails offer very good performance, they are limited in the types of application for which they can be used, and in most cases they do not offer the flexibility of an independent server system.
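
A flavour of the 'jail' idea can be demonstrated with plain chroot, the mechanism FreeBSD jails build upon (jails add process, user, and network isolation on top). This minimal sketch must run as root, and the jail path is an illustrative assumption; note that chroot alone is not a complete security boundary.

    import os

    JAIL_ROOT = '/srv/jail'   # a directory tree prepared in advance

    os.chroot(JAIL_ROOT)      # '/' now refers to /srv/jail for this process
    os.chdir('/')             # step inside the new root

    # From here on, this process cannot name files outside JAIL_ROOT:
    print(os.listdir('/'))    # lists the contents of /srv/jail, not the real root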

Para-virtualization

This technique also requires a VMM, but most of its work is performed in the guest OS code, which in turn is modified to support this VMM and avoid unnecessary use of privileged instructions. The para-virtualization technique also enables running different OSs on a single server, but requires them to be ported, i.e. they should 'know' they are running under the hypervisor. The para-virtualization approach is used by products such as Xen and UML.

Para-virtualization is the technique whereby the VMM and guest operating systems communicate by use of hypercalls. In this situation, nonvirtualizable privileged instructions are removed and replaced with hypercalls. These hypercall interfaces also handle other critical kernel operations such as interrupt handling and memory management. Para-virtualization differs from binary translation and full virtualization in that it requires modification of the guest operating system. It should be noted that in most cases, para-virtualization offers the best performance of the three virtualization options.

Para-virtualization is one of the latest technologies in the field of virtualization. It offers great performance advantages over the existing virtualization techniques, but as a drawback it requires modifications to the guest OS. The guest OS needs to run specialized drivers in order to access the interface which the virtualization environment provides. Management is done by a hypervisor / virtual machine monitor (VMM), which controls all access to memory, CPU, and I/O. The advantage is that the guest OS is aware of the hypervisor and thus can interact with it much better: the negative side effects of context switches and cache flushes can be minimized, and the hypervisor can communicate directly with the guest OS. This makes scheduling much easier and results in far better performance. Upcoming CPU support for virtualization will greatly improve para-virtualization, as it will minimize or even supersede the need for guest OS modifications.
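
The hypercall contract described above can be illustrated with a toy model: instead of executing a privileged instruction and relying on a trap, the modified ('ported') guest calls into the hypervisor explicitly. All class and hypercall names below are invented for illustration and do not correspond to any real hypervisor interface.

    class ToyHypervisor:
        def __init__(self):
            self.page_table_base = None

        def hypercall(self, name, *args):
            # Dispatch table standing in for a real hypercall interface.
            handlers = {
                'set_page_table': self._set_page_table,
                'mask_interrupts': self._mask_interrupts,
            }
            return handlers[name](*args)

        def _set_page_table(self, base):
            self.page_table_base = base   # hypervisor validates and applies
            return 0

        def _mask_interrupts(self):
            return 0                      # would update virtual interrupt state

    class ParavirtGuest:
        """A 'ported' guest: privileged operations replaced by hypercalls."""
        def __init__(self, hv):
            self.hv = hv

        def switch_address_space(self, base):
            # A native kernel would write CR3 directly; this guest asks the VMM.
            return self.hv.hypercall('set_page_table', base)

    hv = ToyHypervisor()
    guest = ParavirtGuest(hv)
    guest.switch_address_space(0x1000)
    print(hex(hv.page_table_base))        # 0x1000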

Full virtualization

Virtual machines (VMs) emulate some real or fictional hardware, which in turn requires real resources from the host (the machine running the VMs). This approach, used by most system emulators, allows the emulator to run an arbitrary guest operating system without modifications, because the guest OS is not aware that it is not running on real hardware. The main issue with this approach is that some CPU instructions require additional privileges and may not be executed in user space, thus requiring a virtual machine monitor (VMM) to analyze executed code and make it safe on the fly. The hardware emulation approach is used by VMware products, QEMU, Parallels, and Microsoft Virtual Server.
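
The on-the-fly safety check described above can be modelled with a toy trap-and-emulate loop: unprivileged instructions 'run natively', while privileged ones are intercepted and applied to the guest's virtual hardware rather than the real machine. The instruction names and machine state below are invented for illustration.

    PRIVILEGED = {'out', 'hlt', 'write_cr3'}

    def vmm_emulate(instr, state):
        # The VMM performs the privileged operation on the guest's
        # *virtual* hardware instead of the real machine.
        if instr == 'write_cr3':
            state['virt_cr3'] = state['pending_value']
        elif instr == 'hlt':
            state['halted'] = True
        return state

    def run_guest(program):
        state = {'halted': False, 'virt_cr3': None, 'pending_value': 0x2000}
        for instr in program:
            if instr in PRIVILEGED:
                state = vmm_emulate(instr, state)   # trap, then emulate
            # else: the instruction executes directly on the CPU
            if state['halted']:
                break
        return state

    print(run_guest(['add', 'mov', 'write_cr3', 'hlt']))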

Several techniques can be used for this, each with its own characteristics. Generally, one can distinguish between software and hardware virtualization. Hardware virtualization is fairly simple: the existing hardware can be partitioned (e.g., hard disk, memory), a technique that has been used in IBM's LPARs. In addition, hardware can support virtualization techniques. As we will see later, x86 hardware was not designed to be virtualized, and this has to be compensated for by processor virtualization techniques (Intel's Vanderpool and AMD's Pacifica processor models) that introduce the elements needed to improve software virtualization (e.g., an associative TLB (translation lookaside buffer)) and eliminate the performance bottlenecks of context switches.

Software virtualization can be categorized into four main groups: emulation, native virtualization, para-virtualization, and operating system-level virtualization.

Emulation

Emulation is the oldest approach to virtualization and is sometimes referred to as 'trap-and-emulate' or binary translation. The virtual machine simply emulates the complete hardware, allowing an unmodified operating system to be run in this environment. The guest may even expect a little-endian processor although the host OS uses a big-endian processor, since all instructions are executed by the virtual machine and no direct calls to the hardware are performed. The result is that emulation is very expensive in performance terms and reduces throughput to a fraction of what the host OS could achieve, even if a single guest OS is given all the resources that the host offers. Examples of emulation are Bochs and QEMU.

Native (full) virtualization is a technique which emulates a new PC with an abstraction layer (the so-called virtual machine monitor, VMM). There is a binary translation of guest OS code, which is actually slow, but with the use of special drivers in the guest OS and the limitation to the same type of CPU in the host, some direct hardware access can be achieved, which results in better performance. VMware Workstation and ESX Server are the most popular products to make use of this technique. The performance is better than pure emulation, but it still cannot compare to the host OS's stand-alone performance, since the guest OS cannot take advantage of the host's optimized hardware.

In addition to these virtualization techniques, which refer to setting up real server virtualization, there are other techniques for specialized tasks:

Virtual machines in the application layer are application-specific and can only be used in the context of the application. The Java virtual machine (JVM) as well as .NET from Microsoft are the most common virtual machines. They execute byte code which has been generated by their compilers. The virtual machine is ported to many operating systems, enabling programmers to write their code once and use it anywhere.

Desktop virtualization can be used to access several virtualized desktops on a single server. It is also referred to as a thin-client solution, in which a 'dumb' client just acts as a terminal for input/output and all the instructions are processed on the server. The level of virtualization varies from simple access to a single PC (pcAnywhere, Remote Desktop), in which just the location of the user is virtualized, to remote access to physical high-density rack servers (IBM blade centers) which offer terminal services. The most common solutions in this field are Microsoft's Terminal Services and Citrix MetaFrame.

Having introduced the history and basic fundamentals of virtualization, we can now look more closely at how the main virtualization techniques emerged on the x86 platform.

Emergence of virtualization techniques

Operating systems for Intel architectures are written to run directly on the native hardware, so they naturally assume they fully own the computer's resources. The x86 architecture offers four levels of privilege (rings 0-3), and operating system code expects to run at the highest privilege level. This is fine for a native OS, but when virtualizing the x86 architecture, the guest OS runs at a lower privilege level than the underlying VMM. This is necessary, as the VMM must be able to manage shared resources. Instructions also have differing semantics when run at a different privilege level compared to the native implementation. The difficulty of trapping these types of instructions and privileged instruction requests at runtime was the challenge that originally made the x86 architecture so difficult to virtualize. Then, in 1998, VMware developed the binary translation method.

Since the adoption of binary translation, competing hypervisor companies have differentiated their wares by employing a range of techniques to address the same problem, each trying to leverage its solution in the marketplace. Because there are no industry standards for VMMs, we now have three different options to choose from for handling sensitive and privileged instructions:

• Binary translation

• OS-assisted (also referred to as para-virtualization)

• Hardware-assisted or full virtualization

Binary translation

The binary translation technique was, and still is, the de facto method, by virtue of the number of VMware copies around the world. Its principle is to replace the nonvirtualizable privileged instructions with new sequences at runtime, while user-level instructions execute directly on the native hardware. This combination provides full virtualization, as the guest operating system is decoupled from the underlying hardware by the VMM. This method requires no hardware assist or operating system assist. The main advantage of this approach was that it made virtualization possible on x86 platforms, something thought impossible prior to this technique. The main disadvantage is that it requires translating the guest's code at runtime, which reduces performance compared to hardware-assist techniques.
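
A toy sketch of the idea: before a block of guest code runs, the VMM scans it and replaces nonvirtualizable privileged instructions with safe sequences that call into the VMM, while everything else is left to execute directly. The opcodes and replacement names below are illustrative stand-ins, not real x86 translation rules.

    # Map of 'sensitive' instructions to safe VMM call-outs (illustrative).
    SENSITIVE = {
        'cli': 'vmm_disable_virtual_interrupts',
        'popf': 'vmm_restore_virtual_flags',
        'mov_to_cr3': 'vmm_set_guest_page_table',
    }

    def translate_block(guest_block):
        """Return a translated block that is safe to run at user privilege."""
        translated = []
        for instr in guest_block:
            if instr in SENSITIVE:
                translated.append(SENSITIVE[instr])   # rewritten sequence
            else:
                translated.append(instr)              # runs directly
        return translated

    block = ['mov', 'cli', 'add', 'mov_to_cr3', 'ret']
    print(translate_block(block))
    # ['mov', 'vmm_disable_virtual_interrupts', 'add',
    #  'vmm_set_guest_page_table', 'ret']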

Hardware-assisted virtualization

In contrast, hardware-assisted virtualization, such as Intel VT-x technology, has the advantage over traditional software techniques because Intel controls the CPU. By introducing a new CPU execution mode feature that allows the hypervisor to run in a root mode below the normal privilege levels, the previously described issues relating to privileged instructions are overcome. Early releases of this technology, however, were slow, making para-virtualized techniques seem more beneficial. However, we are now seeing hardware-assisted performance quickly approach near-native levels.

Memory management and device I/O are key areas where hardware-assisted techniques are helping hypervisor developers. Native operating systems expect to see all of the system's memory, so to run multiple guest operating systems on a single system, another level of memory virtualization is required. This can be thought of as virtualizing the Memory Management Unit (MMU) to support the guest OS. The guest OS continues to control memory mapping within its OS but cannot see the full machine memory; it is the responsibility of the VMM to map guest physical memory to actual machine memory. When the guest OS changes its virtual memory mapping, the VMM updates the shadow pages to enable direct lookup. The disadvantage of MMU virtualization is that it creates some overhead for all virtualization techniques, which can cause a hit in performance. It is in this area that Intel's VT-x technology is providing efficiency gains.
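
The extra mapping layer can be modelled in a few lines: the guest maintains its own virtual-to-'physical' page table, the VMM holds the real guest-physical-to-machine map, and the shadow mapping is their composition, rebuilt whenever the guest edits its page table. The addresses below are toy values for illustration.

    guest_page_table = {0x0: 0x10, 0x1: 0x11}   # guest virtual -> guest physical
    vmm_machine_map = {0x10: 0x8F, 0x11: 0x90}  # guest physical -> machine frame

    def rebuild_shadow(gpt, machine_map):
        """Recomputed by the VMM whenever the guest changes its mapping."""
        return {va: machine_map[gpa] for va, gpa in gpt.items()}

    shadow = rebuild_shadow(guest_page_table, vmm_machine_map)
    print(shadow)   # {0: 143, 1: 144}, i.e. frames 0x8F and 0x90

    # The guest remaps a page; the VMM traps the update and rebuilds:
    guest_page_table[0x1] = 0x10
    shadow = rebuild_shadow(guest_page_table, vmm_machine_map)
    print(shadow)   # both virtual pages now resolve to frame 0x8F (143)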
