System Virtualisation: A Review


1 Introduction

Virtualisation provides a layer of abstraction between a user and a physical resource in a way that preserves for the user the illusion that she is actually interacting directly with the physical resource (Killalea 2008). While virtualisation is a type of abstraction, the two are not the same thing: unlike abstraction, virtualisation does not necessarily aim to simplify or hide details (Smith and Nair 2005). Computer system virtualisation is a case in point, and involves the creation of "an efficient, isolated duplicate of a real machine" (Popek and Goldberg 1974). Like most abstractions, the principal benefit of machine virtualisation is the decoupling it facilitates: the computer that the user interacts with - the virtual computer - is fully decoupled from the actual computer system underneath it. Computer virtualisation is also referred to as 'hardware', 'platform', 'machine' or 'system' virtualisation (VMWare 2007).

The central idea of hardware virtualisation is simple: software is used to create a virtual machine (VM) that emulates a physical computer, creating a separate operating environment that is logically isolated from the physical host machine. By providing multiple VMs at once, this approach allows several operating environments to run simultaneously on a single physical machine. This can be particularly advantageous in the server (backroom) environment. Rather than paying for many under-utilised physical machines (servers), each dedicated to a specific workload, server virtualisation allows those workloads to be consolidated, as virtual machines, onto a smaller number of more fully-utilised physical machines. This implies cost savings in management, in the physical housing of (less) equipment, and in electricity consumption (Microsoft 2008; VMWare 2008).

Server virtualisation is also desirable from a systems management and availability perspective; it makes the restoration of a failed system easier, for instance. Because virtual machines are stored as files, restoring a failed system can be as simple as copying its file onto a new physical host. Importantly too, because virtual machines can have hardware configurations separate from those of the physical machine on which they run, a virtual machine can be restored to any available physical machine.
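The consolidation argument above can be made concrete with a back-of-the-envelope calculation. The figures below (server counts and utilisation levels) are assumed for illustration only, not drawn from the cited studies:

```python
import math

def hosts_needed(n_servers: int, avg_util: float, target_util: float) -> int:
    """Physical hosts needed to absorb the combined workload of n_servers
    under-utilised machines, if each host is driven to target_util."""
    total_load = n_servers * avg_util            # aggregate CPU demand
    return math.ceil(total_load / target_util)   # hosts at the target utilisation

# Illustrative (assumed) figures: 20 servers averaging 10% utilisation,
# consolidated onto hosts driven to 70% utilisation.
before = 20
after = hosts_needed(before, avg_util=0.10, target_util=0.70)
print(f"{before} servers -> {after} hosts")  # 20 servers -> 3 hosts
```

The savings in equipment, housing, and electricity scale with the ratio of the two figures; the headroom implied by `target_util` is a deliberate margin for workload peaks.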

In this review paper, the author will use the term 'system virtualisation' to refer to the process of creating and deploying a virtual computer system, and the term 'virtual machine' (consisting of virtual hardware and software) to refer to the virtual computer system itself. While computer virtualisation has existed for over thirty years, it was long supported only on a limited set of computer system architectures, primarily the IBM System/360 mainframe and its derivatives (Vogels 2008). The phenomenon of system virtualisation on mainstream central processing units (CPUs), i.e. on x86-based machines, is relatively new, dating from circa 1998 (Rosenblum and Garfinkel 2005; Adams and Agesen 2006; Killalea 2008). The x86 architecture is ubiquitous among desktop and notebook computers, as well as a growing majority of server computers, and is the most commercially successful CPU architecture in the history of personal computing. The interest in, and adoption of, modern virtualisation is attributed to the sheer size of this installed base, and the intrinsic demand for virtualisation that such critical mass creates (Rosenblum and Garfinkel 2005; Fong and Steinder 2008).

2 Application of Virtualisation Technology

The principal application of system virtualisation is in the consolidation of computer hardware to simplify IT operations and reduce total cost of ownership (Fong and Steinder 2008). Research has found that typical corporate adoption of virtualisation also includes its application to test & development, high availability, disaster recovery, and capacity planning (Gilen, Waldman et al. 2006; Killalea 2008). System virtualisation allows the resources of a physical system to be multiplexed over a number of virtual machines so as to reduce the amount of hardware in use, thus reducing the required computer room / data centre footprint, and indirectly reducing power consumption (Vogels 2008). A special software layer called a virtual machine monitor (VMM), or hypervisor, is used to virtualise the actual hardware (Smith and Nair 2005), and enables multiple virtual machines, of similar or differing configurations and operating environments, to co-exist on the same physical computer, in complete isolation from one another. Because virtual machines are isolated from each other, and because the virtualisation process creates virtual hardware ("hardware" that is in reality a software artefact), which is inherently more flexible and portable than real hardware, business solution development efforts may be eased and the timeliness of solution deployment improved upon (Fong and Steinder 2008).
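The multiplexing and isolation role of the VMM described above can be illustrated with a toy model. The class and method names below are invented for illustration; a real hypervisor operates at the instruction and device level, not as a Python object:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    name: str
    vcpus: int
    memory_mb: int

@dataclass
class Hypervisor:
    """Toy model of a VMM multiplexing one host's resources across isolated VMs."""
    host_cpus: int
    host_memory_mb: int
    vms: list = field(default_factory=list)

    def create_vm(self, name, vcpus, memory_mb):
        # Admission control: refuse a VM the host's memory cannot back.
        used = sum(vm.memory_mb for vm in self.vms)
        if used + memory_mb > self.host_memory_mb:
            raise MemoryError("insufficient host memory")
        vm = VirtualMachine(name, vcpus, memory_mb)
        self.vms.append(vm)
        return vm

    def cpu_share(self, vm):
        # Physical CPUs are time-multiplexed over all virtual CPUs.
        total_vcpus = sum(v.vcpus for v in self.vms)
        return vm.vcpus / total_vcpus * self.host_cpus

hv = Hypervisor(host_cpus=4, host_memory_mb=8192)
web = hv.create_vm("web", vcpus=2, memory_mb=2048)
db = hv.create_vm("db", vcpus=2, memory_mb=4096)
print(hv.cpu_share(web))  # 2.0
```

Each VM sees only its own virtual hardware; the shares it receives are an accounting fiction maintained by the VMM, which is the essence of the isolation property described above.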

Fong and Steinder (2008) also note the "duality of virtualization: simplification and complexity". The simplification aspect (isolated, efficient, virtual hardware) offers unprecedented innovative opportunities to adopters of the technology, but the increased complexity introduced (virtual machine "sprawl", software licensing issues, performance concerns, and security concerns, for instance) needs to be appropriately addressed. IDC's technology assessment report (Gilen, Waldman et al. 2006) also notes the great potential (and duality) of virtualisation technology, and heralds the year 2006 as pivotal in the shift toward mass virtualisation:

There are precious few technologies that really create something remotely close to the overused term paradigm shift, and it's even more rare when those technologies actually sweep across the industry in a few short years. Virtualization of system resources aboard x86 servers is one technology that has the distinct potential to really change the dynamics of the industry in many ways as it quickly becomes a de facto component of modern computing. (Gilen, Waldman et al. 2006)

The report advises that with change and challenge comes opportunity, and that virtualisation should be recognised as a disruptive force that will significantly change "the rules of the game" (op. cit.). Vogels concurs with this analysis and suggests that we look beyond mere hardware consolidation and the use of virtualisation as a traditional cost-saving tool (Vogels 2008). Virtualisation, he suggests, is a strategic enabling technology for application deployment, management and scalability.

2.1 Virtualisation Products / Marketplace

The principal players in the virtualisation marketplace are all commercial software companies; namely, Citrix, Microsoft and VMWare Inc. While hardware-centric system virtualisation possibilities exist, industry and commerce clearly favour the software-centric alternatives offered by these companies in the form of virtualisation software products (Gilen, Waldman et al. 2006; Vogels 2008). Virtualisation at a software layer (whether at the system software or the application software layer) is inherently faster to deploy and more flexible to manage than a hardware-based alternative. In commercial applications, choosing the 'right' virtualisation technology usually means minimising its computational overhead: the fewer the software 'layers' sitting between the computer user / application and the actual computer hardware, the less the computational overhead, and the better the machine's computational performance. Accordingly, the main virtualisation product offerings for industrial users employ a thin software layer (known as a hypervisor or virtual machine monitor), which runs directly on the hardware and provides an abstraction layer that in turn allows for the creation and deployment of one or more virtual machines. The role of the hypervisor is to break the 'hard-coding' that normally locks a machine's operating system and applications to the underlying physical hardware.

The hypervisor is in fact a highly customised and streamlined operating system in its own right, designed to abstract the underlying hardware from the virtual machines that will sit on top of it. This low-level, software-centric approach to system virtualisation is depicted in figure 1 below.

Figure 1- Hypervisor-based computer system virtualisation

Importantly, the hypervisor replaces the main operating system of the physical machine (rather than installing on top of an existing operating system), and as such is relatively efficient, in that one software layer is replacing, not supplementing, another. This is the architecture used by Citrix's XenServer product (Citrix 2008) and by VMWare's Infrastructure product (VMWare 2008). Microsoft's offering within this architectural category, Windows Server 2008 Hyper-V (Microsoft 2008), is a relatively late entrant into this technology space. Because the hypervisor replaces the operating system, deploying products such as XenServer, VMWare Infrastructure and Hyper-V requires a customisation of the physical machine; i.e., if one wished to deploy such products on existing hardware, the current configuration of the machine(s) (operating system, installed applications, user data, etc.) would need to be removed. As such, this type of virtualisation requires some planning and is best suited to new installs, or to situations where the time and effort of uninstalling and reinstalling operating systems and applications, and of backing up and restoring user data, can be tolerated. In practice, these products tend to be used as 'back-end' server solutions (Vogels 2008) due to the customisations required.

Another architecture of system virtualisation products exists that makes it possible to create and run multiple virtual machines on a desktop or laptop computer (typical end-user computing devices) without the need for a complete reconfiguration of the machine. Under this architecture, the virtualisation software layer is installed on top of the existing operating system infrastructure; the virtualisation software is simply an installed application on the machine. When the virtualisation application is running, virtual machines can be created and deployed. This higher-level approach to system virtualisation is depicted in figure 2 below.

Figure 2 - 'Hosted' computer system virtualisation

Save for the installation of the virtualisation software itself, which is a negligible task, no reconfiguration or advanced customisation of the host machine is required. This approach to virtualisation is suited to situations where a reconfiguration of existing hardware is not possible for technical and / or operational reasons. Actual products in this category of virtualisation solutions include the Workstation, Fusion and Server products from VMWare Inc., VirtualBox from Sun Microsystems Inc., and Virtual PC from Microsoft. This type of virtualisation is often referred to as hosted virtualisation (Gilen, Waldman et al. 2006; VMWare 2008). Hosted virtualisation solutions, while quicker and easier to realise, do not provide the same level of performance as their hypervisor-based alternatives. Because hypervisor-based virtual machines are 'closer to the metal' (fewer software layers between the VM and the actual hardware), their performance in terms of program execution is better.
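The 'fewer layers, less overhead' argument above can be made concrete with a simple multiplicative model: each software layer between the VM and the hardware retains only a fraction of the performance passed down to it. The per-layer efficiency figures below are assumptions for illustration, not measured values:

```python
def effective_performance(layer_efficiencies):
    """Fraction of native performance retained after each intervening
    software layer (each layer modelled as a multiplicative efficiency)."""
    perf = 1.0
    for eff in layer_efficiencies:
        perf *= eff
    return perf

# Assumed per-layer efficiencies, for illustration only.
hypervisor_stack = [0.95]        # VM -> hypervisor -> hardware
hosted_stack = [0.95, 0.90]      # VM -> virtualisation app -> host OS -> hardware

print(effective_performance(hypervisor_stack))  # 0.95
print(effective_performance(hosted_stack))      # ~0.855
```

Whatever the actual per-layer costs, the model shows why the hosted stack, with its extra layer, cannot outperform the equivalent hypervisor stack.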

Hypervisor-based virtualisation solutions could certainly be deployed in the learning environment; a cost-benefit analysis of faster (hypervisor-grade) performance versus increased software, installation and maintenance costs is an appropriate instrument for reaching a decision on this. Where virtual machine performance is not the critical consideration, as is the case in the author's research laboratory and classroom for instance, hosted virtualisation represents a low-cost / low-maintenance path toward virtual machine deployment in the learning environment. Virtual machine performance remains one of the principal foci in the virtualisation research space (Popek and Goldberg 1974; Rosenblum and Garfinkel 2005; Chadha, Illiikkal et al. 2007; Gaspar, Langevin et al. 2008).

3 Current Virtualisation Research Foci

The performance of virtualised platforms / systems has been a prominent research focus since the emergence of virtualisation-enabled hardware, such as the latest generations of AMD and Intel processors. Recent experimental studies indicate that input/output (I/O) virtualisation, a critical aspect of all client and server virtualisation instances, can now achieve up to 98% of native (i.e. non-virtualised) performance (Dong, Dai et al. 2009). This slower-than-native performance is explained by the additional layers of processing that a virtualised environment imposes on, for example, network packets, a typical source of significant I/O activity for a system. In terms of overall system performance, research indicates that overhead is much higher with I/O-intensive workloads than with those that are compute-intensive (Apparao, Makineni et al. 2006; Chadha, Illiikkal et al. 2007). This is a challenge too for vendors of virtualisation products. VMWare Inc. is active in the commercial research space, and white papers from the company indicate a research focus similar to those cited above; namely, performance optimisation for virtualised workloads, and an endeavour toward near-native virtual machine performance through their "bare-metal" hypervisor (VMWare 2008).
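On Linux hosts, the hardware-assist extensions mentioned above are advertised through CPU feature flags in `/proc/cpuinfo` (`vmx` for Intel VT-x, `svm` for AMD-V). A minimal sketch that checks for them, using an embedded sample rather than reading `/proc/cpuinfo` directly so it runs anywhere:

```python
# Abridged sample of /proc/cpuinfo output; on a real host, read the file instead.
SAMPLE_CPUINFO = """\
processor : 0
vendor_id : GenuineIntel
flags     : fpu vme de pse msr pae vmx sse2 ht
"""

def virtualisation_support(cpuinfo_text):
    """Return which hardware-assist extension the CPU flags advertise, if any."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "Intel VT-x"
            if "svm" in flags:
                return "AMD-V"
    return None

print(virtualisation_support(SAMPLE_CPUINFO))  # Intel VT-x
```

On a live system one would pass `open("/proc/cpuinfo").read()`; the presence of either flag is a prerequisite for the hardware-assisted virtualisation whose performance the cited studies measure.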

Virtualisation is increasingly of interest to the "green computing" community because of its potential to reduce carbon emissions through hardware efficiencies (less deployment of new hardware; improved utilisation of existing hardware), through "footprint" efficiencies (smaller data centres, reduced cooling requirements), and through savings on disposal at end of life (Nathuji and Schwan 2007; Kurp 2008; Chetty, Brush et al. 2009; Pedram 2009; Talebi and Way 2009). While the possible negative environmental impacts of pervasive technology, particularly in terms of physical waste and energy consumption, are also considered in the literature (Jain and Wullert 2002; Przybyla and Pegah 2007), Talebi and Way (2009) write of the need for an "environmentally conscious use of technology" and note the contributing potential of hardware virtualisation technologies in that endeavour. The popularity of virtual machines in data centre environments, owing to their reliability, flexibility and ease-of-management features, is the focus of the GreenCloud architecture (Liang, Hao et al. 2009), which aims to reduce data centre power consumption whilst still providing virtualisation performance features including online monitoring, live virtual machine migration, and VM placement optimisation (op. cit.). Results of evaluation tests indicate a saving of up to 27% in energy consumption when the GreenCloud architecture is applied (op. cit.).
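VM placement optimisation of the kind GreenCloud performs is, at its core, a bin-packing problem: fit the VM loads onto as few powered-on hosts as possible. The sketch below is the classic first-fit-decreasing heuristic in that spirit, not the GreenCloud algorithm itself; the loads are expressed as assumed percentages of one host's capacity:

```python
def first_fit_decreasing(vm_loads, host_capacity):
    """Pack VM loads onto as few hosts as possible (first-fit-decreasing).
    Returns the number of hosts that must stay powered on."""
    hosts = []  # remaining capacity of each powered-on host
    for load in sorted(vm_loads, reverse=True):
        for i, free in enumerate(hosts):
            if load <= free:
                hosts[i] = free - load  # place on an existing host
                break
        else:
            hosts.append(host_capacity - load)  # power on a new host
    return len(hosts)

# Eight lightly loaded VMs (loads in % of host capacity) fit on two hosts.
print(first_fit_decreasing([20, 30, 10, 25, 15, 20, 30, 10], 100))  # 2
```

Every host that can be powered off is energy saved directly, which is where consolidation heuristics of this kind connect to the data centre power savings reported above.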

The increased use of virtual machines also introduces new challenges for systems management. These include host operating system deployment challenges, and challenges in the deployment of virtual overlay networks to connect virtual machines in complex environments (Rosenblum and Garfinkel 2005; Vallee, Naughton et al. 2007). Additionally, there is the problem of machine definition and deployment, which is complicated by differentiation in the underlying virtualisation technology (op. cit.). These new management challenges are being addressed by initiatives in systems management software; initiatives that are virtualisation-aware. vManage, a platform and virtualisation management tool, is a case in point (Kumar, Talwar et al. 2009). The tool coordinates platform management tasks (power and thermal management, for example) with virtualisation management tasks (virtual machine provisioning and application performance tuning, for example); coordinating the actions taken by these different management layers bestows performance, stability and efficiency benefits (op. cit.).

Virtualisation also opens up new possibilities in systems management. VMWare's vMotion product is a case in point (VMWare 2008). The vMotion product provides for live migration of virtual machines, i.e. the ability to transfer an entire virtual machine, in a running state, from one physical server to another, without impacting end users. As a result, hardware maintenance and upgrades do not require any system downtime (from the users' perspective). This is an example of the new possibilities that server virtualisation opens up for data centre management. The availability of new automation mechanisms that can be exploited to control and monitor tasks running within virtual machines is also considered in the literature (Vallee, Naughton et al. 2007; Soror, Minhas et al. 2008; Steinder, Whalley et al. 2008). Such initiatives facilitate more powerful and flexible autonomic controls, through management software that maintains the system in a desired state in the face of changing workload and demand (Nathuji and Schwan 2007; Fong and Steinder 2008; Steinder, Whalley et al. 2008).
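The desired-state management loop described above can be sketched as a reconciliation function: given the desired and actual placement of VMs, compute the actions (start, migrate, stop) that close the gap. The function, action and host names below are invented for illustration and do not correspond to any particular product's API:

```python
def reconcile(desired, actual):
    """One pass of an autonomic control loop: return the actions that move
    the actual VM placement (vm -> host) toward the desired placement."""
    actions = []
    for vm, host in desired.items():
        if vm not in actual:
            actions.append(("start", vm, host))
        elif actual[vm] != host:
            actions.append(("migrate", vm, host))  # e.g. a live migration
    for vm, host in actual.items():
        if vm not in desired:
            actions.append(("stop", vm, host))
    return actions

desired = {"web": "host-a", "db": "host-b"}
actual = {"web": "host-c", "cache": "host-a"}
print(reconcile(desired, actual))
# [('migrate', 'web', 'host-a'), ('start', 'db', 'host-b'), ('stop', 'cache', 'host-a')]
```

Run repeatedly against changing workload and demand, such a loop is the mechanism by which management software holds the system in its desired state.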


  • Adams, K. and O. Agesen (2006). A comparison of software and hardware techniques for x86 virtualization. Proceedings of the 12th international conference on Architectural support for programming languages and operating systems. San Jose, California, USA, ACM.
  • Apparao, P., S. Makineni, et al. (2006). Characterization of network processing overheads in Xen. Proceedings of the 2nd International Workshop on Virtualization Technology in Distributed Computing, IEEE Computer Society.
  • Chadha, V., R. Illiikkal, et al. (2007). I/O processing in a virtualized platform: a simulation-driven approach. Proceedings of the 3rd international conference on Virtual execution environments. San Diego, California, USA, ACM.
  • Chetty, M., A. J. B. Brush, et al. (2009). It's not easy being green: understanding home computer power management. Proceedings of the 27th international conference on Human factors in computing systems. Boston, MA, USA, ACM.
  • Citrix. (2008). "Citrix XenServer Product Details." Last accessed: June 12th, 2008.
  • Dong, Y., J. Dai, et al. (2009). Towards high-quality I/O virtualization. Proceedings of SYSTOR 2009: The Israeli Experimental Systems Conference. Haifa, Israel, ACM.
  • Fong, L. and M. Steinder (2008). Duality of virtualization: simplification and complexity. ACM SIGOPS Operating Systems Review, ACM. 42: 96-97.
  • Gaspar, A., S. Langevin, et al. (2008). March of the (virtual) machines: past, present, and future milestones in the adoption of virtualization in computing education, Consortium for Computing Sciences in Colleges. 23: 123-132.
  • Gilen, A., B. Waldman, et al. (2006). "Technology Assessment: The Impact of Virtualization Software on Operating Environments. A research report prepared by IDC."
  • Jain, R. and J. Wullert (2002). Challenges: environmental design for pervasive computing systems. Proceedings of the 8th annual international conference on Mobile computing and networking. Atlanta, Georgia, USA, ACM.
  • Killalea, T. (2008). Meet the virts. ACM Queue - architecting tomorrows computing, Association for Computing Machinery. 6: 14-18.
  • Kumar, S., V. Talwar, et al. (2009). vManage: loosely coupled platform and virtualization management in data centers. Proceedings of the 6th international conference on Autonomic computing. Barcelona, Spain, ACM.
  • Kurp, P. (2008). Green computing. Communications of the ACM, ACM. 51: 11-13.
  • Liang, L., W. Hao, et al. (2009). GreenCloud: a new architecture for green data center. Proceedings of the 6th international conference industry session on Autonomic computing and communications industry session. Barcelona, Spain, ACM.
  • Microsoft (2008). "Windows Server 2008 Product Information Page." Last accessed: June 12th, 2008.
  • Nathuji, R. and K. Schwan (2007). VirtualPower: coordinated power management in virtualized enterprise systems. Proceedings of twenty-first ACM SIGOPS symposium on Operating systems principles. Stevenson, Washington, USA, ACM.
  • Pedram, M. (2009). Green computing: reducing energy cost and carbon footprint of information processing systems. Proceedings of the 19th ACM Great Lakes symposium on VLSI. Boston Area, MA, USA, ACM.
  • Popek, G. J. and R. P. Goldberg (1974). Formal requirements for virtualizable third generation architectures. Communications of the ACM, ACM. 17 (7): 412-421.
  • Przybyla, D. and M. Pegah (2007). Dealing with the veiled devil: eco-responsible computing strategy. Proceedings of the 35th annual ACM SIGUCCS conference on User services. Orlando, Florida, USA, ACM.
  • Rosenblum, M. and T. Garfinkel (2005). "Virtual machine monitors: current technology and future trends." IEEE Computer 38(5): 39-47.
  • Smith, J. E. and R. Nair (2005). The Architecture of Virtual Machines. IEEE Computer. 38 (5): 32-38.
  • Soror, A. A., U. F. Minhas, et al. (2008). Automatic virtual machine configuration for database workloads. Proceedings of the 2008 ACM SIGMOD international conference on Management of data. Vancouver, Canada, ACM.
  • Steinder, M., I. Whalley, et al. (2008). Server virtualization in autonomic management of heterogeneous workloads, ACM. 42: 94-95.
  • Talebi, M. and T. Way (2009). Methods, metrics and motivation for a green computer science program. Proceedings of the 40th ACM technical symposium on Computer science education. Chattanooga, TN, USA, ACM.
  • Vallee, G., T. Naughton, et al. (2007). System management software for virtual environments. Proceedings of the 4th international conference on Computing frontiers. Ischia, Italy, ACM.
  • VMWare. (2007). "Virtual Appliance Marketplace."
  • VMWare (2008). "VMWare Infrastructure 3 - Product Information. (Last Accessed: June 12th, 2008.)."
  • Vogels, W. (2008). Beyond server consolidation. ACM Queue - architecting tomorrows computing, Association for Computing Machinery. 6: 20-26.