2 VIRTUALIZATION AND
CONSOLIDATION
System (or hardware) virtualization creates an abstraction layer on top of the physical machine (host) which allows one or more VMs (guests) to run separately from the underlying hardware resources. Current virtualization solutions rely on the concept of a virtual machine monitor (VMM), or hypervisor, that is in charge of virtualizing the hardware and executing the VMs, mediating between these two components.
There exist two alternatives for virtualization (Goldberg, 1974). In the first, the hypervisor interacts directly with the hardware; in this case it is a kernel-mode driver (or module) of the host operating system (OS). In the second, both the hypervisor and the VMs execute on top of a standard OS, which offers access to the host resources, including its devices. The advantage of the first option is clear: since it provides direct access to the hardware, avoiding interaction through multiple software layers, in principle its peak performance can be close to that attained by a native execution. Xen, KVM and VMware (Nieh and Leonard, 2007) are examples of the first type of virtualization, while QEMU (qem, 2013) and VirtualBox (vir, 2013) follow the non-native alternative.
There are several important aspects to consider when applying virtualization (Younge et al., 2011; Younge et al., 2010), independently of whether the target is a data processing center, where throughput is the fundamental driving force, or an HPC facility, which may be willing to trade workload productivity for application performance. One particular aspect calls for an assessment of the balance between the costs (i.e., the negative performance impact) and the benefits of accommodating virtualization. Fortunately, many processor architectures nowadays feature hardware support to reduce the penalties resulting from virtualization. Furthermore, there is continuous research to address these overheads from the software viewpoint as well (e.g., paravirtualization, which offers virtualized memory addresses; pre-virtualization software to adopt a certain hypervisor while, simultaneously, maintaining compatibility with the physical hardware; etc.).
On the other hand, applications can also benefit from embracing virtualization. For example, an OS can be tuned to improve application performance by letting the hypervisor control the allocation of resources among applications, such as a fixed memory space, a certain fraction of the processor cycles or, in VMs running with real-time constraints, the maximum latency for interrupt handling. In most of these situations, since each VM runs in a virtual environment isolated from those of other VMs, an application failure will only affect the offending VM. Thus, in case the VM cannot recover, all its resources can be reallocated by the hypervisor to a different VM.
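Such resource controls are typically expressed in the hypervisor's configuration. As an illustration, the following fragment follows the libvirt domain XML format used with KVM; the element names are real, but the domain name and the concrete values are purely illustrative:

```xml
<!-- Illustrative libvirt domain fragment (values are assumptions, not a
     recommended configuration): it pins a fixed memory space and a relative
     share of the processor cycles for one VM. -->
<domain type='kvm'>
  <name>vm-example</name>
  <memory unit='MiB'>2048</memory>   <!-- fixed memory space for the guest -->
  <vcpu>2</vcpu>                     <!-- number of virtual CPUs -->
  <cputune>
    <shares>512</shares>             <!-- relative fraction of CPU cycles -->
  </cputune>
</domain>
```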
An additional advantage of virtualization derives from the possibility of running a collection of the logical nodes of a virtual cluster concurrently on a smaller number of physical nodes of an actual cluster. Under certain conditions (workload characteristics, service-level agreements, etc.), this in turn enables the deployment of a virtualization-aware energy-saving strategy where servers are consolidated on a reduced number of physical machines, which may yield a reduction in energy consumption, both by the processing equipment and by the infrastructure (e.g., cooling, UPS, etc.). A step further in this line is to adopt a dynamic strategy for consolidation that adaptively selects the number of active physical servers depending on the workload, migrating VMs to enable consolidation and turning unused nodes on and off (Kusic et al., 2009).
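The core of such a consolidation strategy can be sketched as a bin-packing problem. The following minimal Python sketch uses a first-fit-decreasing heuristic (an assumption for illustration; real consolidators must also honor service-level agreements, migration costs, affinity rules, etc.):

```python
# First-fit-decreasing consolidation sketch (illustrative only): pack VM
# loads onto as few physical hosts as possible, so that the remaining
# physical servers can be powered off to save energy.

def consolidate(vm_loads, host_capacity):
    """Place each VM load (e.g., normalized CPU demand) on the first host
    with enough free capacity; open a new host only when none fits.
    Returns a list of hosts, each with its remaining capacity and VM loads."""
    hosts = []
    for load in sorted(vm_loads, reverse=True):   # largest VMs first
        for host in hosts:
            if host["free"] >= load:
                host["free"] -= load
                host["vms"].append(load)
                break
        else:                                     # no host fits: activate one
            hosts.append({"free": host_capacity - load, "vms": [load]})
    return hosts

if __name__ == "__main__":
    placement = consolidate([0.5, 0.2, 0.7, 0.4, 0.1], host_capacity=1.0)
    print(len(placement))  # number of physical servers that remain active
```

Here five VMs with a combined load of 1.9 fit on two unit-capacity hosts, so any additional physical servers could be switched off.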
Live migration of VMs is crucial to leverage the energy savings potentially yielded by server consolidation. In this process, a VM that is running on a physical server A is migrated to an alternative server B, transparently to the user running applications in the VM. For this purpose, i) all the memory in use by the VM is first copied from A to B; next, ii) those memory pages that were modified by the VM on A since the migration started are copied to B; and finally, iii) the process is completed with the transfer of the current processor state of the VM from A to B.
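The three steps above correspond to the pre-copy migration scheme. The following toy Python simulation (names and parameters are illustrative assumptions, not a real hypervisor API) counts the page copies performed during such a migration:

```python
# Toy simulation of pre-copy live migration: (i) copy all memory pages,
# (ii) repeatedly re-copy the pages dirtied in the meantime, and (iii) once
# the dirty set is small, stop the VM and transfer the remaining pages
# together with the processor state.

def precopy_migrate(pages, dirty_per_round, stop_threshold=2):
    """Return the total number of page copies sent from host A to host B.

    pages           -- total memory pages of the VM
    dirty_per_round -- pages dirtied by the running VM in each copy round
    stop_threshold  -- switch to stop-and-copy when this few pages are dirty
    """
    copied = pages                      # step (i): full memory copy
    for dirty in dirty_per_round:
        copied += dirty                 # step (ii): re-send dirtied pages
        if dirty <= stop_threshold:     # dirty set small: stop-and-copy
            break
    # step (iii): the processor state is transferred (not counted in pages)
    return copied

if __name__ == "__main__":
    print(precopy_migrate(1000, [100, 10, 1]))
```

With 1000 pages and dirty sets of 100, 10 and 1 pages per round, the VM is migrated after 1111 page copies; the shrinking dirty set is what keeps the final stop-and-copy pause short.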
3 KVM
KVM is an open-source solution for full virtualization of x86 hardware. From the implementation point of view, it is a Linux kernel module that operates as a type-I hypervisor, providing the functionality needed to run VMs on the host platform. The integration of KVM into Linux offers two major advantages: first, all enhancements to Linux are automatically leveraged by KVM; and second, KVM developers only need to tackle the optimization of the applications running on the VMs, thus remaining isolated from the underlying software layer (OS).
The main characteristics of KVM are:
Scheduling, Resource Control, and Memory
Management. VMs run in KVM as regular Linux
processes. Therefore, all the kernel-level man-
CLOSER 2014 - 4th International Conference on Cloud Computing and Services Science