the power estimation error is between 0 and 15% for many different benchmarks, including the SPEC 2006 suite. (Economou et al., 2006) models the energy consumption of a server as a linear function of CPU, memory, disk and network utilization. The prediction error is below 5% for almost all the validation benchmarks. (Rivoire et al., 2008) compares different full-system power models, with the key observation that multi-dimensional models (based on disk usage and performance counters) perform better than models based only on CPU usage. (McCullogh et al., 2010) evaluates the effectiveness of several power models. As the complexity of current processors increases, linear models fit poorly, but the article itself notes that the 2-6% error of linear models is well within the accuracy required for tasks like data center server consolidation.
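To make the utilization-based approach concrete, the following sketch fits such a linear model by least squares. All utilization samples and model weights are invented for illustration and are not taken from any of the cited papers; the wall-power values are synthesized from an assumed model rather than measured.

```python
import numpy as np

# Hypothetical per-resource utilization samples (CPU, memory, disk,
# network), each in [0, 1]. All numbers are made up for illustration.
utilization = np.array([
    [0.10, 0.20, 0.05, 0.01],
    [0.50, 0.40, 0.10, 0.05],
    [0.90, 0.70, 0.30, 0.10],
    [0.30, 0.25, 0.15, 0.02],
    [0.70, 0.60, 0.20, 0.08],
    [0.20, 0.10, 0.40, 0.03],
])

# Pretend these are wall-power measurements (watts) taken while the
# benchmarks ran; here we synthesize them from an assumed model
# P = 100 + 80*cpu + 30*mem + 20*disk + 10*net.
true_weights = np.array([100.0, 80.0, 30.0, 20.0, 10.0])
X = np.hstack([np.ones((len(utilization), 1)), utilization])
measured_power = X @ true_weights

# Least-squares fit: the intercept estimates the idle power, and the
# remaining coefficients weight each resource's utilization.
coeffs, *_ = np.linalg.lstsq(X, measured_power, rcond=None)
relative_error = np.abs(X @ coeffs - measured_power) / measured_power

print("estimated idle power: %.1f W" % coeffs[0])
print("max relative error: %.2f%%" % (100 * relative_error.max()))
```

On real measurements the residual would of course be nonzero; the cited works report per-benchmark errors in the single-digit to 15% range for models of this shape.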
Many energy-aware models for allocating cloud computing resources have been developed. Almost all consider only the CPU as the resource to be allocated, and the power model is typically linear, with a server idle power around 50-70% of the peak power. Some of these models take into account the critical Power Usage Effectiveness (PUE) parameter, defined as the ratio between the total electricity required by a data center and the electricity delivered to the computing equipment; the overhead covers cooling, general operations, and the losses on the transmission lines and in the AC/DC conversion. The lowest reported PUE is that of Google data centers, around 1.2 (which means that for each 1 kW required to power the computing resources, only an additional 0.2 kW is needed for cooling and everything else), whereas a typical PUE for a standard data center is around 1.4-1.7, and for an enterprise data center it can climb up to 2.0-3.0.
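The PUE arithmetic can be sketched as follows; `facility_power` is a hypothetical helper, and the sample PUE values simply mirror the ranges quoted above.

```python
def facility_power(it_power_kw: float, pue: float) -> float:
    """Total electricity drawn by the data center, given the power
    consumed by the computing (IT) equipment and the facility PUE."""
    return it_power_kw * pue

# For every 1 kW of computing load, the overhead is (PUE - 1) kW.
for pue in (1.2, 1.6, 2.5):
    total = facility_power(1.0, pue)
    print(f"PUE {pue}: 1.0 kW of IT load draws {total:.1f} kW in total "
          f"({total - 1.0:.1f} kW of overhead)")
```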
(Cardosa et al., 2009) considers only the CPU as the resource to be allocated in a cloud environment, with a fixed cost for each server turned on. Under these assumptions, the optimization model tries to reduce the number of servers to be allocated. (Gandhi et al., 2009) relates the CPU power to the frequency, with a fixed minimum to account for idle systems. Even if a cubic curve fits the empirical data better, a linear fit is also deemed sufficiently accurate. (Urgaonkar et al., 2010) considers a quadratic model relating the CPU usage to the system power, with an offset accounting for the idle power of the system, around 65% of the peak power. (Mazzucco and Dumas, 2011) considers the power drawn by the CPU as a linear function of the load, with an idle power of about 65%. (Srikanthaiah et al., 2008) develops an empirical model relating the system's overall energy consumption to both CPU and disk utilization, finding that the optimal combination minimizing the energy per computed transaction is around 70% CPU and 50% disk utilization. From this, it formulates the allocation as a multi-dimensional bin-packing problem.
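In a common notation (ours, not the cited papers'), with $u \in [0,1]$ the CPU utilization, $P_{\text{idle}}$ the idle power and $P_{\text{peak}}$ the peak power, the linear and quadratic CPU-based models above can be paraphrased as:

```latex
P_{\text{lin}}(u) = P_{\text{idle}} + \left(P_{\text{peak}} - P_{\text{idle}}\right) u,
\qquad
P_{\text{quad}}(u) = P_{\text{idle}} + \left(P_{\text{peak}} - P_{\text{idle}}\right) u^{2},
\qquad
P_{\text{idle}} \approx 0.65\, P_{\text{peak}}.
```

The 65% idle fraction is the value reported by (Urgaonkar et al., 2010) and (Mazzucco and Dumas, 2011); the functional forms are otherwise generic.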
3 RESOURCE ALLOCATION FOR
CLOUD SYSTEMS
We briefly recall some strategies for resource allocation on cloud computing platforms. At this level, resource allocation is defined as a virtual machine placement problem: given a set of virtual machines, what is the best way to place them onto some powerful physical hosts? This consolidation process aims to achieve operational efficiency by increasing the usage of physical resources: each physical host typically allows several virtual machines to be placed onto it. Even if this could result in contention for physical resources (usually mitigated by the Virtual Machine Monitor), the savings are economically significant for the CSP, which can offer a competitive price for the use of its resources, usually rented at an hourly granularity and without upfront costs for the CSC. The CSP also has operational costs, including the electricity bill, which are in turn affected by this consolidation process: a physical host offering computing power to fewer virtual machines consumes less power than an almost fully loaded host. This means that the CSP must carefully balance these somewhat conflicting goals. (Beloglazov and Buyya, 2010) considers only the CPU, and models the problem as a bin-packing optimization, where the different physical servers use Dynamic Voltage and Frequency Scaling (DVFS) to change their CPU frequencies according to the number of virtual machines allocated on them. (Lu and Gu, 2011) proposes a multi-dimensional model of resource allocation, optimized with an ant-colony algorithm. (Chang et al., 2010) considers that the virtual machines available from a CSP are fixed in size, so the problem is to map these allowed capacities onto a set of virtual machine requirements, avoiding unnecessary over-provisioning and reducing migration overhead. The lack of available datasets forces the authors to compare the different algorithms only in relative terms.
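As a concrete illustration of the bin-packing view of consolidation, here is a minimal first-fit-decreasing sketch for CPU-only placement; the heuristic choice, the host capacity and the VM demands are ours for illustration, not taken from any of the cited papers.

```python
def first_fit_decreasing(vm_cpu_demands, host_capacity):
    """Place VMs (by fractional CPU demand) onto identical hosts,
    opening a new host only when no existing one has room."""
    hosts = []      # remaining CPU capacity of each powered-on host
    placement = {}  # vm index -> host index
    # Largest demands first: the classic first-fit-decreasing order.
    for vm, demand in sorted(enumerate(vm_cpu_demands),
                             key=lambda item: -item[1]):
        for h, free in enumerate(hosts):
            if demand <= free:
                hosts[h] -= demand
                placement[vm] = h
                break
        else:
            # No host fits: power on a new one.
            hosts.append(host_capacity - demand)
            placement[vm] = len(hosts) - 1
    return placement, len(hosts)

placement, n_hosts = first_fit_decreasing(
    [0.5, 0.7, 0.2, 0.4, 0.3], host_capacity=1.0)
print(f"{n_hosts} hosts powered on:", placement)
```

Minimizing the number of powered-on hosts is exactly the objective that, combined with the idle-power offset in the models above, drives the energy saving: every host left empty avoids its entire idle power draw.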
4 FORMAL MODEL
We consider the point of view of the CSP: the CSC has submitted a list of virtual machine requirements (in terms of CPUs, memory, I/O and guaranteed network bandwidth). Some (or all) of these virtual
SMARTGREENS 2012 - 1st International Conference on Smart Grids and Green IT Systems