benefit from the aggregated power of processing re-
sources. Fair-share can be observed over long periods
of time (Jackson et al., 2001; Kay and Lauder, 1988)
or over a few clock cycles (Linux systems). In most
scheduling systems, it is implemented through dynamic-
priority policies that grant higher priorities to jobs, or
groups of jobs, that have used few resources.
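To illustrate the idea, the following minimal sketch computes such a dynamic priority from past consumption: the further a group is below its entitled share, the sooner its jobs are served. The group names, usage figures, and shares are hypothetical and do not come from any of the cited systems.

```python
# Minimal sketch of a dynamic-priority fair-share rule (illustrative only):
# groups that have consumed fewer resources get a higher priority.
# The records below are hypothetical.

past_usage = {          # CPU-hours already consumed by each group
    "group_a": 120.0,
    "group_b": 10.0,
}
share = {               # fraction of the machine each group is entitled to
    "group_a": 0.5,
    "group_b": 0.5,
}

def dynamic_priority(group, total_usage):
    """Higher when the group's past usage falls below its entitled share."""
    if total_usage == 0:
        return 1.0
    used_fraction = past_usage[group] / total_usage
    return share[group] - used_fraction

total = sum(past_usage.values())
queue = sorted(share, key=lambda g: dynamic_priority(g, total), reverse=True)
print(queue)   # ['group_b', 'group_a']: the group with little past usage goes first
```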
This approach does not seem suitable in the
Ciloe case. Indeed, the execution of some tasks could
be significantly delayed if all resources are used by long-
running jobs. Yet, even though the partners want to bene-
fit from the aggregated power of computing nodes,
they expect a reasonable waiting time for their
tasks. For a given partner, this expectation could be
particularly high when the amount of resources he has
used is less than the share of resources in which he
has invested.
3.2 Efficient Use of Computing
Resources
The need to use computing resources efficiently
led to the emergence of advanced scheduling poli-
cies such as backfilling (Lawson and Smirni, 2002),
as opposed to the classical first-come first-served (FCFS)
policy adopted in earlier job schedulers. A scheduler
that applies backfilling allows newer jobs requiring fewer
resources to be executed before earlier, larger ones. Al-
though it may lead to starvation issues for
big jobs, the backfilling approach allows a more ef-
fective utilization of resources by avoiding idle
time.
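The sketch below contrasts this behaviour with plain FCFS on a hypothetical job queue: a later, smaller job may start immediately on idle nodes even though an earlier, larger job is still waiting. The job list and node count are assumptions for the example, and the sketch deliberately omits the start-time reservations that practical backfilling variants use to limit starvation of large jobs.

```python
# Simplified backfilling sketch (illustrative, not the scheduler of this paper):
# a job later in the queue may run now if it fits in the currently idle nodes.

jobs = [            # (job id, nodes requested), in arrival order
    ("j1", 8),
    ("j2", 6),      # does not fit once j1 runs; under FCFS it would block the queue
    ("j3", 2),      # small enough to be backfilled onto the remaining idle nodes
]
free_nodes = 10

running, waiting = [], []
for job_id, nodes in jobs:
    if nodes <= free_nodes:
        running.append(job_id)
        free_nodes -= nodes
    else:
        waiting.append(job_id)

print(running)   # ['j1', 'j3'] -> j3 was backfilled past j2
print(waiting)   # ['j2']
```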
Our approach introduces several novelties. The
design of the SaaS resource manager we have pro-
posed is open, from its architecture down to its internal
job-scheduling policies and algorithms. It relies on
a generic resource-management model that con-
siders that the underlying computing infrastructure is
shared among several applications owned by distinct
SaaS providers. The policies and algorithms used to
enforce this sharing make it possible to guarantee shares
of resource use to each application, even in periods of high
load and/or strong competition for resources.
Since the effective utilization of the whole set of
resources is a major point in our project, we also al-
low backfilling.
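As a minimal sketch of how such share guarantees might be enforced, the example below picks the application that is furthest below its guaranteed share when a node becomes free. The application names, shares, and usage figures are hypothetical; the actual policies of our resource manager are not limited to this rule.

```python
# Illustrative sketch (hypothetical data): give the next free node to the
# application that is the furthest below its guaranteed share.

guaranteed_share = {"app_a": 0.6, "app_b": 0.4}   # fractions summing to 1
nodes_in_use = {"app_a": 50, "app_b": 10}          # nodes currently held
total_nodes = 100

def deficit(app):
    """Guaranteed share minus the fraction of nodes actually held."""
    return guaranteed_share[app] - nodes_in_use[app] / total_nodes

# Under contention, the most 'deprived' application is served first.
next_app = max(guaranteed_share, key=deficit)
print(next_app)   # 'app_b' (holds 10% of the nodes while entitled to 40%)
```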
4 THE PROPOSED
FRAMEWORK
As previously shown (Chakode et al., 2010), schedul-
ing on-demand SaaS requests on such a shared cluster
should enable flexibility and easy reconfigurability in the
management of resources. A dynamic resource-allocation
approach has been proposed for this purpose. This approach
aimed at guaranteeing fair-sharing statistically, while improv-
ing the utilization of the whole system's resources.
We think that using virtual machines is a
suitable way to implement this approach. The
suitability of virtual machines for sharing comput-
ing resources has been studied (Borja et al., 2007).
The authors have claimed and shown that virtual
machines make it possible to overcome several scheduling
problems, such as scheduling interactive applications,
real-time applications, or applications requiring co-
scheduling. Furthermore, virtual machines enable
safe partitioning of resources. Being easy to allocate
and re-allocate, they also enable easy reconfigurabil-
ity of computing environments. While the main draw-
back of virtual machines has been their performance over-
head, it has been shown that this overhead can be
significantly reduced with specific tuning; see, for ex-
ample, the works presented in (Intel Corporation,
2006), (AMD, 2005), (Yu and Vetter, 2008), (Jone,
), (Mergen et al., 2006). Since tuning is typically
implementation-dependent, we do not consider this
aspect in this work.
4.1 Global Architecture
In the model we have proposed, jobs would be run
within virtual machines to ensure safe node partition-
ing. The cluster is viewed as a reconfigurable virtu-
alized infrastructure (VI), upon which we build a re-
source-manager component to handle requests and
schedule the associated jobs on the underlying
resource pool. This system architecture is
shown in Figure 2. At the infrastructure level, the
scheduler relies on a VI Manager (VIM) to handle the
usual virtual-machine life-cycle management opera-
tions (creation, deployment, etc.) over large-scale
infrastructure resources. Among the leading VIMs,
which include OpenNebula (Sotomayor et al., 2009b),
Nimbus (Keahey et al., 2005), Eucalyptus (Nurmi
et al., 2009), Enomaly ECP (eno, ), and VMware vSphere
(vmw, ), we have chosen OpenNebula considering
the following key points, which are further detailed
in (Sotomayor et al., 2009a). OpenNebula is scal-
able (tested with up to 16,000 virtual machines), open
source, and it uses open standards. It enables the pro-
grammability of its core functionalities through an appli-
cation programming interface (API), and supports all
popular Virtual Machine Monitors (VMMs), includ-
ing Xen, KVM, VMware, and VirtualBox.
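As an illustration of this programmability, the sketch below submits a virtual machine through OpenNebula's XML-RPC interface. The endpoint, credentials, and template are placeholders, and the exact signature of the one.vm.allocate call may differ between OpenNebula versions; this is an assumed usage pattern rather than the actual integration code of our resource manager.

```python
# Illustrative sketch of driving OpenNebula through its XML-RPC API.
# Endpoint, session format, and the exact one.vm.allocate signature may vary
# between OpenNebula versions; the credentials and template are placeholders.
import xmlrpc.client

endpoint = "http://frontend:2633/RPC2"     # default OpenNebula RPC port (assumed host name)
session = "oneadmin:secret"                # placeholder "user:password" session string

template = """
NAME   = "job-vm"
CPU    = 1
MEMORY = 1024
DISK   = [ IMAGE = "job-image" ]
"""

server = xmlrpc.client.ServerProxy(endpoint)
# The response is an array whose first element indicates success and whose
# second element is either the new VM id or an error message.
response = server.one.vm.allocate(session, template)
success, result = response[0], response[1]
if success:
    print("VM created with id", result)
else:
    print("VM creation failed:", result)
```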