location (if appropriate). This article focuses on identifying an optimal solution to problems (b), (d) and (e), where optimal means the cheapest solution that meets operational requirements.
The three key operational requirements are performance, scalability and dependability. As to performance, the aforementioned IBM CDAT tool measures average and peak CPU utilization; server models and configurations can therefore be selected so that performance requirements are met. As to scalability, the model will be extended to cover scalability issues in future work. As to dependability, two High Availability (HA) cluster configurations have been considered (see Marcus 2000): a) 1-to-1 and b) load sharing. A 1-to-1 HA cluster consists of two nodes that deliver different services (even when the two nodes host the same type of application, the two servers deliver different services). 1-to-1 clusters can be configured in asymmetric mode (also known as Active-Passive) or symmetric mode (also known as Active-Active). In the asymmetric configuration, server applications run on the two servers, but only one machine delivers service to users while the second one is in standby. In the Active-Active configuration, conversely, server applications are installed on both machines and each node actively delivers its own services, although only one instance of each server application is executed on the cluster.
The asymmetric configuration makes suboptimal use of resources and is therefore implemented only when the symmetric configuration is not supported. Load-sharing HA clusters consist of two or more nodes that deliver the same service. Multiple-node extensions of the 1-to-1 HA cluster (e.g. N-to-1, in which multiple nodes can fail over to a single standby node), albeit considered in the analysis, are not widespread in the Intel-based server market and are therefore not described in this article.
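To make the difference between these configurations concrete, the following Python sketch contrasts the capacity that each configuration offers in normal operation and after a single node failure; the function names and capacity figures are introduced purely for illustration and are not part of CDAT or ISIDE.

# Illustrative sketch (not ISIDE or CDAT code): usable capacity of the HA
# configurations described above, in normal operation and after a single
# node failure. Capacity figures are arbitrary units.

def usable_capacity(node_capacity, n_nodes, mode):
    """Capacity available to users during normal operation."""
    if mode == "1-to-1 asymmetric":       # Active-Passive: one node serves, one is in standby
        return node_capacity
    if mode == "1-to-1 symmetric":        # Active-Active: both nodes serve (different services)
        return 2 * node_capacity
    if mode == "load sharing":            # two or more nodes deliver the same service
        return n_nodes * node_capacity
    raise ValueError(mode)

def capacity_after_failure(node_capacity, n_nodes, mode):
    """Capacity left when a single node fails."""
    if mode.startswith("1-to-1"):
        return node_capacity              # the surviving node hosts all services
    return (n_nodes - 1) * node_capacity  # load sharing: remaining nodes share the work

for mode, n in [("1-to-1 asymmetric", 2), ("1-to-1 symmetric", 2), ("load sharing", 3)]:
    print(mode, usable_capacity(100, n, mode), capacity_after_failure(100, n, mode))

The sketch also makes visible why the asymmetric configuration is considered suboptimal: half of the installed capacity sits idle during normal operation.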
This paper is the result of a joint project between IBM and Politecnico di Milano. In previous work (see Ardagna and Francalanci 2002; Ardagna et al. 2004, and references therein), we developed a cost-oriented methodology and a software tool, ISIDE (Information System Integrated Design Environment), for the design of IT architectures. In this paper, we apply our tool to four server consolidation projects implemented by IBM for its customers, in order to evaluate the quality of our solutions. The results show that ISIDE can identify a low-cost candidate solution, which can then be refined by the project team's experts, reducing the cost and time of server consolidation projects. The current version of the tool does not consider scalability issues; however, it can be extended to fully support the server consolidation process by taking additional constraints into account.
This paper is organized as follows. The next section reviews previous approaches provided by the literature. Section 3 discusses a model for an Enterprise-wide Information System that supports a server consolidation project. Section 4 describes the current version of ISIDE, which has been adopted to investigate the case studies discussed in Section 5. Conclusions are drawn in Section 6.
2 RELATED WORK
A server consolidation project is a special case of
design of an IT infrastructure. Modern
infrastructures comprise hardware and network components (Menascé and Almeida 2000).
Since hardware and network components
cooperatively interact with each other, the design of
the IT infrastructure is a systemic problem. The
main systemic objective of infrastructural design is
the minimization of the costs required to satisfy the
computing and communication requirements of a
given group of users (Jain 1987; Blyler and Ray
1998). In most cases, multiple combinations of
infrastructural components can satisfy requirements
and, accordingly, overall performance requirements
can be differently translated into processing and
communication capabilities of individual
components. These degrees of freedom generate two
infrastructural design steps: a) the selection of a
combination of hardware and network components;
b) their individual sizing.
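As a toy rendering of these two steps, the Python sketch below enumerates configurations from a hypothetical two-model catalogue and keeps the cheapest one that covers an aggregate capacity requirement; all names and figures are invented for exposition and are unrelated to the case studies discussed later.

from itertools import product

# Toy rendering of the systemic objective: minimize cost subject to a capacity
# requirement, jointly choosing which server models to use (step a) and how
# many units of each (step b). Catalogue entries and figures are invented.
catalogue = {                             # model: (cost per unit, capacity per unit)
    "small_server": (3_000, 200),
    "large_server": (9_000, 700),
}
required_capacity = 1_500                 # aggregate user demand (arbitrary units)

models = list(catalogue)
best = None
for counts in product(range(9), repeat=len(models)):  # 0..8 units of each model
    capacity = sum(n * catalogue[m][1] for n, m in zip(counts, models))
    cost = sum(n * catalogue[m][0] for n, m in zip(counts, models))
    if capacity >= required_capacity and (best is None or cost < best[0]):
        best = (cost, dict(zip(models, counts)))

print("cheapest feasible configuration:", best)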
Cost-performance analyses are executed at both
steps. Performance analyses receive a pre-defined
combination of components as input and initially
focus on the application of mathematical models to
define the configuration of each component
(Lazowska et al. 1984; Menascé and Almeida 2000).
Conversely, cost analyses start at the system level to identify a combination of components that minimizes overall cost, which is initially calculated from rough estimates of individual components' configurations and corresponding costs (Blyler and Ray 1998; Zachman 1999). The evaluation of costs
of individual components is subsequently refined
based on more precise sizing information from
performance analyses.
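Read as pseudocode, this alternation between the two analyses is a simple refinement loop. The sketch below, with invented figures, shows how a cost analysis based on rough nominal capacities can select one candidate, while the more precise sizing returned by a performance analysis shifts the choice.

# Illustrative refinement loop (invented figures): a cost analysis picks a
# candidate from rough, nominal capacity estimates; a performance analysis
# provides more precise sizing information; the cost analysis is repeated.

demand = 1_500                                              # aggregate workload (arbitrary units)
catalogue = {"small": (3_000, 200), "large": (9_000, 700)}  # model: (unit cost, nominal capacity)
measured = {"small": 170, "large": 760}                     # capacities from the performance analysis

def total_cost(model, capacity_per_unit):
    units = -(-demand // capacity_per_unit)                 # ceiling division: units required
    return units * catalogue[model][0]

# Step 1: cost analysis based on rough (nominal) capacity estimates.
rough = {m: total_cost(m, cap) for m, (_, cap) in catalogue.items()}
candidate = min(rough, key=rough.get)                       # "small" wins with these figures

# Step 2: sizing refined by the performance analysis; cost analysis repeated.
refined = {m: total_cost(m, measured[m]) for m in catalogue}
candidate = min(refined, key=refined.get)                   # the refined choice becomes "large"
print(candidate, refined)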
The literature provides various approaches to support the design process, especially in the performance evaluation field (Menascé and Gomaa 2000) or for specialized applications (Gillman et al. 2000), and often only a limited set of architectural variables or sub-problems is considered (e.g. the