Utility Computing Paradigm and SOA Philosophy
Ivan Ivanov
Empire State College, State University of New York, Long Island Center
Hauppauge, NY 11788, U.S.A.
Ivan.Ivanov@esc.edu
Abstract. The purpose of this paper is to sort out, to the extent possible, the
contentious discussion regarding the impact of service-oriented architecture on
utility computing. How useful is this philosophy, in conjunction with utility
computing approaches, for organizational IT strategies, business processes and
directional models? The shift toward IT utilization is being driven by the
infrastructural advantage and economic leverage of the Internet in combination
with imperative industry trends: the commoditization of IT, Service-Oriented
Architectures (SOA) and the virtualization of services and applications. These
trends include several distinct innovations such as:
- the use of multiple servers to replace large, expensive systems (IT commoditization);
- the componentization of flexible application building blocks that can be easily assembled into large, composite, business-specific applications (Service-Oriented Architectures);
- the virtualization of operating systems, data storage, network resources, computing power (grid computing) and applications (as a top layer of virtualized services).
This business approach aims to transform IT from an inert monolith into a
dynamic, business-adaptive model, forming the Utility Computing (UC) paradigm.
However, the question remains: how well do UC models combine with the agility
provided by the SOA philosophy to enable continuous optimization of business
processes?
1 Introduction
Information Technology (IT) has had a profound impact on organizational strategies,
infrastructure, services, and business models over the last several decades. Within
the IT industry itself, applications, services, and solutions have been changing
persistently in response to customers' needs and business necessities.
In the early stages of the development of information and communication
technology, there were few standards, mostly proprietary (company-specific) products
and solutions, limited applications, and deficient distributed networks; as a result, IT
could not deliver the expected economies. IT solutions and services were
usually regionally dependent and fragmented by product and application. Such
fragmentation proved intrinsically wasteful for businesses. It compelled large
capital investments and heavy fixed IT expenses, both in the technology itself and in
operational costs (administration, monitoring, and maintenance), resulting in high
levels of overcapacity. The situation was ideal for the suppliers of technology
components and infrastructure builders, but it was ultimately unsustainable.
The economic difficulties of the early 2000s, the cost-effective strength of the Internet and
new technological advantages have made businesses more vigilant and more
demanding about the return on their IT infrastructure investments. As a crucial
business resource, IT has matured into what economists describe as a General-
Purpose Technology (GPT), sharing four specific characteristics of GPTs:
- Wide scope for improvement and elaboration,
- Applicability across a broad range of uses,
- Potential for use in a wide variety of products and processes,
- Strong complementarities with existing or potential new technologies [4].
Because of its broad range of uses and its variety of products and applications,
IT as a typical GPT offers the potential for considerable economies of scale if its
supply can be unified and consolidated. The business approach seems to achieve the
transformation of IT from an inert monolith into a dynamic organism that adapts better
to business needs. Once delivering IT as a utility is recognized and
central distribution becomes possible, large-scale utility suppliers arise and displace
the smaller product-specific providers. Although companies may take years to
abandon their proprietary IT supply operations and all the sunk costs they represent,
the savings offered by utilities eventually become too compelling to resist, even for
the largest enterprises [3].
The transformation to IT utilization is being driven by the infrastructural
advantage and economic leverage of the Internet in combination with imperative
industry trends that advance and permit the realization of different over-the-net
delivery models. These trends include several distinct innovations such as:
- the use of multiple servers to replace large, expensive systems (IT commoditization);
- the componentization of flexible application building blocks that can be easily assembled into large, composite, business-specific applications (Service-Oriented Architectures);
- the virtualization of operating systems, data storage, network resources, computing power (grid computing) and applications (as a top layer of virtualized services).
The purpose of this paper is to sort out, to the extent possible, the contentious
discussion regarding the impact of the service-oriented architecture approach on utility
computing models. The rest of the paper is structured as follows: Section 2, Utility
Computing Paradigm, exposes the concept, technologies, models and the paradigm shifts
for consumers, vendors and providers of utility computing; Section 3, SOA
Philosophy and IT Agility-Integration, reveals how SOA approaches can be deployed
to achieve agile business integration; Section 4, The Implication of SOA within UC
Models, illustrates some developments in employing SOA approaches within hybrid
utility computing models to advance the integration of existing and newly developed
applications and to attain extensive economies; Section 5, Conclusions, ends the
paper with closing notes.
2 Utility Computing Paradigm
2.1 The Concept
Utility computing was first described by John McCarthy at the Dartmouth conference
in 1955 as: "If computers of the kind I have advocated become the computers of the
future, then computing may someday be organized as a public utility just as the
telephone system is a public utility… The computer utility could become the basis of
a new and important industry." The major factors which impeded the development of
computer utilities in the last decades were:
- high data communications costs,
- timid public attention,
- limited number of trained and skilled IT users,
- lack of standardization of hardware, software and data communications,
- apprehensive compilation of database systems and development tools,
- high level of security threats.
Practically fifty years were needed to develop a broad spectrum of computerized
devices, a universal communication infrastructure and over-the-net applications, and to
saturate organizations and users with appropriate computer systems and more
adaptive technology solutions. This time period was vital for educating a critical mass of
IT professionals in programming, networking, business productivity systems and web-
based applications, and for training the vast majority of end-users in how to utilize them [10].
2.2 Utility Computing Technologies
Recent utility computing development, as a complex technology, involves business
procedures that profoundly transform the nature of companies' IT services,
organizational IT strategies, technology infrastructures, and business models. Based
on networked businesses and widely implemented over-the-net applications, utility
computing facilitates the "agility-integration" of IT resources and services within and
between virtual companies.
There is immense variety in the possible and actual configurations of technologies
and infrastructure that support utility computing development. According to Alfredo
Mendoza [12], well-established and proven technologies like virtualization, advanced
application accounting, and dynamic partitioning, which have long existed in
mainframes and are now available on newer server architectures, combined with
grid computing, web services and hyperthreading technologies, are contributing to
creating an infrastructure based on the utility model. Other experts believe utility
computing will further evolve into a combination of the related concepts of grid
computing (a type of network-distributed parallel processing), on-demand computing, and Web
services [18]. The primary newly established technologies for companies seeking a
competitive advantage in utility computing development are grid computing, all forms
of virtualization services, and automated provisioning.
2.2.1 Grid Computing
In a grid, all of the networked computers are coordinated and act as a single "virtual"
computer. Grids use specialized scheduling software that identifies available
resources and allocates tasks for processing accordingly. Don Becker, CTO of Penguin
Computing (a manufacturer of Linux-based grid solutions), offers a succinct definition
of grid computing: "A grid cluster is a collection of independent machines connected
together by a private network with a specific software layer on top. This software
layer has to make the entire cluster look like a single computing resource."
The key element is that the computers, or nodes, in a grid are able to act
independently without centralized control, handling requests as they are made and
scheduling others. Grid computing is the underlying technology for utility computing.
In the long term, grid computing is heading towards a convergence of utility computing,
from the pricing and delivery perspective, with Web services-based integration and
virtualization technologies, enabling multiple networked computers to be managed as
one [17]. Among the systems vendors developing and exploiting grid concepts are HP
with its Adaptive Enterprise initiative, Sun Microsystems' Network One, IBM's On-
Demand Computing, and Oracle Grid Computing.
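The scheduling idea behind such a grid layer can be illustrated with a minimal sketch. The snippet below is not taken from any of the products named above; it simply assumes a hypothetical list of nodes reporting free CPU cores and assigns each queued task to the least-loaded node, which is the essence of what grid scheduling software does.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_cores: int

@dataclass
class Task:
    name: str
    cores: int

def schedule(tasks, nodes):
    """Greedy sketch: place each task on the node with the most free cores."""
    placement = {}
    for task in sorted(tasks, key=lambda t: t.cores, reverse=True):
        best = max(nodes, key=lambda n: n.free_cores)
        if best.free_cores < task.cores:
            placement[task.name] = None          # no capacity left; task stays queued
        else:
            best.free_cores -= task.cores        # reserve capacity on the chosen node
            placement[task.name] = best.name
    return placement

# Hypothetical example: three nodes of a private cluster, four queued tasks.
nodes = [Node("node-a", 8), Node("node-b", 4), Node("node-c", 2)]
tasks = [Task("render", 4), Task("etl", 2), Task("report", 2), Task("backup", 6)]
print(schedule(tasks, nodes))

In a real grid the "free cores" figures would come from node monitoring agents and the scheduler would also handle queuing, priorities and failures, but the resource-matching step remains the same.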
The grid may grow geographically in organizations that have facilities in
different cities and on different continents. Dedicated communications connections, VPN
tunneling and other technologies may be applied between different parts of the
organization and the grid. The grid may become hierarchically organized to
reduce the contention implied by central control while increasing scalability. As the
grid infrastructure develops, the grid may expand across organizational
boundaries, migrating to an "Intergrid", and may be used to collaborate on projects and to
broker and trade resources over a much wider audience; those resources
may then be purchased as a utility from trusted suppliers.
2.2.2 Virtualization
Virtualization services allow servers, storage capacity, network resources or any
virtual application to be accessed and referenced independently of their physical
characteristics and location. Virtualization presents a logical grouping or subset of
computing resources such as hardware, operating systems, storage and applications,
which may be accessed to enhance the original configuration. The improvement with
virtual resources is not limited geographically, by applications, or physically, such as
in configuration. Solution providers can use server virtualization and other virtual
appliances to provide new services. Server virtualization is used to create utility
computing server farms that combine multiple customers' workloads. The cost to
customers is based on metrics such as the gigabytes of memory and disk space used,
the computing power or the number of servers needed. This maximizes the customers' ROI with a pay-
as-you-go model. It also allows access to an infrastructure that operates on
demand. A server farm can be used to duplicate or expand, rather than replace, a
customer's infrastructure. This may become important if a natural disaster should
happen, for instance, requiring migration of images from the customer's servers to
laptops or another system [15].
Stated succinctly, virtualization for most vendors specializing in this
technology is an abstraction layer that allows multiple virtual machines with
heterogeneous operating systems to execute in isolation, side by side, on the same
physical system. Virtualized services allow customers to utilize and expand their
systems in many directions, such as:
- Server consolidation – combine many physical servers into fewer, highly scalable enterprise-class servers that host virtual machines, also known as physical-to-virtual (P2V) transformation (a consolidation sketch follows this list).
- Storage virtualization – high-speed data-storage switched networks, such as Storage Area Networks (SAN) and Network-Attached Storage (NAS), provide shared access to many storage devices, virtual file servers or file systems.
- Network virtualization – segregates the built-in network resources into separate, distinct and secure channels and devices, composing virtual private networks (VPNs), "demilitarized zones" in the context of firewalls, load balancers and voice-over-IP services.
- Disaster recovery and business continuity protection – alters historical backup-and-restore (virtual machines are used as "hot standby" environments, which allow backup images to migrate and "boot" into live virtual machines).
- Streamlined testing and training – hardware virtualization allows root access to a virtual machine, which is useful in kernel development, operating system training and application testing.
- Portability for applications and automation capabilities – applications virtualized for portability will remain portable, while virtual appliances combine simple deployment of software with the benefits of pre-configured devices.
- Streaming applications and secure enterprise desktops – virtualized software is locked down onto the local desktop by providing a standard corporate desktop image in a virtual machine, while the standardized enterprise desktop environment is hosted in virtual machines accessed through thin clients or PCs.
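As an illustration of the server consolidation item above, the following sketch packs a hypothetical set of virtual machines onto as few physical hosts as possible using a simple first-fit-decreasing heuristic; the VM sizes, host capacity and the heuristic itself are illustrative assumptions, not a description of any vendor's P2V tooling.

def consolidate(vm_loads, host_capacity):
    """First-fit-decreasing sketch: pack VM loads (in CPU units) onto hosts."""
    hosts = []          # each entry is the remaining capacity of one physical host
    placement = []
    for vm, load in sorted(vm_loads.items(), key=lambda kv: kv[1], reverse=True):
        for i, free in enumerate(hosts):
            if free >= load:                 # reuse an existing host if it still fits
                hosts[i] -= load
                placement.append((vm, i))
                break
        else:                                # otherwise provision one more host
            hosts.append(host_capacity - load)
            placement.append((vm, len(hosts) - 1))
    return placement, len(hosts)

# Hypothetical workloads: eight VMs that previously ran on eight physical servers.
vms = {"web1": 4, "web2": 4, "db": 8, "mail": 2, "crm": 6, "test": 1, "build": 3, "dns": 1}
plan, hosts_needed = consolidate(vms, host_capacity=16)
print(hosts_needed, plan)   # the same work now fits on two 16-unit hosts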
VMware is one of the leading providers of virtualization technology systems. As
VMware president Diane Greene put it, "Once you aggregate your hardware
resources, you can allocate a certain amount of CPU power, memory, disk and
network to a group of virtual machines, and it will be guaranteed those resources. If
it's not using them, other virtual machines will be able to use those resources… It's
utility computing made real and working" [7]. Microsoft's recently launched Virtual
Application Environment provides extensive application virtualization that can be
layered on top of other virtualization technologies – network, storage, machine – to
create a fully virtual IT environment where all computing resources can be
dynamically allocated based on real-time needs. Applications are turned
into on-demand utilities that can be used on any system and are easy to add,
update and support dynamically, creating a nimble business environment with minimal time and
resources [13]. Virtualization techniques may add slightly higher operating costs and
complexity compared to non-virtualized settings, but there are many other capabilities and
advantages of virtualized resources that bring much higher economies and
reliability.
2.2.3 Provisioning
Utility computing is generally a provisioning model: its primary purpose is to
provide a service only when, how, and where it is needed. Automated or manual
provisioning of resources at large scale provides access to new servers or additional
capacity in an automated, "on-the-fly" manner. Since utility computing systems
create and manage many simultaneous instances of a utility service, each one
providing application functions, it becomes necessary to establish provisioning
policies. The Internet Engineering Task Force (IETF) has adopted a general policy-
based administration framework with four basic elements: (1) a policy management
tool, (2) a policy repository, (3) a policy decision point, and (4) a policy enforcement
point.
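A minimal sketch of how these four elements can cooperate is given below; the policy rule, resource names and thresholds are hypothetical and serve only to illustrate the division of roles between repository, decision point and enforcement point.

# Policy repository: rules stored by a (hypothetical) policy management tool.
POLICY_REPOSITORY = [
    {"service": "web-frontend", "metric": "cpu_utilization", "threshold": 0.80,
     "action": "add_server"},
]

def policy_decision_point(service, metrics):
    """Compare observed metrics against stored rules and decide on actions."""
    actions = []
    for rule in POLICY_REPOSITORY:
        if rule["service"] == service and metrics.get(rule["metric"], 0) > rule["threshold"]:
            actions.append(rule["action"])
    return actions

def policy_enforcement_point(service, actions):
    """Carry out the decided actions against the managed infrastructure (stubbed here)."""
    for action in actions:
        print(f"{service}: enforcing '{action}'")

# Example cycle: monitoring reports 92% CPU, so one more server is provisioned.
observed = {"cpu_utilization": 0.92}
policy_enforcement_point("web-frontend", policy_decision_point("web-frontend", observed))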
IBM, the market and technology leader in this technology trend, has
implemented three main categories of policies related to the provisioning of services
within a utility computing system:
- service provider (SP) policies, which deal with the sharing of the computing infrastructure among different on-demand services (ODS),
- utility computing service environment (UCSE) policies, which deal with the allocation and management of the computing resources supporting a given ODS, and
- resource manager policies, which deal with the administration of pools of specific resources.
The type of provisioning provided depends on the utility model implemented.
For a storage area network (SAN), for example, provisioning involves assigning
process space to optimize performance. IBM's on-demand architecture considers each
instance of a utility service a "utility computing service environment" (UCSE).
Recently, many companies have been retooling their infrastructure to incorporate
virtualization technologies that work with policy-based automation management
software geared toward automated provisioning. The increasing need for more
flexible IT services will lead to a more consolidated and automated infrastructure
environment [12].
The utility computing technologies described above are supported by further
advances: the increased deployment of blade servers, the development of inexpensive high-speed
networks, the adoption of open-source technologies and the software-as-a-
service approach, and evolving policy-based automation and application management
software that streamlines over-the-net application allocation and management. With their
modular uni-, dual- or multiprocessor architecture, blade servers offer tremendous
space savings, solid performance and ease of management. All these
tendencies sit well with virtualization, grid computing and the allocation of
computing resources on-the-fly.
2.3 Utility Computing Model and the Paradigm Shift
The term "utility computing" is still fairly new and generates confusion,
since it is commonly used to describe a technology as well as a business model. The
difficulty is that computing is not nearly as simple as conventional utilities:
it involves a vast amount of context, as opposed to the volts, amps and watts of
the most complex other public utility, electricity. Utility computing uniquely
integrates storage, applications, computational power and network infrastructure as a
foundation for business-adjustable IT services. In the ultimate utility computing
models, organizations will be able to acquire as many IT services and applications as
they need, whenever and wherever they need them.
Utility computing is a model that allows IT infrastructure to be broken down into
discrete pieces that can perform different and separate business functions, can be
measured independently, and can be turned on and off as necessary [12]. It offers
companies and private users access to hosted computing services and to scalable,
portable business applications through a utility-like, pay-on-demand service over the
Internet. To achieve cost savings, reduce IT complexity and increase IT
flexibility and integration ability when the utility computing model is applied,
suppliers and consumers of utility services need to reach a higher level of
standardization and sharing. The five-level Continuum of Utilities model,
illustrated by Alfredo Mendoza in Utility Computing: Technologies, Standards, and
Strategies, exposes some critical developments and infrastructural transformations
towards approaching a higher level of standardization, consolidation and sharing:
- Level 1 – Utility Pricing: new technology enables utility-like functionality and pricing for services within the infrastructure. Typical examples are capacity on demand, on-demand computing, pay-per-use and pay-per-service, where utility suppliers and consumers specify the scope and frame of the computing services and negotiate the utility pricing model (a pay-per-use metering sketch follows this list).
- Level 2 – Infrastructure Utility: at this level, new technologies such as virtual servers, storage and networks with advanced partitioning, automated provisioning and policy-based management facilitate a virtualized operating environment and allocate resources as and where needed.
- Level 3 – Shared Application Utilities: architectural changes to enterprise software applications derived from Service-Oriented Architecture (SOA), implemented as Web services, and metered Software as a Service (SaaS) for enterprise applications transform single-instance applications into multi-tenant applications served over-the-net.
- Level 4 – Shared Process Utilities: at this level, companies identify business functions that are non-strategic and deconstruct them into similar functional components within different processes, to be externalized or shared with other organizational entities within the networked environment.
- Level 5 – Virtual Infrastructure Utilities: at this last, most advanced level, infrastructure utilities begin to share resources with each other. Communication between utilities is done through industry-standard protocols and formats such as the Data Center Markup Language (DCML) and is made possible by sharing resources between separate data centers through the use of a grid infrastructure or utility computing environment.
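To make the Level 1 pricing idea concrete, the short sketch below meters hypothetical resource usage and turns it into a pay-per-use charge; the unit rates and usage figures are invented for illustration and do not come from any provider's actual price list.

# Hypothetical pay-per-use rates (per CPU-hour, per GB-month of storage, per GB transferred).
RATES = {"cpu_hours": 0.12, "storage_gb_month": 0.08, "network_gb": 0.05}

def monthly_charge(usage, rates=RATES):
    """Sum metered usage times unit price; unknown metrics are ignored."""
    return sum(quantity * rates[metric] for metric, quantity in usage.items() if metric in rates)

# One consumer's metered usage for a month.
usage = {"cpu_hours": 1200, "storage_gb_month": 500, "network_gb": 300}
print(f"Monthly utility bill: ${monthly_charge(usage):,.2f}")   # 1200*0.12 + 500*0.08 + 300*0.05 = 199.00

The point of the sketch is only that the consumer pays for metered consumption rather than for owned capacity; real utility contracts add minimum commitments, tiered rates and service-level terms on top of this basic calculation.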
The utility computing model creates a substantial paradigm shift
for vendors, providers and consumers of computing power and IT services. Risk-
averse organizations will take a more discrete, phase-based approach, applying
some utility-like services without affecting critical business systems.
Imaginative providers will reflect the new paradigm by offering a variety of utility-
based options, from specific customized systems through hybrid stepwise services to
total utility solutions. The following paragraphs list some of the key steps and
techniques that companies switching to the new utility computing paradigm should
consider from the consumers' and providers' perspectives. According to a Gartner Group
study, utility computing suppliers go through five stages to build their utility
infrastructure: (1) concentration of resources, (2) consolidation of assets, including
infrastructure facilities, (3) virtualization of services, (4) automation of processes, and
(5) extension of services and solutions. Firms move from one stage to the next, with
each stage firmly established before going on to the next [6].
The leading companies in the utility paradigm are currently in late stage 3 or stage
4; they make available a wide range of automated processes and business operations
deployments based on virtualized computing resources and services. Sun, with its N1
architecture, Grid compute utility and StorEdge services, provides virtualization of data
center resources, dynamic allocation of IT applications, and automation of installation,
configuration, accounting and reporting deployed on a per-service basis. HP
Adaptive Enterprise is HP's shift to utility computing development. The HP strategy
is to deliver virtualization technology and utility computing services at different
product levels: individual or element-based virtualization, integrated virtualization,
and metered, managed and instant capacity operations. While the diversity of utility-
like options is substantial and HP is acting as a typical IT utility provider, there are
some strategic HP advances in servers, storage, and imaging and printing services. In
2006, HP won two multiyear $440M utility computing contracts from the United
States Federal Government. Based on its worldwide communication network,
specialized services and cross-platform expertise, HP deploys adaptive infrastructure
using HP Integrity and HP ProLiant servers, and delivers software solutions for
automated server provisioning, configuration, patching and IT asset management. HP
discontinued its monolithic Utility Data Center (UDC) initiative in favor of more
flexible and granular utility computing services, such as imaging and printing
operations, server and storage virtualization and automated provisioning on modular
platforms, to target larger customer groups and a variety of business expectations.
IBM's On Demand strategy is the company's complex utility computing model,
which incorporates infrastructure virtualization and management technologies,
application hosting services and business process operations. IBM has proved its
leading expertise in this realm with many successful utility projects, from modular
business-specific applications to the most comprehensive IT solution, for American
Express, announced in late 2002. "Today American Express is placing itself at the
forefront of a new computer services paradigm," said Doug Elix, IBM senior vice
president and group executive, IBM Global Services. "The utility computing service
delivery model American Express is adopting will give it the flexibility to draw on all
the computing resources, skills and technologies required to support future growth."
The agreement saves American Express hundreds of millions of dollars in information
technology costs, and having IBM's resources on demand gives AmEx the
flexibility to adjust rapidly to changing business needs.
The pragmatism that drives most organizations, as consumers, toward the utility model is
not only immediate cost savings, but also how IT is structured, managed,
accounted for, and used to enable businesses to improve their efficiency and
effectiveness. In today's world, IT differentiation in products or services is unlikely to
be achieved; therefore, more executives are looking to business process innovation as
a key competitive advantage. Virtually all businesses could take advantage by
building out a company-specific platform, employing the best pieces of proven utility
computing options over different timeframes [10]. The timeframe IDC envisages
for the major steps customers will advance through when they incorporate utility
principles methodically and incrementally includes four phases: (1) Virtualization 1.0
– Encapsulation, Resource Sharing and Dynamic Consolidation – 2005; (2)
Virtualization 2.0 – Mobility and Planned Downtime – 2007; (3) Virtualization 2.5 –
Unplanned Load, Alternate Disaster Recovery Workload Balancing – 2009; and (4)
Virtualization 3.0 – Automated Provisioning, Service-Oriented and Policy-Based
Solutions, Variable Costs – 2010+ [8].
The important strategic decision consumers must make concerns the type of computing
utility: private (in-house) utilities, public utilities or hybrid (selective) utilities. The
answer depends on the existing IT resources, infrastructure and professional expertise the
company possesses. Organizations could initiate small pilot projects, examine new
utility-type services, and build expertise and confidence in implementing the new
technologies supporting the utility computing paradigm. According to leading IT research
institutions (Gartner, Forrester, IDC), operational costs are between 55 and 75% of
the total IT costs and are growing at twice the rate of overall expenses. In 2004,
IDC reported that $55 billion was allocated to buying new servers and $95 billion to
manage them, while for 2008 new server spending was expected to reach $60 billion
but the management cost would rise to around $140 billion. Employing utility
computing services, organizations could expect a 30-65% decrease in operational costs
and 50-75% savings in total cost of ownership.
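A back-of-the-envelope calculation, using hypothetical figures chosen only to illustrate how these percentages combine, shows what such savings could mean for a single IT budget:

def estimated_savings(total_it_cost, operational_share, operational_reduction):
    """Savings from cutting operational spending by a given fraction (illustrative only)."""
    operational_cost = total_it_cost * operational_share
    saved = operational_cost * operational_reduction
    return saved, saved / total_it_cost

# Assume a $10M annual IT budget, operations at 65% of it, and a 50% operational cost reduction.
saved, share_of_total = estimated_savings(10_000_000, 0.65, 0.50)
print(f"Saved ${saved:,.0f} per year, i.e. {share_of_total:.0%} of the total IT budget")
# -> Saved $3,250,000 per year, i.e. 32% of the total IT budget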
3 SOA Philosophy and IT Agility-Integration
Service-Oriented Architecture is a software design approach that decomposes business
applications into separate functions or "services" – e.g. check credit history, or open a
new account – that can be used independently of the applications and computing
platforms on which they run. When the individual functions within applications are all
available as discrete building blocks, companies have the ability to integrate and
group them differently to create new capabilities and align them with business processes [9].
This architectural approach is especially applicable when multiple applications and
processes running on various technologies and platforms need to interact with each
other – a recurring scenario within a utility computing environment.
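The sketch below illustrates this decomposition in miniature: the "check credit history" and "open new account" functions from the example above are exposed as small, independent services behind plain interfaces, so that a composite application can call them regardless of where they run. The interfaces and data are invented for illustration and do not correspond to any particular SOA product.

from typing import Protocol

class CreditHistoryService(Protocol):
    def check(self, customer_id: str) -> int: ...          # returns a credit score

class AccountService(Protocol):
    def open_account(self, customer_id: str) -> str: ...   # returns a new account number

# One possible provider implementation; the composite application only sees the interfaces.
class InMemoryCreditHistory:
    def check(self, customer_id: str) -> int:
        return 720 if customer_id.startswith("cust-") else 500

class InMemoryAccounts:
    def __init__(self):
        self._next = 1000
    def open_account(self, customer_id: str) -> str:
        self._next += 1
        return f"ACCT-{self._next}"

def onboard_customer(customer_id: str, credit: CreditHistoryService, accounts: AccountService):
    """Composite business process assembled from two independent services."""
    if credit.check(customer_id) < 600:
        return None
    return accounts.open_account(customer_id)

print(onboard_customer("cust-42", InMemoryCreditHistory(), InMemoryAccounts()))

Because the composite process depends only on the service interfaces, either implementation could be replaced by a remote Web service without changing the business logic.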
SOA is a logical way of designing a software system to deliver services, to either
end-user applications or other services distributed over-the-net, through published and
discoverable interfaces. Basic SOA defines an interaction between software
agents as an exchange of messages between a service client and a service provider:
the provider is responsible for publishing a description of the service(s) it provides,
the client for finding the description of the service(s) it requires, and both must be able
to bind to them [14]. SOA offers a model in which
relatively loosely coupled collections of existing IT assets (called services) are reused
and reconnected to provide the functionality required by business applications,
management functions, and infrastructure operations. In this way, supply chain
partners and value nets can more easily integrate business processes and applications
across organizational boundaries and achieve better integration, greater flexibility,
and improved ease of cooperation and collaboration [19].
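The publish-find-bind interaction just described can be sketched with a toy service registry; the registry, service names and endpoints below are assumptions made for illustration, not part of any standard's API.

class ServiceRegistry:
    """Toy registry: providers publish service descriptions, clients find and bind to them."""
    def __init__(self):
        self._services = {}

    def publish(self, name, description, endpoint):
        self._services[name] = {"description": description, "endpoint": endpoint}

    def find(self, name):
        return self._services.get(name)

# Provider side: publish a description of the offered service.
registry = ServiceRegistry()
registry.publish("credit-check", "Returns a credit score for a customer id",
                 endpoint=lambda customer_id: 720)

# Client side: find the description, then bind to the endpoint and invoke it.
entry = registry.find("credit-check")
if entry is not None:
    score = entry["endpoint"]("cust-42")     # binding reduced to a direct call in this sketch
    print(entry["description"], "->", score)

In a production SOA the registry role is played by a service directory such as UDDI, the description by WSDL or a comparable contract, and the binding by a SOAP or REST call rather than a direct function invocation.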
There are many real-world examples of single-organization or inter-organizational SOA at
work. Amazon uses SOA to create a sales platform with 55 million active customers
and more than one million retail partners worldwide. Up to 2001, Amazon ran a
monolithic Web server application, very inflexible and vulnerable to failures, that
created the customer and vendor interfaces and a catalog. Today, Amazon's
operation is a collection of hundreds of services delivered by a number of application
servers that provide the customer interface, the customer service interface, the seller
interface, billing and many third-party Web sites that run on Amazon's SOA platform
[5]. A typical example of inter-organizational SOA coordination is Dollar Rent A Car's
use of Web services to link its online booking system with Southwest Airlines' Web
site. Although the two companies' systems are based on different technology platforms,
a person booking a flight on Southwest.com can reserve a car from Dollar without
leaving the airline's Web site. Dollar used Microsoft .NET Web services technology
as an intermediary to get Dollar's reservation system to share data with Southwest's
information systems [11].
Virtually all major IT leaders, such as IBM, Microsoft, Oracle, SAP, Sun and HP, and
software vendors specialized in SOA, such as BEA Systems, TIBCO Software,
Sybase, Xcalia, Systinet and Zend Technologies, provide tools or entire
platforms for building SOA services, integrating software applications and easing the
deployment of business operations using Web services. Many of the above-listed companies
are collaborating to identify requirements and design standards that make
data and application services easier to build and maintain with recent and
forthcoming products, and that protect existing resources and investments. Current
publications and discussions in various forums, including the Open SOA
Collaboration alliance, support the need for an explicit Data Access Service initiative
that builds upon SDO (Service Data Objects) and SCA (Service Component
Architecture) to standardize important aspects of data services from the consumer's
viewpoint [2].
According to a new Evans Data Corp. study, enterprise adoption of service-
oriented architecture is expected to double over the next two years. Evans Data's
recently released Corporate Development Issues Survey showed that nearly one-
fourth of the enterprise-level developers surveyed said they already have SOA
environments in place, and another 28 percent plan to do so within the next 24 months
[16]. To facilitate the evolution of applications and respond more quickly to
consumer and business needs, the leading companies in SOA structure
shared IT services better by designing horizontal and vertical layers such as presentation
services with profiles, business process and activity services, data and connectivity
services, SOA messaging, event processing, management, security and governance.
Layering functionality enables IT systems to offer efficiently tailored capabilities to a
wide variety of service consumers and to adapt easily to new business conditions by
generating updated services or composing new applications [1].
4 The Implication of SOA within UC Models
Business and IT customers want to achieve greater agility in their business processes
and a larger variety of service applications, whereas utility computing providers want to
reduce costs by consolidating computing power, data storage, information services
and network infrastructure. How well do UC models combine with the agility
provided by the SOA philosophy to enable continuous optimization of business
processes and to satisfy both providers and consumers?
The primary technologies that support the utility computing model, such as
virtualization, SOA and provisioning, can mutually interact, be complementary to one
another and become key enablers of the flexibility, efficiency and agility of utility-like IT
services if they are designed and implemented correctly. Dynamic provisioning on the
service provider side of the SOA, using virtualization techniques, offers significant
gains to utility providers when they build their SOA services on a virtualization
platform. There are also benefits from virtualization on the application or service
consumer side, based on removing the traditional operating system and allowing more
virtual machines to be accommodated on a physical machine. Partial virtualization of
the SOA service infrastructure is advisable if the service consumer is new either to
virtualization or to SOA technology. Parts of the service may reside on a virtual
platform, while other parts of it may reside on a physical one. A J2EE application can
communicate with legacy mainframe systems using older protocols while at the same
time presenting SOA interfaces to its consumers. The J2EE and web services
implementation can live on virtual infrastructure while the older legacy systems remain on
their original physical platform. This scheme is followed today with Java/J2EE applications
that communicate with legacy systems, and it is equally applicable to the SOA world.
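The wrapping pattern described here, a legacy back end hidden behind a service interface, can be sketched as follows; the fixed-width "mainframe" record format and the service operation are fictitious stand-ins for whatever older protocol an organization actually runs.

class LegacyMainframeGateway:
    """Pretend legacy back end that only understands fixed-width request records."""
    def send(self, record: str) -> str:
        account = record[0:10].strip()
        return f"{account:<10}{'0001250.75':>12}"    # canned balance reply

class AccountBalanceService:
    """SOA-style facade: exposes a clean operation while delegating to the legacy protocol."""
    def __init__(self, gateway: LegacyMainframeGateway):
        self._gateway = gateway

    def get_balance(self, account_id: str) -> float:
        request = f"{account_id:<10}BAL       "      # build the fixed-width legacy record
        reply = self._gateway.send(request)
        return float(reply[10:22])                   # translate the reply into a typed result

service = AccountBalanceService(LegacyMainframeGateway())
print(service.get_balance("ACCT-42"))                # consumers never see the legacy format

The facade can be hosted on virtual infrastructure and published as a Web service, while the legacy system it wraps stays on its original physical platform, exactly as described above.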
When SOA is implemented across the service provider's infrastructure, a large
collection of services is likely to be present. At a minimum, each SOA service
requires a copy of itself running on a separate platform to achieve a level of
fail-over and load balancing. When a set of services undergoes an unexpected
increase in demand, the whole system must be capable of flexing its processing power
to meet that demand, based on the authority and capability of the resource
management tools to expand the pool of resources and services [20].
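A simple way to picture this flexing is a rule that keeps a minimum number of replicas per service and adds more when demand rises; the replica counts, per-replica capacity and service names below are illustrative assumptions rather than the behaviour of any specific resource manager.

def required_replicas(requests_per_sec, capacity_per_replica, min_replicas=2):
    """At least `min_replicas` copies for fail-over, more when demand exceeds capacity."""
    needed = -(-requests_per_sec // capacity_per_replica)   # ceiling division
    return max(min_replicas, int(needed))

# Hypothetical demand spike across a set of services (requests per second).
demand = {"credit-check": 40, "open-account": 900, "billing": 260}
for service, rps in demand.items():
    print(service, "->", required_replicas(rps, capacity_per_replica=100), "replicas")
# credit-check keeps its 2 fail-over copies; open-account flexes up to 9, billing to 3.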
As utility computing becomes more typical, provider and consumer companies
have to analyze and reengineer their organizational IT resources and processes, and
have to develop corresponding changes to the IT infrastructure. When implementing
SOA, the role of the IT infrastructure shifts toward managing the services that
support business processes and therefore leads to more efficient business results. IT
architects with detailed knowledge and understanding of the company's business
needs, processes and expectations have to be involved, with more collaborative effort
between previously disconnected professionals such as business analysts, infrastructure
analysts and application IT analysts, to specify and design a new utility computing,
service-oriented infrastructure. A Service-Oriented Infrastructure, as a shared pool of
infrastructure resources that can be dynamically manipulated to align with application
requirements, provides a more adaptive and better-performing utility computing
environment.
As the new ideas for innovation in technology always come from customers and
not from technology companies, consumers have a chance to keep defining and
refining their requirements so that vendors and service providers can give
them what they need. The expansion of SOA will stimulate additional performance,
flexibility and scalability within services and applications, owing to the immense increase
in componentization and standardization in both providers' and consumers' utility
computing infrastructures. By synthesizing the SOA philosophy, deploying it and
realizing business value within the utility computing model based on the principle of creative
frugality (getting the most out of what already exists, rather than replacing
technologies that are working effectively), businesses can attain a more rapid return on
a lower investment by acquiring the tools and services that make those technologies
more productive and efficient.
5 Conclusions
The paper characterizes the utility computing technologies and the paradigm shifts
consumers, vendors and providers face when applying a utility computing model
partially (selectively) or completely. The role and the advances of the SOA approach in
this process of utilizing IT services, by composing and reusing business-required
applications in a utility computing environment, have been discussed. Service-oriented
computing is a new, enormously complex and challenging trend implementing many
technologies that must be elaborated in a coherent manner. The framework of SOC
may bring more complexity, but also a logical classification for creating composite in-house
solutions with external components residing in a virtual utility provider environment.
References
1. BEA Systems: BEA’s SOA Reference Architecture: A foundation for Business Agility, BEA
Systems, Inc. San Jose, CA 95131, U.S.A. (2008)
2. Carey, M.: SOA What? Computer, March 2008, Volume 41, Number 3, IEEE Computer
Society, NY 10016, U.S.A. (2008)
3. Carr, N.: The End of Corporate Computing. MIT Sloan Management Review, Vol. 46 No.
3, Cambridge, Massachusetts, U.S.A. (2005)
4. David, P., Wright, G.: General Purpose Technologies and Surges in Productivity:
Historical Reflections on the Future of the ICT Revolution. Oxford University Press for the
British Academy (2003)
5. Grey, J.: Learning from the Amazon Technology Platform, ACM Queue, No. 4, U.S.A.
(2006)
6. Gray, P.: Manager’s Guide to Making Decisions about Information Systems. John Wiley &
Sons, NJ, U.S.A. (2006)
7. Hammond, S.: Utility Computing: Building the blocks. ComputerWorld, Hong Kong (2006)
8. Humphreys, J.: Themis Delivers Policy-Based Automation Across an Application Portfolio.
IDC, MA, U.S.A. (2007)
9. IBM Global Business Services: Changing the way industries work: The impact of service-
oriented architecture, IBM Global Services, Route 100, Somers, NY 10589, U.S.A. (2006)
10. Ivanov, I.: Utility Computing: Reality and Beyond. In: ICE-B’07, International Conference
on E-Business (2007)
11. Laudon, K., Laudon, J.: Management Information Systems: Managing the Digital Firm
(10th edition), Pearson Prentice Hall, Upper Saddle River, NJ 07458, U.S.A. (2006)
12. Mendoza, A.: Utility Computing: Technologies, standards, and strategies. Artech House,
Norwood, MA 02062, U.S.A. (2007)
13. Microsoft Corp.: SoftGrid® v4: Application Virtualization and Streaming. U.S.A. (2006)
14. Papazoglou, M. and Ribbers, P.: e-Business: organizational and technical foundations,
John Wiley and Sons, West Sussex, England (2006)
15. Roberts, J. and Yacono, J.: Server Virtualization Offers Many Opportunities. CRN:
Iss.1076, NY, U.S.A. (2003)
16. SOA World Magazine: SOA Adoption to Double in Enterprise. Available:
http://soa.sys-con.com/read/358785.htm (May 2008)
17. The 451 Group: Grid Technology User Case Study: JP Morgan Chase. The 451 Group
Report, NY, U.S.A. (2003)
18. Thickens, G.: Utility Computing: The Next New IT Model. Available online at:
http://www.darwinmag.com/read/040103/utility.html (July 2007)
19. Turban, E., King, D., McKay, J., Marshall, P., Lee, J., and Viehland, D.: Electronic
Commerce 2008: A Managerial Perspective, Pearson Prentice Hall, Upper Saddle River,
NJ 07458, U.S.A. (2008)
20. VMware: SOA and Virtualization: How Do They Fit Together? A White Paper from BEA
and VMware, Palo Alto, CA 94304, U.S.A. (2007)