Dynamic Web Workload Distribution Test from 0 Rps to 1000 Rps on
Cluster-based Web Server System with Locality-based
Least Connection Algorithm
Nongki Angsar, Maria D. Badjowawo and Marthen Dangu Elu Beily
Electrical Engineering Department, State Polytechnic of Kupang, Kupang, Indonesia
Keywords:
Distribution Test, Web Server, Cluster.
Abstract: The growth of web traffic and network bandwidth, which outpaces the growth of microprocessors these
days, means that a single-server platform is no longer adequate to meet the scalability requirements of web
server systems. A multiple-server platform is the answer. One recognized solution is the cluster-based
web server system. This research performed dynamic web workload distribution tests on a cluster-based web
server system by generating HTTP workloads dynamically, with a continuously changing HTTP request rate from
0 requests per second (rps) to 1000 rps, from a client to a pool of web servers. The results of this
dynamic testing with a continuously changing HTTP request rate from 0 rps to 1000 rps show that HTTP
requests were well distributed across the web server pool by the Locality-Based Least Connection algorithm.
HTTP reply rate, TCP connection rate, and throughput tend to increase linearly with the HTTP
request rate, while response time and error rate remain almost zero as the HTTP request rate grows.
The correlation between this linearity and the near-zero error is that, from 0 rps to 1000 rps, almost all
HTTP requests were answered by the pool of servers.
1 INTRODUCTION
Along with the growing complexity of web services and
applications in many areas, web service
requests from users have become progressively higher.
Examples of popular web services and applications are
business services and applications (e-business),
education (e-learning), news (e-news), and others.
Network infrastructure and computer communication
have also improved considerably in recent years.
The application of optical fibre in cables
(Roger, 1998), Gigabit Ethernet on LANs
(William, 2000), broadband ISDN on WANs
(William, 2000), xDSL digital transmission on
telephone lines (William, 2000), and cable modems
has made network bandwidth larger. A prediction
made by George Gilder in 1995 even
stated that network bandwidth would
triple every year (Gray, 2000). This
prediction still holds, especially for optical
fibre, according to an article written in 2008 (Gilder, 2008).
On the other side, computer growth (the number of
transistors in a microprocessor chip), according to the
prediction of Intel founder Gordon Moore in 1965,
would only double every 18 months (Intel,
2003). This
prediction has proven accurate over the years
and is usually referred to as Moore's Law.
According to these two predictions, network
bandwidth will grow roughly twice as fast as
computers, and the likely bottleneck will
lie on the server side.
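The gap between the two predictions can be made concrete with a quick back-of-the-envelope calculation (a sketch only; the growth factors are the ones cited above, not measured values):

```python
# Comparing the two growth predictions cited above:
# Gilder: network bandwidth triples every year.
# Moore:  transistor count doubles every 18 months.

# Annual growth factor of network bandwidth
bandwidth_per_year = 3.0

# Annual growth factor of transistor count:
# x2 every 18 months -> 2^(12/18) per 12 months
compute_per_year = 2.0 ** (12 / 18)

ratio = bandwidth_per_year / compute_per_year
print(f"compute grows {compute_per_year:.2f}x per year")
print(f"bandwidth grows {ratio:.1f}x faster than compute per year")
```

Tripling per year against doubling per 18 months yields an annual ratio of about 1.9, which is the roughly twofold gap between bandwidth growth and computer growth noted above.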
2 LITERATURE REVIEW
According to Cardellini et al. (Valeria, 2001), there
are two efforts that can be made: (1) the scale-up effort
(single-server platform) and (2) the scale-out effort
(multiple-server platform). The first effort is good enough
but has several weaknesses. First, it requires
considerable expense to keep pace with recent technology.
Second, it cannot eliminate the fact that the single point
of failure (SPOF) is the server itself. Third, availability
and continuity are disturbed while server
scalability is being improved. Fourth, replacement with new
hardware makes old hardware tend to become useless in the
system.
The second effort, on the contrary, is cheaper