by means of comprehensive emulation-based experiments on real protocol implementations.
The results show that losses at the end of a connection increase the response times of web services by perceptible amounts in practically all cases, with some configurations resulting in response time increases larger than 1.7 seconds.
This paper is structured as follows. The next section provides background on web services and TCP. The following section describes the experimental setup and results. Thereafter, possible solutions to the problems evident in the results are discussed, and lastly the conclusions are provided.
2 BACKGROUND
2.1 Web Services Background
Web services can be seen as a means to enable distributed computing. In this respect, they share some of the goals of technologies such as CORBA (Object Management Group, 2004), RPC (Srinivasan, 1995) and Java RMI (Sun Microsystems, Inc., 2004). Web services strive to enable remote execution with a minimum of interdependency between the parties. One way of accomplishing this is the use of open, platform-neutral technologies such as HTTP, SOAP and
XML (Bray et al., 2006). Although web service mes-
sages are typically exchanged between program pro-
cesses without direct involvement of the users in the
web service transaction, a human user is often the ini-
tiator of the action that causes the web service trans-
fer to take place. A human user is often also the end
consumer of the information resulting more or less
indirectly from the web service transaction. Low response times are therefore important in order to minimize the user discomfort caused by waiting before receiving any feedback.
Furthermore, the spread of techniques such as web service mashups (Lerner, 2006) further highlights the importance of delay characteristics. Mashups are compositions of two or more web services and are typically intended for direct end-user usage.
vice transactions that are involved in one user interac-
tion, the higher the risk that at least one transaction
will be subject to a packet loss with a resulting unde-
sirable increase in response time. The response time
of a web service can be subdivided into smaller com-
ponents. One subdivision can be made between pro-
cessing delays and network delays. Processing delays
are a function of the processing needed at the client
and server to create messages, parse messages and perform the actual execution. Network delays are caused by
the delays inherent in transferring the requests and re-
sponses between the client and server. In this paper,
the focus is on the network delays, and how they are
affected in the presence of loss.
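The mashup risk noted above can be illustrated with a rough calculation; the per-transaction loss probability used here is an assumed figure for illustration, not a measured value. If each of n transactions independently experiences a loss with probability p, the probability that at least one is affected is 1 - (1 - p)^n:

```python
# Rough illustration: probability that at least one of n independent
# web service transactions in a mashup experiences a packet loss.
# The 2% per-transaction loss probability is an assumed figure for
# illustration only, not a value measured in this study.

def p_any_loss(n: int, p: float) -> float:
    """P(at least one of n independent transactions sees a loss)."""
    return 1.0 - (1.0 - p) ** n

# A mashup of five transactions with an assumed 2% loss probability each:
print(round(p_any_loss(1, 0.02), 4))  # -> 0.02
print(round(p_any_loss(5, 0.02), 4))  # -> 0.0961
```

Under these assumed figures, composing five transactions raises the chance that at least one is delayed by a loss from 2% to nearly 10%.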
2.2 Transport Layer Background
TCP is the transport layer protocol used by web services. While the original TCP has existed for over 30 years, it has continuously been updated, and continues to be updated, to address new challenges posed by evolving communications technology. In the context of this study, the most relevant aspects of TCP functionality are the reliability and congestion control mechanisms, as these determine how TCP handles losses. A TCP sender has two mechanisms to detect losses: fast retransmit and timeout.
Fast retransmit (Allman et al., 1999) occurs when
the sender receives three duplicate acknowledge-
ments. The duplicate acknowledgements are sent by
the receiver when it receives packets out-of-order.
The reason for an out-of-order packet is either that
packets have been reordered in the network, or that a
packet has been lost in the network, causing all the
following packets to be out-of-order. The fast retransmit threshold of three duplicate acknowledgements was chosen as a trade-off between having the sender mistakenly treat reordered packets as lost, and the delay before a lost packet is retransmitted.
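The sender-side counting logic described above can be sketched as follows; this is a simplified model for illustration (ignoring, e.g., SACK-based variants), not an excerpt from any real TCP implementation. Cumulative acknowledgements carry the sequence number of the next expected segment, so a repeated acknowledgement signals an out-of-order arrival:

```python
# Simplified sketch of sender-side fast retransmit detection:
# three duplicate acknowledgements trigger a retransmission.
# Not an excerpt from a real TCP stack.

DUPACK_THRESHOLD = 3  # trade-off between reordering tolerance and delay

def process_acks(acks):
    """Return the sequence numbers the sender would fast-retransmit."""
    last_ack = None
    dup_count = 0
    retransmitted = []
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count == DUPACK_THRESHOLD:
                # Third duplicate ACK: assume segment `ack` was lost.
                retransmitted.append(ack)
        else:
            last_ack = ack
            dup_count = 0
    return retransmitted

# Segment 2 of five is lost: segment 1 produces the first ACK for 2,
# and segments 3-5 each produce a duplicate ACK for 2.
print(process_acks([2, 2, 2, 2]))  # -> [2]
```

With only two duplicates, e.g. `process_acks([2, 2, 2])`, no fast retransmit occurs and the sender must fall back on a timeout.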
Timeouts occur when the TCP sender has not received an acknowledgement for a certain period of time. In order to avoid retransmitting packets that are not lost but merely delayed in the network, the timeout value is conservatively calculated as a function of the round-trip time as measured by the returning acknowledgements. For the timeout case, the trade-off is thus between a timeout short enough to detect losses without unnecessary delay, but not so short as to induce unnecessary retransmissions when a packet is merely delayed.
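The standard computation of the timeout value (Jacobson's algorithm, standardized in RFC 2988) maintains a smoothed round-trip time estimate and its variation. A minimal sketch, using the RFC's recommended constants and one-second minimum; the RTT samples below are assumed values for illustration:

```python
# Sketch of the standard TCP retransmission timeout (RTO) calculation,
# following RFC 2988 (alpha = 1/8, beta = 1/4, K = 4, 1 s minimum).

class RtoEstimator:
    ALPHA, BETA, K = 1 / 8, 1 / 4, 4
    MIN_RTO = 1.0  # RFC 2988 recommends a minimum of 1 second

    def __init__(self):
        self.srtt = None    # smoothed round-trip time
        self.rttvar = None  # round-trip time variation

    def update(self, rtt: float) -> float:
        """Feed one RTT sample (seconds); return the new RTO."""
        if self.srtt is None:
            # The first measurement initializes both estimators.
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:
            self.rttvar = (1 - self.BETA) * self.rttvar \
                + self.BETA * abs(self.srtt - rtt)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt
        return max(self.MIN_RTO, self.srtt + self.K * self.rttvar)

est = RtoEstimator()
for sample in (0.100, 0.110, 0.095):  # assumed RTT samples (seconds)
    rto = est.update(sample)
print(round(rto, 3))  # -> 1.0
```

Note that for the sub-second round-trip times typical of web transfers, the recommended one-second minimum dominates, which is one reason a timeout costs so much more response time than a fast retransmit.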
Out of the two described loss detection mechanisms, fast retransmit is the more desirable as it will lead to faster loss detection in practically all cases¹.
However, there are cases when fast retransmit cannot be used, and one important case in the web services context is at the end of connections. If there are too few packets to send after a loss, the receiver will not be able to generate the required number of duplicate acknowledgements. For the short connections typical in web services, this sensitive period late in the con-

¹ Fast retransmit also has a gentler congestion response than timeout, but in the present study this has practically no effect since the examined web service transfers are so short.
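The end-of-connection problem described above can be made concrete with a simple count: only segments transmitted after the lost one can elicit duplicate acknowledgements, so a loss among the last few segments of a response cannot reach the fast retransmit threshold. A simplified sketch (ignoring refinements such as Limited Transmit and SACK):

```python
# Simplified illustration of why losses at the end of a connection
# escape fast retransmit: only segments sent *after* the lost one can
# trigger duplicate ACKs, and fast retransmit requires three of them.
# Ignores refinements such as Limited Transmit and SACK.

DUPACK_THRESHOLD = 3

def detection(total_segments: int, lost_segment: int) -> str:
    """How the sender would detect the loss of segment `lost_segment`."""
    dupacks = total_segments - lost_segment  # segments arriving after the loss
    return "fast retransmit" if dupacks >= DUPACK_THRESHOLD else "timeout"

# In a 10-segment response, losses near the end must wait for a timeout:
for lost in (5, 7, 8, 10):
    print(lost, detection(10, lost))
```

For a 10-segment response, a loss of segment 7 or earlier still yields three duplicate acknowledgements, whereas a loss of any of the last three segments forces the sender to wait for a timeout.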
WEBIST 2007 - International Conference on Web Information Systems and Technologies
276