data and too fast. In order to avoid long delays when
there is no response from the receiver in a TCP
connection, a time-out mechanism is employed.
In addition, TCP uses a congestion control mechanism
to avoid packet drops due to a lack of resources
and buffer space.
In wireless systems, most errors are due to the lossy
medium: on a wireless channel, the main cause of packet
loss is often the high BER of the channel rather than
network congestion. The low efficiency of TCP over a
wireless channel is therefore a direct result of TCP
misinterpreting packet loss caused by a high channel
error rate as loss caused by congestion. In order to
enhance the QoS seen by the TCP layer on a wireless
link, a radio link control (RLC) protocol is generally
introduced at the link layer. Typically, the RLC uses an
ARQ error recovery mechanism to improve the QoS
(3GPP TS 25.322, 2007).
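The benefit of link-layer ARQ can be illustrated with a stop-and-wait sketch of the kind of retransmission scheme the RLC relies on; the frame error rate, retransmission limit, and seed below are illustrative assumptions, not values from the 3GPP specification:

```python
import random

def simulate_arq(num_frames, frame_error_rate, max_retx, seed=0):
    """Count how many frames a simple stop-and-wait ARQ delivers over a
    lossy link, and how many transmissions that costs (hypothetical
    parameters, not taken from the RLC specification)."""
    rng = random.Random(seed)
    delivered = 0
    transmissions = 0
    for _ in range(num_frames):
        for _attempt in range(1 + max_retx):
            transmissions += 1
            if rng.random() >= frame_error_rate:  # frame arrived intact
                delivered += 1
                break                              # ACK received, next frame
    return delivered, transmissions

# With a 10% frame error rate and up to 3 retransmissions, the residual
# loss rate drops to 0.1**4 = 1e-4, at the cost of ~11% extra transmissions.
delivered, tx = simulate_arq(num_frames=1000, frame_error_rate=0.1, max_retx=3)
print(delivered, tx)
```

This is the essential trade-off the paper studies: ARQ hides most channel errors from TCP, in exchange for extra link-layer transmissions and delay.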
RLC is a protocol above MAC and below RRC.
Every outgoing TCP packet is put into an interface
buffer, from which it is picked up by the RLC. The RLC
is responsible for error and flow control of the frames
(via the ARQ mechanism) and provides transparent
mode (TM), unacknowledged mode (UM) and
acknowledged mode (AM) services. The RLC
breaks the TCP packet into 10-ms frames and sends
them to the MAC. The MAC chooses a user queue
according to the scheduling mechanism and, after
adding a MAC header, sends the frames to the physical
layer (Chockalingam and Zorzi, 1999) and (Borgonovo,
2001). In this paper we review the effects of the ARQ
protocol on TCP throughput and show how it
improves the throughput. In Section 2 we define the
system model, and in Section 3 we simulate the TCP
and TCP/ARQ protocols in wired and wireless
systems. Finally, conclusions are offered.
2 SYSTEM MODEL
TCP operates at layer 4 and resides in the hosts at the
end nodes, but it is not part of the UMTS network.
Implementations of TCP contain four intertwined
algorithms: slow start, congestion avoidance, fast
retransmit and fast recovery (RFC, 2001). Although
TCP has been designed, optimized and tuned for
wired networks, where it reacts to packet loss due to
congestion, in wireless systems service degradation
can also be due to bit (packet) errors. In UMTS, the TCP
and ARQ protocols counteract loss and error in the
wired and wireless sections, respectively.
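The interplay of these four algorithms can be illustrated with a textbook-level congestion-window trace; this is a sketch of the generic TCP behavior, not the authors' simulation model, and the event abstraction (one "ack" per RTT) is ours:

```python
def tcp_cwnd_trace(events, ssthresh=16):
    """Trace the congestion window (in segments) through slow start,
    congestion avoidance, and the two loss reactions. A simplified
    per-RTT sketch, not a full TCP implementation."""
    cwnd = 1
    trace = []
    for ev in events:
        if ev == "ack":
            if cwnd < ssthresh:
                cwnd *= 2          # slow start: exponential growth per RTT
            else:
                cwnd += 1          # congestion avoidance: linear growth
        elif ev == "triple_dup":   # fast retransmit / fast recovery
            ssthresh = max(cwnd // 2, 2)
            cwnd = ssthresh        # window halved
        elif ev == "timeout":      # retransmission time-out
            ssthresh = max(cwnd // 2, 2)
            cwnd = 1               # window collapses to one segment
        trace.append(cwnd)
    return trace

print(tcp_cwnd_trace(["ack"] * 5 + ["triple_dup"] + ["ack"] * 2 + ["timeout"]))
# [2, 4, 8, 16, 17, 8, 9, 10, 1]
```

The trace makes the key asymmetry visible: a triple-duplicate ACK only halves the window, whereas a time-out resets it to one segment, which is why misattributed wireless losses are so costly for TCP throughput.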
TCP in a wireless network faces several
challenges. One of the issues is how to deal with
spurious timeouts caused by abruptly increased
delay, which trigger unnecessary retransmissions
and congestion control. It is known that the link-
layer error recovery scheme, the channel scheduling
algorithm, and handover often make the link latency
very high. The bandwidth of the wireless link often
fluctuates because the wireless channel scheduler
assigns a channel to a user for a limited time. Thus,
the variance of the inter-packet arrival time becomes
high, which may result in spurious timeouts. The
Eifel algorithm has been proposed to detect
spurious timeouts and to recover by restoring the
connection state saved before the timeout
(Wennstrom, 2004) and (Gurtov, 2003).
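The core of the Eifel detection idea can be sketched with TCP timestamps: if the first ACK that arrives after a timeout echoes a timestamp older than the retransmission's send time, it must acknowledge the original segment, so the timeout was spurious. A simplified check (the function name and argument names are ours):

```python
def is_spurious_timeout(retransmit_ts: int, ack_echo_ts: int) -> bool:
    """Eifel-style check (simplified): the ACK echoes the timestamp of
    the segment it acknowledges. If that echoed timestamp predates the
    retransmission, the ACK is for the original transmission, so the
    timeout (and the retransmission) were unnecessary."""
    return ack_echo_ts < retransmit_ts

# Original segment sent at t=100; the RTO fires and we retransmit at
# t=150; an ACK then arrives echoing timestamp 100 -> it acknowledges
# the original copy, so the timeout was spurious.
print(is_spurious_timeout(retransmit_ts=150, ack_echo_ts=100))  # True
```

Upon such a detection, the Eifel algorithm restores the congestion-control state saved before the timeout instead of collapsing the window.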
Although the packet loss rate of the wireless link
has been reduced by link-layer retransmission
and Forward Error Correction (FEC), losses still
occur because of poor radio conditions and
mobility. Therefore, non-congestion errors can
sharply decrease the TCP sending rate. Link-layer
retransmission may also cause packet reordering at
the TCP layer, which again results in unnecessary
retransmissions and congestion control. In wireless
networks, in general, the bandwidth and latency in
the uplink and downlink directions differ. Hence, the
throughput over the downlink may be decreased
because of ACK congestion on the uplink
(Lee, 2006).
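The uplink ACK bottleneck can be quantified with a back-of-the-envelope bound: if the uplink can carry at most a given number of ACKs per second, and each cumulative ACK clocks out b data segments, then downlink throughput cannot exceed their product. The numbers below are hypothetical, chosen only to illustrate the arithmetic:

```python
def ack_limited_throughput(uplink_ack_rate: float, b: int, mss_bytes: int) -> float:
    """Upper bound (bit/s) on downlink TCP throughput when the uplink
    delivers at most `uplink_ack_rate` ACKs per second and one
    cumulative ACK covers b segments of `mss_bytes` bytes each.
    An illustrative model, not a result from the paper."""
    return uplink_ack_rate * b * mss_bytes * 8

# e.g. 50 ACKs/s on a congested uplink, one ACK per b=2 segments,
# 1460-byte MSS: the downlink is capped well below its nominal rate.
print(ack_limited_throughput(50, 2, 1460) / 1e6)  # ≈ 1.17 Mbit/s
```

Even a fast downlink is thus throttled whenever the reverse path cannot return ACKs quickly enough.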
Now we consider a TCP connection between two
hosts such that the first link on the end-to-end path
from the sender to the receiver is a wireless radio
link (Lee, 2006) and (Canton, 2001). Such a scenario
is common in mobile communication and is
illustrated in Figure 1(a). The protocol stack on the
path from the mobile host to the fixed host is
illustrated in Figure 1(b).
We assume there is no packet loss due to
congestion on the wireless link, but some packets
may be corrupted under adverse radio link
conditions. In our study, we assume that the bit error
patterns on the radio link are independent. On the
wired network, packets may only get lost when
congestion occurs.
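Under the independent bit-error assumption, the packet error rate on the radio link follows directly from the BER, since a packet is lost whenever any of its bits is corrupted; the BER and packet size below are illustrative:

```python
def packet_error_rate(ber: float, packet_bytes: int) -> float:
    """Probability that a packet is corrupted when bit errors are
    independent and identically distributed:
        PER = 1 - (1 - BER)**(8 * L)
    where L is the packet length in bytes."""
    return 1.0 - (1.0 - ber) ** (8 * packet_bytes)

# e.g. a BER of 1e-5 already corrupts roughly one in nine 1500-byte packets
print(round(packet_error_rate(1e-5, 1500), 4))
```

This is why even a modest channel BER translates into a packet loss rate large enough to trigger TCP's congestion response without any ARQ protection.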
As described in (Lee, 2006) and (Chahad, 2003),
we assume that TCP sends one cumulative TCP
ACK for b consecutive TCP segments and is
always in congestion avoidance. Besides, packet
loss is detected in one of two ways: either upon
reception of a triple-duplicate TCP ACK (denoted
by TD), or upon expiration of a time-out (denoted
by TO). In the case of a TD, the window size is decreased by
half, while upon expiration of a TO it is decreased to
1. Moreover, we assume that the loss behavior is
SIGMAP 2008 - International Conference on Signal Processing and Multimedia Applications