forbidden symbol to detect errors in arithmetic codes,
is described, and novel improvements are presented.
The placement of the forbidden symbol is modified to decrease the delay from the introduction of an error to its detection. The arithmetic decoder is
modified for quicker detection by the introduction of
a look-ahead technique. The calculation of the MAP
metric is also modified for faster error detection.
Section 2 contains an overview of arithmetic cod-
ing and existing joint source-channel coding tech-
niques based on arithmetic coding, with particular
attention to the maximum a posteriori (MAP) esti-
mation approach by Grangetto et al. (2005). These
schemes have recently received greater attention in
the literature (Bi, Hoffman and Sayood, 2010). Sec-
tion 3 presents novel improvements to the MAP joint
source-channel coding scheme. Section 4 presents ex-
perimental results. Finally, Section 5 draws conclu-
sions.
2 ERROR CORRECTION OF
ARITHMETIC CODES
Arithmetic coding (Rissanen, 1976) is a method for
compressing a message u consisting of a sequence of L symbols u_1, u_2, ···, u_L with differing probabilities of
occurrence. Arithmetic coding requires a good source
model which describes the distribution of probabili-
ties for the input symbols. The source model can be
static or adaptive. In a static model, the distribution of
probabilities remains fixed throughout the message,
that is, it is the same when encoding the first sym-
bol and when encoding the last symbol. In an adap-
tive model, the probability distribution can be updated
from symbol to symbol, so the probability distribution
used to encode the last symbol may be different from
that used to encode the first symbol.
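To make the distinction concrete, the sketch below (in Python, with hypothetical names; it is not the model used elsewhere in this paper) maintains a distribution from running symbol counts. A static model corresponds to never calling update(), so the distribution used for the last symbol is identical to that used for the first.

    from collections import Counter

    class AdaptiveModel:
        """Toy source model whose probabilities track running symbol counts."""

        def __init__(self, alphabet):
            # An initial count of 1 per symbol keeps every probability non-zero.
            self.counts = Counter({s: 1 for s in alphabet})

        def probability(self, symbol):
            return self.counts[symbol] / sum(self.counts.values())

        def update(self, symbol):
            # Adaptive behaviour: the distribution changes from symbol to symbol.
            # A static model would omit this call entirely.
            self.counts[symbol] += 1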
Arithmetic coding can be thought of as represent-
ing a message as a probability interval. At the start
of the encoding process, the interval is the half-open
interval [0, 1), that is, 0 ≤ x < 1. For each symbol u_l to be encoded, this interval is split into sub-intervals with widths proportional to the probability of each possible symbol, and the sub-interval corresponding to the symbol u_l is selected. This interval
gets progressively smaller, so to keep it representable in a computer it is renormalized repeatedly (Witten, Neal and Cleary, 1987). When the interval becomes small enough that its leading bits are determined, those bits are emitted by the encoder and the interval is expanded.
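As an illustration of this interval view, the following Python sketch encodes a binary message against a static model with P(0) = p0. It works directly with floating-point end-points and omits the renormalization and bit-emission machinery of Witten, Neal and Cleary (1987), so it is a conceptual sketch rather than a practical coder (the message and p0 are assumed values).

    def encode_interval(message, p0=0.8):
        """Narrow the interval [low, high) one symbol at a time (binary source)."""
        low, high = 0.0, 1.0
        for u in message:
            width = high - low
            split = low + p0 * width       # boundary between the sub-intervals of 0 and 1
            if u == 0:
                high = split               # keep the sub-interval assigned to symbol 0
            else:
                low = split                # keep the sub-interval assigned to symbol 1
        return low, high                   # any x in [low, high) identifies the message

    print(encode_interval([0, 1, 0, 0]))   # final width = product of the symbol probabilities

In a practical implementation the leading bits of low and high are emitted as soon as they agree, after which the interval is doubled; this is the normalization step referred to above.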
2.1 Error Detection
Arithmetic coding can compress data optimally when
the source model is accurate. However, arithmetic
codes are extremely vulnerable to any errors that
occur (Lelewer and Hirschberg, 1987). Huffman
codes tend to be self-synchronizing, so errors tend
not to propagate very far; when an error occurs in
a Huffman-coded message, several codewords are
misinterpreted, but before long, the decoder is back
in synchronization with the encoder (Lelewer and
Hirschberg, 1987). Arithmetic coding, on the other hand, has no such self-synchronizing ability to withstand errors.
Boyd et al. (1997) propose the introduction of
some redundancy in arithmetic codes. This is done
by forbidding a range from the interval. In common
arithmetic coding techniques, the coding interval is
doubled when required (Witten et al., 1987). Boyd
et al. suggest that the interval be reduced by a factor R each time it is doubled, thereby forbidding part of the interval. With this redundancy introduced, the decoder can detect an error when its decoding interval falls within the forbidden part. The delay from a bit error to its detection is shown to be about 1/(1−R).
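A common way to read the 1/(1−R) figure is to model the decoder's state after a bit error as effectively random, so that each subsequent interval doubling lands in the forbidden part independently with probability 1−R; the detection delay is then geometric with mean 1/(1−R). The Python sketch below simulates only this simplified model, not the coder itself.

    import random

    def mean_detection_delay(R, trials=100_000):
        """Average number of doublings until the forbidden part is hit."""
        total = 0
        for _ in range(trials):
            delay = 1
            while random.random() < R:     # with probability R the error goes unnoticed
                delay += 1
            total += delay
        return total / trials

    for R in (0.9, 0.99):
        print(R, round(mean_detection_delay(R), 1), 1 / (1 - R))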
Instead of rescaling the interval at every normalization (interval doubling), Sayir (1999) suggests introducing forbidden gaps in the interval. After each
source symbol is encoded, the source probabilities are
rescaled by a rescaling factor γ, such that on average,
−log_2 γ bits of redundancy are added for every source symbol. The gap factor ε is defined to be ε = 1 − γ.
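The rescaling factor, the gap width, and the added redundancy are tied together by the relations just stated: rescaling by γ leaves a gap ε = 1 − γ and adds −log_2 γ bits per source symbol, so a target redundancy of r bits per symbol corresponds to γ = 2^(−r). The small Python helper below (hypothetical function names) simply evaluates these relations.

    from math import log2

    def gap_parameters(gamma):
        """Gap width and average per-symbol redundancy for rescaling factor gamma."""
        epsilon = 1.0 - gamma            # width of the forbidden gap
        redundancy = -log2(gamma)        # extra bits added per source symbol
        return epsilon, redundancy

    def gamma_for_redundancy(bits_per_symbol):
        """Rescaling factor giving a target redundancy (bits per source symbol)."""
        return 2.0 ** (-bits_per_symbol)

    print(gap_parameters(0.99))          # -> (0.01, about 0.0145 bits/symbol)
    print(gamma_for_redundancy(0.1))     # -> about 0.933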
2.2 MAP Decoding
The forbidden gap technique is a joint source-channel
method for detecting errors in arithmetic codes. To
perform error correction, we must first encode the
symbols with an encoder that introduces redundancy.
Suppose we have a message u consisting of a sequence of L symbols, u_1, u_2, ···, u_L. We encode this into a bit sequence t, which has N bits, t_1, t_2, ···, t_N.
The bit sequence t is then transmitted over a noisy
channel, and the received signal is y. Figure 1 is a
block diagram of the encoding and decoding process.
The task of the decoder is to infer the message û given the received signal y. If the inferred message û is not identical to the source message u, a decoding error has occurred.
Figure 1: Block diagram of the encoding and decoding process (u → Arithmetic Encoder → t → Channel → y → Arithmetic Decoder → û).
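For reference, the MAP estimate in this notation is the message maximizing the posterior probability, which by Bayes' rule factors into a channel term and a source term; the particular sequential metric used by Grangetto et al. (2005) to approximate this search, and its modification, are discussed later in the paper.

    \hat{u} = \arg\max_{u} P(u \mid y)
            = \arg\max_{u} \frac{P(y \mid u)\, P(u)}{P(y)}
            = \arg\max_{u} P(y \mid u)\, P(u),

since P(y) does not depend on u.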