
2.4 Code stream generation
In the H.264/AVC arithmetic coder, output bits are released from the second most significant bit position of the base register after each single renormalization shift. To solve the problem of carry propagation, ones encountered in series are counted instead of being output. A carry, indicated by the MSB of the base register, triggers the release of a binary one followed by the corresponding number of zeros; otherwise, after a zero bit is encountered, the inverted version of this sequence is output. The procedure requires a counter that records the total number of outstanding bits. Its width must match the maximal possible code stream length to prevent overflow when dealing with an extremely long series. The produced bits are assembled into bytes and released. More than one byte may have to be output at once when the number of outstanding bits is large. If the CABAC accepts one symbol per clock cycle, the design must adapt this rate to the irregular code-byte generation by inserting wait-states.
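
For illustration, a possible C realisation of this bit-release mechanism is sketched below. The names (renorm_output_step, put_bit, bits_outstanding, write_out), the 10-bit base register constants, and the omission of byte packing and of the very first bit are our simplifying assumptions, not the normative procedure.

  /* Carry-safe bit release with outstanding-bit counting (sketch). */
  #include <stdint.h>

  static uint64_t bits_outstanding;   /* wide enough for the maximal stream length */

  static void write_out(int bit);     /* assembles bits into bytes (not shown) */

  /* Release one resolved bit, then flush the buffered run with the inverse value. */
  static void put_bit(int bit)
  {
      write_out(bit);
      while (bits_outstanding > 0) {
          write_out(1 - bit);
          bits_outstanding--;
      }
  }

  /* One output step per renormalization shift of the 10-bit base (low) register. */
  static void renorm_output_step(uint32_t *low)
  {
      if (*low < 0x100) {             /* second MSB is 0: no carry possible, emit 0 */
          put_bit(0);
      } else if (*low >= 0x200) {     /* MSB set: a carry occurred, emit 1 */
          *low -= 0x200;
          put_bit(1);
      } else {                        /* still undecided: count the bit as outstanding */
          *low -= 0x100;
          bits_outstanding++;
      }
      *low <<= 1;                     /* the renormalization shift itself */
  }
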
The JPEG 2000 arithmetic coder is free of this problem, since at most two code bytes can be produced after processing a single symbol. The algorithm requires keeping the last generated byte in a buffer so that a carry can still be completed in it. If this byte equals 0xFF, the control logic inserts one stuffing bit into the MSB position of the following byte; this bit assumes the zero value so that it can trap a carry. A dedicated down-counter keeps track of the bits of the base register that have not been released so far. For higher performance, in both compression standards the code stream generation is best mapped onto a separate pipeline stage.
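
The corresponding byte output of the MQ coder can be sketched in C as follows. The structure mirrors the BYTEOUT procedure of JPEG 2000; the state names (C, B, CT) follow the standard's description, while the exact register-layout constants and the emit_byte helper are assumptions that should be checked against the specification.

  /* MQ-coder byte output with bit stuffing (sketch). */
  #include <stdint.h>

  static uint32_t C;    /* code (base) register, carry in bit 27 */
  static uint8_t  B;    /* last generated byte, buffered so a carry can complete in it */
  static int      CT;   /* down-counter of shifts remaining before the next byte-out */

  static void emit_byte(uint8_t byte);   /* appends a byte to the code stream (not shown) */

  static void byte_out(void)
  {
      if (B == 0xFF) {                /* previous byte was 0xFF: stuff a bit */
          emit_byte(B);
          B  = (uint8_t)(C >> 20);    /* MSB of the new byte is the stuffing position, */
          C &= 0xFFFFF;               /* normally zero, where a carry gets trapped */
          CT = 7;                     /* only 7 fresh code bits fit into this byte */
      } else {
          if (C >= 0x8000000) {       /* pending carry: complete it into the buffered byte */
              B++;
              C &= 0x7FFFFFF;
              if (B == 0xFF) {        /* the carry produced 0xFF: stuff the next byte */
                  emit_byte(B);
                  B  = (uint8_t)(C >> 20);
                  C &= 0xFFFFF;
                  CT = 7;
                  return;
              }
          }
          emit_byte(B);               /* regular case: take a full 8 bits */
          B  = (uint8_t)(C >> 19);
          C &= 0x7FFFF;
          CT = 8;
      }
  }
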
2.5 Bypass mode
The CABAC in H.264/AVC provides a bypass mode which, unlike the regular one, assumes a uniform probability distribution of the submitted symbols and therefore skips the probability state updating routine. Since bypassed symbols contribute to the same code stream as regular ones, it is natural to use the same resources under the same timing constraints. The interval register remains unchanged in bypass mode. This property, in conjunction with the absence of probability adaptation, makes it possible to process bypassed and regular symbols in parallel. The probability estimate corresponds to a single right shift (division by two) of the interval value; therefore, one additional bit is needed to represent this estimate accurately.
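
A minimal C sketch of this idea, reusing put_bit and bits_outstanding from the earlier sketch, is given below. The doubled thresholds (0x200, 0x400) reflect the extra bit of precision obtained by shifting the base register instead of halving the interval; the routine name and register widths are again our assumptions.

  /* Bypass encoding sketch: the interval register stays untouched,
   * the base register is doubled to gain one bit of precision. */
  static void encode_bypass(uint32_t *low, uint32_t range, int bin)
  {
      *low <<= 1;                 /* append one bit of precision */
      if (bin)
          *low += range;          /* select the upper half of the interval */

      if (*low >= 0x400) {        /* carry out of the doubled register */
          put_bit(1);
          *low -= 0x400;
      } else if (*low < 0x200) {  /* no carry possible */
          put_bit(0);
      } else {                    /* undecided: count as outstanding */
          bits_outstanding++;
          *low -= 0x200;
      }
  }
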
In the case of JPEG 2000, the bypass mode forwards symbols directly to the output stream without arithmetic encoding. In hardware, this approach allows the throughput to be increased up to a rate determined by the performance of the bit-plane coder, which supplies input data to the CABAC module. However, the overall improvement is not very significant, since the standard enables the bypass mode only for some coding passes over the lowest bit planes.
3 EVALUATION
3.1 System-level constraints
Since the CABAC in H.264/AVC produces a single code stream for an entire slice, all context-symbol data have to be delivered to one functional block. Its speed therefore determines the overall performance of the coder when input data are received continuously. Lower bit rates reduce the throughput demands. Rate-distortion optimisation for each macroblock improves quality at the same compression ratio, but obtaining the rates requires carrying out arithmetic coding (when used) for all candidate coding modes. This burdens the CABAC with a large number of computations and may impose tight timing constraints on the encoder.

JPEG 2000 supports entropy-coding parallelism by independently analysing rectangular blocks of coefficients in the wavelet domain. Each such code block generates a separate output stream, which can be truncated at certain points to increase the compression ratio at the expense of quality losses in the reconstructed image. Moreover, a special mode drives the arithmetic coder to terminate the stream at these points. For the rate control policy it is desirable to produce more output than is finally kept, so that its less significant parts can be discarded according to the optimisation criteria. Faster CABAC engines are therefore needed to support this property.
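
As a schematic illustration of this block-level parallelism, the C sketch below gives every code block its own coder state and output buffer, so that the coding loop iterations are independent and could be distributed over parallel entropy-coder engines. All types and function names here are hypothetical.

  /* Code-block level parallelism in JPEG 2000 (schematic sketch). */
  typedef struct {
      const int     *coeffs;           /* wavelet coefficients of one code block */
      unsigned char  stream[4096];     /* per-block code stream */
      int            length;           /* bytes produced */
      int            trunc_points[32]; /* candidate truncation points (one per pass) */
      int            num_passes;
  } code_block_t;

  /* Bit-plane coder plus arithmetic coder for a single block (not shown). */
  void encode_code_block(code_block_t *cb);

  void encode_subband(code_block_t *blocks, int n)
  {
      /* Each iteration touches only its own block, so the blocks may be
       * mapped onto several arithmetic-coder engines running in parallel. */
      for (int i = 0; i < n; i++)
          encode_code_block(&blocks[i]);

      /* Rate control can later cut each stream back to one of its recorded
       * truncation points, trading quality for compression ratio. */
  }
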
3.2 Evaluation conditions
Evaluations have been conducted on several video sequences, taking into account the number of binary symbols submitted to the arithmetic coder in both standards. We examined test cases at CIF and QCIF resolutions. As the reference software for the image compression scheme, we employed JJ2000 version 5.1, adapted to support video material as Motion JPEG 2000 (ISO/IEC 15444-3, 2002). To obtain the characteristics, the following options were used: no tiling, five levels of wavelet decomposition, the 9/7 wavelet filter, a code-block size of 64 × 64 samples, regular coding mode, and a single quality layer. Explicit quantization by step size made it possible to vary both