
The next subsections describe each step of the encoding/decoding procedure in more detail.
2.1 Step 1: Prediction and Differential Block
The DPCM step is based on the idea of reducing the entropy of the source by coding the difference between the current value and a prediction of that value. In our approach the prediction function operates on the 2-dimensional vectors obtained by applying VQ. In particular, let (V_i, V_j) be the vector to be coded. The second component of the previous vector (V_{i-1}, V_{j-1}) is used to build a vector with two identical components, (V_{j-1}, V_{j-1}), which is the predictor for (V_i, V_j). This strategy has been chosen because V_{j-1} is spatially closer to (V_i, V_j) than V_{i-1}, and closer samples are usually more strongly correlated. The error vector (e_i, e_j) is computed as the difference between the vector to be coded and the prediction vector:
(e_i, e_j) = (V_i, V_j) - (V_{j-1}, V_{j-1}) = (V_i - V_{j-1}, V_j - V_{j-1}).
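As an illustration, the prediction and differential step can be sketched in Python as follows; the function and variable names are ours, not the paper's, and the sample values in the example are arbitrary.

def prediction_error(current, previous_reconstructed):
    """Compute the DPCM error vector (e_i, e_j) for the current 2-D vector."""
    v_i, v_j = current
    # Only the second component V_{j-1} of the previous (reconstructed)
    # vector is used: the predictor is (V_{j-1}, V_{j-1}).
    _, v_j_prev = previous_reconstructed
    return (v_i - v_j_prev, v_j - v_j_prev)

# Example: previous reconstructed vector (512, 520), current vector (518, 515)
print(prediction_error((518, 515), (512, 520)))   # -> (-2, -5)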
A typical error distribution for this kind of prediction scheme is shown in Figure 2.
Figure 2: DPCM error spreading.
Observe that a very high percentage of values falls near the origin, so the vector quantizer has been optimized for this distribution.
Note that the prediction function takes as input the reconstructed values, i.e. the same values that feed the predictor block in the decompressor. In this way no error propagation occurs in the compression-decompression system (Figure 1 and Figure 7).
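To make the closed loop explicit, a minimal sketch is given below: the encoder reconstructs each error exactly as the decoder will, and feeds that reconstruction back into the predictor. Here quantize_error and reconstruct_error are hypothetical placeholders standing in for the vector-mapping and VQ stages described next, and the initial predictor value is an arbitrary assumption.

def encode_sequence(vectors, quantize_error, reconstruct_error, initial=(512, 512)):
    """Closed-loop DPCM: the predictor uses reconstructed values, so the
    encoder and decoder stay synchronized and errors do not propagate."""
    codes = []
    prev_rec = initial                    # reconstructed previous vector
    for v_i, v_j in vectors:
        _, p = prev_rec                   # predictor component V_{j-1}
        err = (v_i - p, v_j - p)          # differential step
        code = quantize_error(err)        # vector mapping + VQ
        codes.append(code)
        e_rec = reconstruct_error(code)   # same reconstruction the decoder does
        prev_rec = (p + e_rec[0], p + e_rec[1])
    return codes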
2.2 Step 2: Vector Mapping
The prediction error distribution described above exhibits an evident odd symmetry, which can be exploited to reduce the size of the table used for the VQ. In fact, the vectors falling in the third and fourth quadrants can be mapped into the first and second ones, so only the two upper quadrants have to be quantized.
This task is performed by the “Vector Mapping” block: it checks whether the input vector falls in the upper half of the diagram or not. In the first case no change occurs, while in the other case the sign of both components is flipped, as shown in Figure 3. One bit is used in the compressed code to account for this mapping (see Paragraph 2.4 for details).
Figure 3: Vector Mapping.
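A minimal sketch of this mapping (our own illustration, assuming e_j is the component plotted on the vertical axis) returns the possibly flipped vector together with the one-bit flag:

def vector_mapping(err):
    """Map an error vector into the upper half-plane (e_j >= 0).
    Returns the mapped vector and a flag bit: 0 if unchanged,
    1 if the signs of both components were flipped."""
    e_i, e_j = err
    if e_j >= 0:                 # already in the first or second quadrant
        return (e_i, e_j), 0
    return (-e_i, -e_j), 1       # exploit the odd symmetry of the distribution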
2.3 Step 3: Vector Quantization
Given a vector (X
1
,...,X
n
) of size N, basic concept of
Vector Quantization (Gray et alii, 1998) can be
described geometrically. The associated binary
representation can be seen as a set of N coordinates
locating a unique point in the N-dimensional space.
The quantization is performed partitioning the space
with N-dimensional cells (e.g. hyperspheres or
hypercubes) with no gaps and no overlaps. As the
point defined by the input vector falls in one of these
cells, the quantization process returns a single vector
associated with the selected cell. Finally, such vector
is mapped to a unique binary representation, which is
the actual output of the vector quantizer. This binary
representation (code) can have fixed or variable
length.
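For illustration, a generic nearest-codevector search over a toy codebook (not the codebook used in the paper) can be written as:

def quantize(vector, codebook):
    """Return the index of the codevector closest to the input vector
    (squared Euclidean distance); the index is the emitted binary code."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda k: dist2(vector, codebook[k]))

# Toy 2-D codebook concentrated near the origin, where most prediction
# errors fall; the actual codebook is designed around that distribution.
codebook = [(0, 0), (2, 0), (0, 2), (8, 8)]
print(quantize((1, 1), codebook))   # -> 0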
A vector quantizer is said to be “uniform” if the same quantization step is applied to each vector element, so that the N-dimensional space is divided into regular cells. If the space is partitioned into regions of different size, corresponding to different quantization steps, the quantizer is called “non-uniform”. The “target” vector is called “codevector” and the set of all codevectors is the “codebook”. A grayscale 10-bit image is described by 2-dimensional vectors of brightness values falling into the range [0, 1023]. In the proposed model, each codevector