with their scheme providing additional gains. The
results are provided for selected bit rates.
Our experiments were motivated by the need for
a more thorough evaluation, beyond those reported
above, of the practical differences between coding
with wavelets and contourlets. The objectives were
defined as follows:
- compare wavelets and contourlets side by
side, as far as possible;
- provide a detailed analysis of scalability, in the
form of partial reconstruction of the coded
stream;
- provide an analysis of bit stream perturbations
(consistent with network errors, e.g. streaming
in a wireless network);
- use a range of error metrics for quantitative
evaluation of the introduced errors.
Our findings are:
- the wavelet-SPIHT method works better when
near-lossless operation is demanded
(smaller sequence sizes), whereas when a
given level of quality loss is acceptable, the
contourlet method gives better image quality
for a smaller sequence size;
- it is possible to parameterize the SPIHT algorithm
to obtain images at a definite quality level;
- when a certain level of image quality loss is
accepted, some subsequences (refinement
parts) of the output sequence can be treated as
low-priority data.
This work is organized as follows: the second
section presents our approach and experimental
framework, the third section presents the experiment
results, and the last section presents conclusions.
2 THE METHOD
We investigate a coding system consisting of two
elements: an image transform and a bit coder. For the
transform, we use interchangeably the wavelet
transform (implemented as the standard 9/7 filter bank,
as in (JPEG, 2000)) and the contourlet transform
(implemented with the surfacelet transform, following (Lu
and Do, 2007)). For bit coding, we use the SPIHT
algorithm (Said and Pearlman, 1996). This choice is
motivated by the good results reported, but also by the
difficulties with extending other algorithms, such as
EBCOT, to contourlets (Głomb et al., 2007). The
implementation was done in C++.
We use a number of metrics to measure image
distortion. Mean square error (MSE) and peak
signal-to-noise ratio (PSNR) are defined as:
PSNR = 10 \log_{10} \frac{k^2}{MSE}    (1)

where:

MSE = \frac{1}{M \cdot N} \sum_{i=1}^{N} \sum_{j=1}^{M} [f(i,j) - f'(i,j)]^2    (2)
k – number of image colors minus 1;
N, M – dimensions of the image;
f(i,j) – input image;
f'(i,j) – output image.
We also use the average per-pixel error (denoted
PERPX), and measures related to edge degradation.
The latter are included because it has been observed (Al-
Otum, 1998) that edge degradation in video coding
is an important component of human quality
perception. To measure it, we first apply
a Sobel edge operator to the reference and distorted
images, then compute the MSE and NCC (normalized
cross-correlation) of the edge images.
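A minimal sketch of this edge-based measurement follows; it is our own illustration, not the paper's implementation, using the standard 3x3 Sobel kernels and the usual NCC definition:

```cpp
#include <cmath>
#include <vector>

// Sobel gradient magnitude of an N x M row-major image;
// border pixels are left at 0 for simplicity.
std::vector<double> sobel(const std::vector<double>& img, int N, int M) {
    std::vector<double> out(N * M, 0.0);
    for (int i = 1; i < N - 1; ++i)
        for (int j = 1; j < M - 1; ++j) {
            double gx = img[(i-1)*M + j+1] + 2*img[i*M + j+1] + img[(i+1)*M + j+1]
                      - img[(i-1)*M + j-1] - 2*img[i*M + j-1] - img[(i+1)*M + j-1];
            double gy = img[(i+1)*M + j-1] + 2*img[(i+1)*M + j] + img[(i+1)*M + j+1]
                      - img[(i-1)*M + j-1] - 2*img[(i-1)*M + j] - img[(i-1)*M + j+1];
            out[i*M + j] = std::sqrt(gx * gx + gy * gy);
        }
    return out;
}

// Normalized cross-correlation of two edge images of equal size;
// identical (nonzero) inputs give 1.0.
double ncc(const std::vector<double>& a, const std::vector<double>& b) {
    double ab = 0, aa = 0, bb = 0;
    for (std::size_t k = 0; k < a.size(); ++k) {
        ab += a[k] * b[k];
        aa += a[k] * a[k];
        bb += b[k] * b[k];
    }
    return ab / std::sqrt(aa * bb);
}
```

The edge MSE is then obtained by applying Eq. (2) to the two Sobel magnitude images instead of the original images.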
The aim of this work was to evaluate image
quality as a function of the decoded sequence size and to
determine the resistance of the output linear
sequence to errors.
3 EXPERIMENTAL RESULTS
In the experiments we use the typical set of test
images (Baboon, Barbara, Boat, Goldhill, Lena,
Peppers), universally used in digital image studies.
3.1 Image Quality as a Function of the
Decoded Sequence Size
The SPIHT algorithm codes the most important
coefficient bits first, so image quality increases
progressively during decoding. Because of this, the graph of
image quality as a function of the size of the decoded
sequence is nonlinear.
During the experiments, the decoding of the
output sequence was stopped at specific points,
expressed as a percentage of the whole sequence size. The
single step was 2%.
By comparing the output images to the original ones,
the quality of the output images was evaluated and
the graph of image quality was obtained. Each graph
shows the average result of each measurement
method over the whole set of images.
The sizes of the output sequences for the contourlet and
wavelet decompositions were different; the sequences
produced by the contourlet method are considerably
larger. We calculated the value γ (3).
IMAGAPP 2009 - International Conference on Imaging Theory and Applications