by just a few wavelet expansion coefficients.
Wavelet analysis produces several important
benefits, particularly for image compression. First,
an unconditional basis causes the size of the
expansion coefficients to drop off with j and k for
many signals. Since wavelet expansion also allows a
more accurate local description and separation of
signal characteristics, the DWT is very efficient for
compression. Second, the great variety of wavelet types provides the flexibility to design wavelets to fit individual applications. The LL band at the
highest level can be classified as most important,
and the other 'detail' bands can be classified as of
lesser importance, with the degree of importance
decreasing from the top of the pyramid to the bands
at the bottom (figure 2) (Conci et al., 2008).
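As an illustration of this sub-band pyramid, a minimal sketch using the PyWavelets library is given below; the Haar wavelet, the two-level decomposition and the synthetic ramp image are arbitrary choices for demonstration, not the configurations evaluated in this work.

```python
import numpy as np
import pywt

# Synthetic 64x64 test image (a smooth ramp), purely for illustration.
image = np.outer(np.linspace(0.0, 1.0, 64), np.linspace(0.0, 1.0, 64))

# wavedec2 returns [LL_n, (LH_n, HL_n, HH_n), ..., (LH_1, HL_1, HH_1)]:
# the LL band at the highest level comes first, followed by the detail
# bands from the top of the pyramid down to the finest level.
coeffs = pywt.wavedec2(image, 'haar', level=2)

ll = coeffs[0]                                   # most important band
details_level2, details_level1 = coeffs[1], coeffs[2]
print(ll.shape, [d.shape for d in details_level2])
```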
The main goal of wavelet denoising is to distinguish between low- and high-energy regions and to modify the coefficients using an adaptive thresholding strategy. When noise is added to the
image, it tends to increase the magnitude of the
wavelet coefficients on average. In regions where the coefficients are expected to be dominated by noise, most of them should therefore be removed, since the noise is highly noticeable there. In regions of sharp
variations, the coefficients have a lot of energy due
to the signal, and some due to noise (which is not as
visible in these regions), thus they should be kept to
ensure that signal details are retained.
The wavelet thresholding approach removes noise from the detail coefficients while keeping the lowest-resolution (approximation) coefficients unaltered. Each noisy coefficient in the detail sub-bands is filtered with a nonlinear thresholding function. The challenge is to estimate the threshold value correctly in order to obtain good performance.
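The two classical nonlinear thresholding functions, hard and soft thresholding, can be sketched as follows; the example coefficients and the threshold value of 1.0 are arbitrary.

```python
import numpy as np

def hard_threshold(c, t):
    # Hard: zero out coefficients below t in magnitude, keep the rest as-is.
    return np.where(np.abs(c) >= t, c, 0.0)

def soft_threshold(c, t):
    # Soft: additionally shrink the surviving coefficients toward zero by t.
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

c = np.array([-3.0, -0.5, 0.2, 1.0, 4.0])
print(hard_threshold(c, 1.0))
print(soft_threshold(c, 1.0))
```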
Statistical approaches have been addressed for
wavelet-based denoising (or thresholding). Hard thresholding of wavelet coefficients was proposed by Donoho and Johnstone (1994). As an alternative, a denoising
algorithm via soft-thresholding has been developed
to remove noise from wavelet coefficients (Donoho,
1995). Many variants and improvements of these
works have appeared in the literature. It has been
shown that optimal thresholding can be carried out
based on the ratio between noise and original signal
variance at each decomposition sub-band. The threshold value T can then be well approximated by T = σ_n² / σ, where σ_n² is the noise variance and σ the standard deviation of the noise-free signal in the sub-band (Ruggeri and Vidakovic, 1998). This alternative is
implemented using the Wavelet Toolbox (Mathworks, 2001) for Matlab and used in our experiments to explore how the results depend on the image type and wavelet family.
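A minimal sketch of this per-sub-band thresholding is given below, using PyWavelets rather than the Matlab toolbox used in the experiments. The MAD-based noise estimate on the finest HH sub-band is a common practical choice (an assumption here, not necessarily the exact procedure of the cited work), and the synthetic image, noise level and wavelet are illustrative.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
clean = np.outer(np.linspace(0.0, 255.0, 128), np.ones(128))  # smooth test image
noisy = clean + rng.normal(0.0, 10.0, clean.shape)            # AWGN, sigma = 10

coeffs = pywt.wavedec2(noisy, 'bior4.4', level=3)

# Robust noise estimate from the finest HH sub-band (a common choice):
# sigma_n = median(|HH_1|) / 0.6745.
hh1 = coeffs[-1][2]
sigma_n = np.median(np.abs(hh1)) / 0.6745

def subband_threshold(band, sigma_n):
    # T = sigma_n^2 / sigma, with sigma the estimated standard deviation
    # of the noise-free coefficients in this sub-band.
    sigma = np.sqrt(max(np.var(band) - sigma_n ** 2, 1e-12))
    return sigma_n ** 2 / sigma

# Soft-threshold the detail sub-bands only; the LL band stays unaltered.
new_coeffs = [coeffs[0]]
for bands in coeffs[1:]:
    new_coeffs.append(tuple(
        pywt.threshold(b, subband_threshold(b, sigma_n), mode='soft')
        for b in bands))
denoised = pywt.waverec2(new_coeffs, 'bior4.4')
```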
3 EXPERIMENTAL RESULTS
The JPEG-2000 standard (selected in 2001 for
inclusion in the DICOM standards) is based on the
discrete wavelet transform using the Daubechies
(9,7) biorthogonal wavelet, also named the Cohen-
Daubechies-Feauveau 9/7 wavelet (Daubechies,
1992). Moreover, a coding/denoising procedure based on a thresholding function has been integrated into the JPEG 2000 Part II standard, which is designed to
support a variety of applications, including the
compression and transmission of medical images
(Stahl et al., 2000). But is this always the best choice in terms of quality? In this work we compared the results of 36 different variants of wavelet compression schemes to explore their differences in relation to image content and quality.
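A hypothetical sketch of wavelet-based compression by coefficient truncation is shown below, using the CDF 9/7 wavelet ('bior4.4' in PyWavelets, the filter pair adopted for lossy JPEG 2000). The 5% keep-ratio and the synthetic image are illustrative choices, not the exact schemes compared here.

```python
import numpy as np
import pywt

image = np.outer(np.linspace(0.0, 255.0, 128), np.linspace(0.0, 255.0, 128))

# Decompose, flatten all sub-bands into one array for global thresholding.
coeffs = pywt.wavedec2(image, 'bior4.4', level=3)
arr, slices = pywt.coeffs_to_array(coeffs)

# Keep only the 5% largest-magnitude coefficients, zero the rest.
threshold = np.percentile(np.abs(arr), 95)
arr_compressed = np.where(np.abs(arr) >= threshold, arr, 0.0)

coeffs_compressed = pywt.array_to_coeffs(arr_compressed, slices,
                                         output_format='wavedec2')
reconstructed = pywt.waverec2(coeffs_compressed, 'bior4.4')
```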
Two groups of images with different levels of complexity and content are used to evaluate the relation among fidelity, image content and noise level. The contents consist of humans, landscapes, everyday objects, textual information and synthetic objects.
The first group is formed by the natural images:
Lena, Cameraman, Goldhill, and Peppers. The
second group is formed by the synthetic images:
Circle, Checkerboard, Sinusoidal Gray Level and
Text. These images are used at three resolutions (128×128, 256×256 and 512×512). They are tested without noise and altered by Additive White Gaussian Noise (AWGN) at three noise levels, the first of which is 5. They are
reconstructed after compression and denoising by thresholding. These images were chosen as samples considering the main motif, the number of elements, and the richness or simplicity of the background. The performance of each approach is
evaluated by fidelity comparing the original versus
the same images after compression/denoising and
decompression. Figures 3 and 4 show the first and
third noise versions of these images, respectively. It
should be noted that only two of these images are
binary with well defined boundaries (Circle and
Checkerboard). Comparing figures 3 and 4, one can see that these images are extremely sensitive to noise.
Performance analysis was done using three objective
evaluation criteria: Root Mean Square Error
(RMSE), Signal to Noise Ratio (SNR) and Peak
Signal to Noise Ratio (PSNR).
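The three criteria can be computed as in the following sketch; the constant test image and error value are illustrative only.

```python
import numpy as np

def rmse(ref, test):
    # Root Mean Square Error between reference and test images.
    return float(np.sqrt(np.mean((ref - test) ** 2)))

def snr_db(ref, test):
    # Signal-to-Noise Ratio in dB: signal power over error power.
    return float(10.0 * np.log10(np.mean(ref ** 2) / np.mean((ref - test) ** 2)))

def psnr_db(ref, test, peak=255.0):
    # Peak SNR in dB, with peak the maximum possible pixel value.
    return float(20.0 * np.log10(peak / rmse(ref, test)))

ref = np.full((8, 8), 100.0)
test = ref + 10.0                 # constant error of 10 gray levels
print(rmse(ref, test))            # 10.0
print(snr_db(ref, test))          # 20.0
```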
Small RMSE means better results: the denoised image is close to the original. High SNR values mean lower error, which translates into a high PSNR value. The main
drawback of using RMSE and SNR as a measure of
image quality is that in many instances these values
do not match the quality perceived by the human