nite differences, so many contours of small objects or small structures are preserved. In (Alvarez et al., 1992), diffusion is isotropic in homogeneous regions but decreases and becomes anisotropic near boundaries. Gaussian filtering is used for gradient estimation, so the control of the diffusion is more robust to noise. Nevertheless, it remains difficult to distinguish between noise, texture and small objects that need to be preserved by the diffusion process. In color image restoration, several restoration models exist (Sapiro and Ringach, 1996) (Blomgren and Chan, 1998) (Tschumperlé and Deriche, 2001). These models make use of color gradient norms (Di Zenzo, 1986) in order to control the diffusion at corner points. The three color channels should not be diffused independently, so as not to lose the coupled diffusion (see, for example, (Blomgren and Chan, 1998)).
In this paper, we present a rotating filter (developed by (Montesinos and Magnier, 2010)) able to detect textures in vector images. We then introduce a new method which accurately controls the diffusion near edges and corner points. In particular, our detector provides two different directions on edges, thus preserving corners. This information allows an anisotropic diffusion in these directions, contrary to (Alvarez et al., 1992) and (Tschumperlé and Deriche, 2001), where only one direction is considered.
We first present in Section 2 our rotating smoothing filter. A new pixel classification using a bank of filtered images is introduced in Section 3. Our anisotropic diffusion scheme is introduced in Section 4, and we extend the anisotropic diffusion to color images in Section 5. We discuss our method in Section 6. Section 7 is devoted to experimental results and Section 8 concludes this paper.
2 ROTATING FILTER
In our method, for each pixel of the original image, we use rotating filters in order to build a signal s which is a function of a rotation angle θ and the underlying signal. Smoothing with rotating filters means that the image is smoothed with a bank of rotated anisotropic Gaussian kernels:
\[
G_\theta(x, y) = C \cdot H\!\left(P_\theta \begin{pmatrix} x \\ y \end{pmatrix}\right)
\exp\!\left(-\begin{pmatrix} x & y \end{pmatrix} P_\theta^{-1}
\begin{pmatrix} \frac{1}{2\lambda_1^2} & 0 \\ 0 & \frac{1}{2\lambda_2^2} \end{pmatrix}
P_\theta \begin{pmatrix} x \\ y \end{pmatrix}\right)
\]
where C is a normalization coefficient, P_θ a rotation matrix of angle θ, x and y are pixel coordinates, and λ_1 and λ_2 the standard deviations of the Gaussian filter.
As we need only the causal part of the filter (illustrated in figure 2(a)), we simply "cut" the smoothing kernel in the middle; this operation corresponds to the Heaviside function H. By convolution with these rotated kernels (see figure 2(b)), we obtain a collection of directionally smoothed images I_θ = I ∗ G_θ.
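For illustration, a direct (non-optimized) implementation of this kernel and of the convolution I_θ = I ∗ G_θ could look as follows; the discrete support size and the parameter values are indicative only, not prescribed by the method:

import numpy as np
from scipy.ndimage import convolve

def half_anisotropic_gaussian(theta, lambda1, lambda2, size=31):
    """Causal (half) anisotropic Gaussian G_theta; 'size' is an indicative support."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Coordinates in the rotated frame, i.e. P_theta applied to (x, y)
    xr = np.cos(theta) * x + np.sin(theta) * y
    yr = -np.sin(theta) * x + np.cos(theta) * y
    g = np.exp(-(xr**2 / (2.0 * lambda1**2) + yr**2 / (2.0 * lambda2**2)))
    g[xr < 0] = 0.0          # Heaviside H: keep only the causal half of the kernel
    return g / g.sum()       # normalization coefficient C

def directional_smoothing(image, theta, lambda1=5.0, lambda2=1.5):
    """I_theta = I * G_theta (the lambda values here are indicative)."""
    return convolve(image, half_anisotropic_gaussian(theta, lambda1, lambda2),
                    mode='nearest')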
For computational efficiency, in a first step we rotate the image at discretized orientations from 0 to 360 degrees (with a step ∆θ of 1, 2, 5, or 10 degrees, depending on the angular precision needed and the smoothing parameters) before applying non-rotated smoothing filters with standard deviations λ_1 and λ_2 (illustrated in figure 2(a)). As the image is rotated instead of the filters, the filtering implementation is quite straightforward (Deriche, 1993) (Montesinos and Magnier, 2010). In a second step, we apply the inverse rotation to the smoothed image and obtain a bank of 360/∆θ images.
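As an illustration, this rotate/smooth/unrotate pipeline could be sketched as follows; scipy's rotation and Gaussian routines stand in for the recursive filtering of (Deriche, 1993), and the values of ∆θ, λ_1 and λ_2 are indicative only:

import numpy as np
from scipy.ndimage import rotate, convolve1d, gaussian_filter1d

def half_gaussian_kernel(lmbda, truncate=3.0):
    """1-D Gaussian of std-dev lmbda kept only on its causal side (Heaviside cut)."""
    half = int(truncate * lmbda + 0.5)
    x = np.arange(-half, half + 1)
    k = np.exp(-x**2 / (2.0 * lmbda**2))
    k[x < 0] = 0.0
    return k / k.sum()

def smoothed_bank(image, delta_theta=10, lambda1=5.0, lambda2=1.5):
    """Return the 360/delta_theta directionally smoothed images I_theta."""
    bank = []
    for theta in range(0, 360, delta_theta):
        rot = rotate(image, angle=theta, reshape=True, mode='nearest')
        # Non-rotated anisotropic smoothing: causal half-Gaussian along x,
        # full Gaussian along y
        sm = convolve1d(rot, half_gaussian_kernel(lambda1), axis=1, mode='nearest')
        sm = gaussian_filter1d(sm, sigma=lambda2, axis=0, mode='nearest')
        # Inverse rotation, then crop back to the original frame
        back = rotate(sm, angle=-theta, reshape=True, mode='nearest')
        dy = (back.shape[0] - image.shape[0]) // 2
        dx = (back.shape[1] - image.shape[1]) // 2
        bank.append(back[dy:dy + image.shape[0], dx:dx + image.shape[1]])
    return np.stack(bank)               # shape: (360 / delta_theta, H, W)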
Figure 2: A smoothing rotating filter. (a) Smoothing filter (standard deviations λ_1 and λ_2); (b) rotating filters (orientation θ).
3 PIXEL CLASSIFICATION
In the following, the image will be represented as a function defined as:
\[ I(x_1, x_2) : \mathbb{R}^2 \rightarrow \mathbb{R}^d \]
The case d = 1 corresponds to grey-level images; the case d = 3 corresponds to color images.
3.1 Pixel Signals
In this subsection we consider the case d = 1. Applying the rotating filter at one point of an image and making a 360-degree scan provides each pixel with a characterizing signal. The pixel signal is a single function s(θ) of the orientation angle θ. Figure 4 is an example of s-functions measured at 8 points located on the image of figure 3. Each plot of figure 4 represents, in polar coordinates, the function s(θ) at a particular point. From these pixel signals, we now extract the descriptors that will discriminate edges and regions.
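Concretely, s(θ) at a given pixel is simply the corresponding column of the bank of smoothed images built in Section 2; a minimal sketch, assuming the bank is stored as an array of shape (360/∆θ, H, W):

import numpy as np

def pixel_signal(bank, row, col, delta_theta=10):
    """s(theta) at pixel (row, col), read from the bank of directionally
    smoothed images stored as an array of shape (360/delta_theta, H, W)."""
    thetas = np.deg2rad(np.arange(0, 360, delta_theta))
    return thetas, bank[:, row, col]

# Example (the pixel coordinates are placeholders), plotted in polar
# coordinates as in figure 4:
#   import matplotlib.pyplot as plt
#   thetas, s = pixel_signal(bank, row=120, col=85)
#   plt.polar(thetas, s); plt.show()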
In the case of a pixel in a homogeneous region,
s(θ) will be constant (see figure 4 point 2). On the
contrary, in a textured region, s(θ) will be stochastic