usual image retrieval systems, in that it consists of
two main parts: "Feature Extraction" and "Search
and Retrieval".
In the retrieval system, after the query image is
received, its features are extracted by the "Feature
Extraction" part, and the resulting feature vector is
compared against the entire database using a
similarity measure; that is, the similarity between the
feature vector of the query image and those of all
database images is computed. Finally, the k images
nearest to the query image are returned.
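The search step described above can be sketched as a simple nearest-neighbour lookup over precomputed feature vectors; Euclidean distance is assumed here as the similarity measure, since that is the distance the paper later uses for its texture vectors:

```python
import numpy as np

def retrieve(query_features, database_features, k=5):
    """Return indices of the k database images nearest to the query.

    database_features: (n_images, n_features) array of precomputed vectors.
    Euclidean distance is used as the (dis)similarity measure (assumption).
    """
    dists = np.linalg.norm(database_features - query_features, axis=1)
    return np.argsort(dists)[:k]

# Usage: four database images with 3-d feature vectors
db = np.array([[0.10, 0.20, 0.30],
               [0.90, 0.80, 0.70],
               [0.15, 0.25, 0.35],
               [0.50, 0.50, 0.50]])
query = np.array([0.10, 0.20, 0.30])
print(retrieve(query, db, k=2))  # indices of the two nearest images
```

In a real system the database vectors would be indexed (e.g. with a tree or hashing structure) rather than scanned linearly, but the linear scan matches the description given here.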
2.1 Feature Extraction Unit
The "Feature Extraction" unit is a key part of image
database systems. Depending on the method used
and the application field, various features can be
extracted.
2.2 Color Feature Extraction
For color feature extraction, we must first choose a
suitable color model. For this work, we chose the Lab
color model because of its perceptual uniformity
(equal distances in the color space correspond to
equal perceived differences in color). In this method,
the image is first divided into equal-size blocks, and
from each block some features are extracted in the
Lab space. For image blocking, the most important
issue is the block size. To find an appropriate block
size, many simulations were carried out;
consequently, a 10x10 grid is placed over each
image. In each block, three color moments are
computed per channel (9 moments in total). These
moments are chosen because they are very efficient
for quick search in image retrieval systems and are
also scale and rotation invariant.
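The text does not name the three moments; a common choice in color-moment retrieval (assumed here) is the mean, standard deviation, and a signed cube root of the third central moment (a skewness-like descriptor). A minimal sketch for a Lab image split over a 10x10 grid:

```python
import numpy as np

def block_color_moments(block):
    """Three color moments (mean, std, skewness-like) per channel.

    block: (h, w, 3) array of Lab pixel values.
    Returns a 9-element vector: 3 moments for each of L, a, b.
    """
    moments = []
    for c in range(3):
        chan = block[:, :, c].astype(float).ravel()
        mu = chan.mean()
        sigma = chan.std()
        # signed cube root of the third central moment (assumption:
        # the paper does not specify the exact third moment used)
        third = np.mean((chan - mu) ** 3)
        skew = np.sign(third) * np.abs(third) ** (1.0 / 3.0)
        moments.extend([mu, sigma, skew])
    return np.array(moments)

def image_color_features(lab_image, grid=10):
    """Place a grid x grid block layout over the image and stack moments."""
    h, w, _ = lab_image.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            blk = lab_image[i * h // grid:(i + 1) * h // grid,
                            j * w // grid:(j + 1) * w // grid]
            feats.append(block_color_moments(blk))
    return np.concatenate(feats)  # grid * grid * 9 values
```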
Before putting these moment values into a
histogram, we normalize them using (1).
X_norm = (X - X_min) / (X_max - X_min)    (1)
where X_max and X_min are the maximum and
minimum among all values.
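Equation (1) is standard min-max normalization, mapping every value into [0, 1]:

```python
import numpy as np

def min_max_normalize(values):
    """Min-max normalization as in (1): maps values into [0, 1]."""
    values = np.asarray(values, dtype=float)
    vmin, vmax = values.min(), values.max()
    if vmax == vmin:  # avoid division by zero for constant input
        return np.zeros_like(values)
    return (values - vmin) / (vmax - vmin)

print(min_max_normalize([2.0, 4.0, 6.0]))  # [0.  0.5 1. ]
```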
We used three 4-d histograms, such that each
histogram includes the moments of the L, a, and b
channels. In this way the spatial relation between
these values is preserved for each pixel, and thus the
relative quality of the results improves.
2.3 Texture Feature Extraction
To take advantage of both global and local
characteristics of the image, we use two methods for
texture feature extraction.
For global texture we use the Tamura texture
features (Tamura et al., 1978). These are six features
that correspond to human visual perception:
coarseness, contrast, directionality, line-likeness,
regularity, and roughness. Experiments testing the
importance of these features with respect to human
perception showed that the first three are very
significant, while the last three are correlated with
them and do not improve the results much
(Bergman, 2002). Therefore, in the proposed work
we use coarseness, contrast, and directionality. We
extract these features from each image and
normalize them using (1). Finally, a 3-dimensional
feature vector is generated for each image in the
database, and these vectors are compared using the
Euclidean distance.
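Coarseness and directionality involve multi-scale neighbourhood averaging and gradient histograms, which are too long to reproduce here; as an illustration, the Tamura contrast measure alone, in its common formulation (std divided by the fourth root of the kurtosis), together with the Euclidean comparison of the resulting 3-d vectors, can be sketched as:

```python
import numpy as np

def tamura_contrast(gray):
    """Tamura contrast: sigma / kurtosis^(1/4), kurtosis = mu4 / sigma^4.

    gray: 2-D array of grayscale intensities. Formulation assumed from
    the common Tamura definition; the paper does not restate it.
    """
    g = gray.astype(float).ravel()
    mu = g.mean()
    sigma = g.std()
    if sigma == 0:
        return 0.0  # flat image has no contrast
    alpha4 = np.mean((g - mu) ** 4) / sigma ** 4  # kurtosis
    return sigma / alpha4 ** 0.25

def euclidean(u, v):
    """Euclidean distance between two feature vectors."""
    return float(np.linalg.norm(np.asarray(u, float) - np.asarray(v, float)))
```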
For local texture features we use Gabor filters.
Gabor filters have been widely used for texture
analysis (Jain and Farrokhnia, 1991); (Daugman,
1988). Here we use the mean and standard deviation
descriptors derived from the Gabor responses. We
extract Gabor features at four orientations and four
scales, which leads to 32 values.
Prior to this, it is necessary to divide the image
into blocks. Unlike the color features, for which
square blocks were the best option, such blocks are
not suitable for texture modelling. Rectangular
blocks are a good choice because in many images,
especially natural ones, rectangular strips are
detected. Thus we segment each image into 20
horizontal and 20 vertical rectangular blocks. The
block width is 16, and the block length equals the
width or height of the image, for horizontal and
vertical blocks respectively. Hence 40*32=1280
values are extracted from each image, and we
normalize them using (1).
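The paper does not give its Gabor parameterization; the sketch below uses assumed scale and wavelength progressions and a real-valued Gabor kernel, applied to one strip via FFT-based convolution, producing the 4 orientations x 4 scales x 2 descriptors = 32 values per strip:

```python
import numpy as np

def gabor_kernel(sigma, theta, wavelength, size=15):
    """Real-valued Gabor kernel; parameter choices are assumptions."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    gauss = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return gauss * np.cos(2 * np.pi * xr / wavelength)

def gabor_strip_features(strip, n_orient=4, n_scale=4):
    """Mean and std of Gabor responses: n_orient * n_scale * 2 = 32 values."""
    h, w = strip.shape
    feats = []
    for s in range(n_scale):
        sigma = 2.0 * (s + 1)        # assumed scale progression
        wavelength = 4.0 * (s + 1)   # assumed wavelength per scale
        for o in range(n_orient):
            theta = o * np.pi / n_orient
            k = gabor_kernel(sigma, theta, wavelength)
            kh, kw = k.shape
            # FFT-based full convolution, then crop to 'same' size
            H, W = h + kh - 1, w + kw - 1
            prod = np.fft.rfft2(strip, (H, W)) * np.fft.rfft2(k, (H, W))
            full = np.fft.irfft2(prod, (H, W))
            resp = full[kh // 2:kh // 2 + h, kw // 2:kw // 2 + w]
            feats.extend([resp.mean(), resp.std()])
    return np.array(feats)
```

A horizontal strip of a grayscale image (e.g. shape 16 x image_width) would be passed directly as `strip`; repeating this over all 40 strips yields the 1280 values described above.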
3 THE ADAPTIVE FUZZY
MODEL
An issue that attracted our attention was that the
extracted strips do not all have equal weights; in
other words, our belief in the importance of the
various strips differs. This encouraged us to use
fuzzy logic to model this part of the work.
Generally, in each image the most important
information is concentrated at the centre, and as we
move away from the centre the importance of the
regions decreases; hence the significance of the
strips decreases correspondingly. To model this, we
define two membership functions (MFs) for each
image, one on the x-axis for vertical blocks and the
other on the y-axis for