Figure 2: Detection of the regions’ nature. Column 1: original images; column 2: detection result (textured regions are in black).
texture features, namely: the first four FOS (first order statistics), i.e. mean, variance, skewness and kurtosis, and a set of 19 texture features obtained after reduction of 30 features by the method described in (Rosenberger and Cariou, 2001).
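As an illustration of this feature extraction step, the following sketch (Python with NumPy/SciPy; the window size and helper names are our own choices) computes the four FOS on a sliding window around every pixel; the 19 reduced texture features of (Rosenberger and Cariou, 2001) are not reproduced here.

# Sketch: four first-order statistics (FOS) computed on a sliding window
# around every pixel (window_size and names are illustrative choices).
import numpy as np
from scipy.stats import skew, kurtosis

def fos_features(patch):
    """Mean, variance, skewness and kurtosis of a grey-level patch."""
    values = patch.ravel().astype(float)
    return np.array([values.mean(), values.var(),
                     skew(values), kurtosis(values)])

def fos_feature_image(image, window_size=9):
    """4-dimensional FOS feature vector for every pixel of a 2-D image."""
    half = window_size // 2
    padded = np.pad(image, half, mode="reflect")
    h, w = image.shape
    features = np.zeros((h, w, 4))
    for i in range(h):
        for j in range(w):
            features[i, j] = fos_features(
                padded[i:i + window_size, j:j + window_size])
    return features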
2.2.2 Adaptive Classification
In order to obtain an automatic segmentation system, we have chosen to perform the segmentation via an unsupervised classification approach. For this, we have selected three classifiers, namely the classical k-means (MacQueen, 1967), the fuzzy c-means (FCM, (Bezdek, 1981)), and a modified version of the Linde-Buzo-Gray classifier (LBG, (Linde et al., 1980)) described in (Rosenberger and Chehdi, 2003).
The choice of these techniques is motivated by their good behavior for the unsupervised classification of large datasets, which is of interest for instance in multispectral image segmentation. To simplify our system, we use only the FCM algorithm to classify the pixels which belong to the uniform regions previously detected.
For the textured regions, we have chosen to set up a competition between the three retained classifiers. This means that the pixels which belong to the textured regions are classified in parallel by the three algorithms, providing three different classification results and corresponding segmentations. The resulting partitions are then analyzed through an assessment procedure in order to keep the most coherent ones.
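A possible organisation of this competition is sketched below; scikit-learn's KMeans stands in for the three methods, since the fuzzy c-means and the modified LBG of (Rosenberger and Chehdi, 2003) are not reproduced here, and the interface and function names are our own assumptions.

# Sketch of the competition on the textured pixels: each competing
# classifier clusters the same feature vectors and returns a label image.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_partition(feature_image, textured_mask, n_clusters, seed=0):
    """Cluster only the textured pixels; return a full-size label image
    in which non-textured pixels are set to -1."""
    samples = feature_image[textured_mask]          # (n_pixels, n_features)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(samples)
    partition = np.full(textured_mask.shape, -1, dtype=int)
    partition[textured_mask] = labels
    return partition

def competitive_classification(feature_image, textured_mask, n_clusters,
                               classifiers):
    """Run every competing classifier on the same data; returns one label
    image per method, e.g. {"kmeans": ..., "fcm": ..., "lbg": ...}."""
    return {name: clf(feature_image, textured_mask, n_clusters)
            for name, clf in classifiers.items()}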
2.2.3 Assessment of Classification Results
The assessment of a classification result requires the definition of a measure of its coherence. In our system, we have adopted the intra-class disparity presented in (Rosenberger and Chehdi, 2003) as the coherence measure. We have evaluated this step on a set of 10 synthetic images (with ground truth) similar to those presented in Figure 2. More precisely, we have computed the correct classification rate and the corresponding assessment index obtained after processing the textured regions. The magnitudes of the correlation ratios between the two variables (FCM: 0.52; k-means: 0.78; LBG: 0.85) are high enough to motivate the use of the intra-class disparity as a measure of the validity of the clusters provided by the classification method.
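The exact intra-class disparity of (Rosenberger and Chehdi, 2003) is not reproduced here; the sketch below uses a generic within-cluster dispersion, normalised by the global dispersion of the data, as a stand-in coherence measure (lower values indicate more coherent clusters).

# Sketch: stand-in for the intra-class disparity, assuming feature vectors
# of shape (n_samples, n_features) and integer cluster labels.
import numpy as np

def intra_class_disparity(features, labels):
    """Average distance of the samples to their cluster centroid,
    normalised by the global dispersion of the data."""
    global_std = features.std()
    per_cluster = []
    for k in np.unique(labels):
        members = features[labels == k]
        centroid = members.mean(axis=0)
        per_cluster.append(np.linalg.norm(members - centroid, axis=1).mean())
    return np.mean(per_cluster) / (global_std + 1e-12)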
2.2.4 Fusion of Parallel Segmentation Results
Fusion is an important task in our system, in that it must retain the most reliable among the intermediate results. Many fusion methods can be considered (Bloch, 2003), but they generally require some prior knowledge or information which may not be available to the user in practice.
In this work, we introduce a fusion method for the textured regions, for which competitive classifications are set up. The fusion, which is very simple to implement, is based upon the assessment of the clusters derived from the previous step. Indeed, for every pixel within the textured regions, the output classification is taken as the result of the classification method which provided the best assessment index (i.e. the lowest intra-class disparity) among the three classification results (given by k-means, FCM, and LBG). Next, the fusion between the uniform and textured regions is performed by simply mapping the corresponding segmentations into a final result.
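A minimal sketch of this fusion rule, reusing the label images and the disparity helper sketched above (all names are ours, and label offsets between uniform and textured classes are left out for brevity): the textured pixels take the labels of the partition with the lowest intra-class disparity, while the uniform pixels keep their FCM labels.

# Sketch of the fusion step between uniform and textured regions.
import numpy as np

def fuse(uniform_labels, textured_partitions, feature_image, textured_mask):
    """uniform_labels: FCM label image; textured_partitions: dict
    name -> label image from the competing classifiers; textured_mask:
    boolean image of the textured pixels."""
    # Keep the partition with the best (lowest) assessment index.
    best = min(textured_partitions,
               key=lambda name: intra_class_disparity(
                   feature_image[textured_mask],
                   textured_partitions[name][textured_mask]))
    result = uniform_labels.copy()
    result[textured_mask] = textured_partitions[best][textured_mask]
    return result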
In the case of multispectral images, the fusion of the classification results obtained for each spectral band is merged into a final segmentation in a similar way, by accounting for the assessment index available for every region in each band.
3 EXPERIMENTAL RESULTS
To validate our approach, we have used three synthetic images from the image database described above, and remote sensing images acquired by a CASI multispectral sensor. In the case of synthetic images, Table 1 gives the mean rate of correct regions’ nature detection (RND) as well as the final mean classification rate obtained with such a prior detection. These results show the relevance and the efficiency of our approach of prior identification of the regions’ nature when compared to the blind approach, i.e. the use of the same classifier (here the FCM) to segment the whole image. Figure 3 depicts the segmentation results obtained for a 3-band CASI image. In this case, the RND and the different adapted classifica-