
The testing image can then be segmented. The size of the edge texture patch is deterministic, since an edge along a given direction occupies only a small number of pixel cells. The pattern can then be learnt and classified as in intensity analysis. Because a texture patch is used instead of the intensity of a single pixel, the pattern itself is far less susceptible to noise.
2 SYSTEM ARCHITECTURE
The whole system performs its analysis in two phases, namely the learning phase and the testing phase. During the learning phase, both the image and the edge information are analysed by the system. The system extracts texture features from the image; the representation of the texture features is discussed in Section 3. These features are clustered into groups, and the groups are then further classified into edge or non-edge texture according to the edge information given. The association between the texture features and the final groups is then established. The clustering algorithm is explored in Section 4. During the testing phase, the texture features of the testing image are extracted. Finally, classification is done by comparing the cluster means and the model energy and by applying the cluster association rule. Based on the classification result, the foreground can be extracted. The details are given in Section 5. The flow is summarized in Figure 1.
Figure 1: Logic flow of the proposed system.
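For readers who prefer code, a minimal outline of the two-phase flow is sketched below. Every helper passed in is a hypothetical placeholder standing in for a module described in Sections 3-5, not the authors' actual implementation.

```python
# Hypothetical outline of the two-phase flow; each helper callable is a
# placeholder for a module described in Sections 3-5.

def learning_phase(image, edge_map,
                   extract_texture_features, cluster_features, associate_clusters):
    features = extract_texture_features(image)            # MRF features (Section 3)
    clusters = cluster_features(features)                 # k-means clustering (Section 4)
    association = associate_clusters(clusters, edge_map)  # edge / non-edge labelling
    return clusters, association

def testing_phase(test_image, clusters, association,
                  extract_texture_features, classify_features, extract_foreground):
    features = extract_texture_features(test_image)
    # Compare against cluster means / model energy and apply the association rule (Section 5).
    labels = classify_features(features, clusters, association)
    return extract_foreground(test_image, labels)
```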
3 TEXTURE ANALYSIS BY MRF
The Markov Random Field (MRF) was first developed for texture analysis, e.g. (Cross and Jain, 1983). It can be used to describe a texture and to predict the intensity value of a pixel given the intensity values of its neighborhood. The theory of Markov Random Fields can be found in (Chellappa and Jain, 1993).
In a Markov Random Field, the neighborhood is defined through clique elements. Consider $S = \{s_1, s_2, \ldots, s_P\}$, the set of pixels inside the image, and $N = \{N_s \mid s \in S\}$, the set of neighborhoods of those pixels. In the system, the neighborhood of a pixel consists of the 8 pixels at chessboard distance 1 from the target pixel.
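As a concrete illustration (a minimal sketch, not the authors' code), the 8-neighbourhood at chessboard distance 1 can be expressed as a set of offsets:

```python
import numpy as np

# Offsets of the 8 neighbours at chessboard (Chebyshev) distance 1.
NEIGHBOUR_OFFSETS = [(dr, dc)
                     for dr in (-1, 0, 1)
                     for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0)]

def neighbours(image, r, c):
    """Return the intensities of the 8-neighbourhood of pixel (r, c),
    skipping offsets that fall outside the image."""
    h, w = image.shape
    return [image[r + dr, c + dc]
            for dr, dc in NEIGHBOUR_OFFSETS
            if 0 <= r + dr < h and 0 <= c + dc < w]
```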
Let $X = \{x_s \mid s \in S\}$ be the random variables (the intensity values) for every pixel inside an image, where $x_s \in L$ and $L = \{0, 1, \ldots, 255\}$. In addition, we have a class set for the texture pattern, $\Omega = \{\omega_{s_1}, \omega_{s_2}, \ldots, \omega_{s_P}\}$, where $\omega_{s_i} \in M$ and $M$ is the set of available classes. In the proposed system there are only two classes, the edge and the non-edge class.
In Markov chain analysis, the conditional probability of a pixel belonging to a certain class is given by the Gibbs distribution, according to the Hammersley-Clifford theorem. The density function is
$$\pi(\omega) = \frac{\exp\!\left(\frac{-U(\omega)}{T}\right)}{\sum_{\omega'} \exp\!\left(\frac{-U(\omega')}{T}\right)},$$
where $T$ is the temperature constant used in simulated annealing. The energy term can be further represented as
$$U(\omega, x_i) = V_1(\omega, x_i) + \sum_{i' \in N_i} \beta_{i,i'}\, \delta(x_i, x_{i'}),$$
where $V_1(\omega, x_i)$ is the potential for a pixel with a certain intensity value to belong to a certain class, and $\delta(x_i, x_{i'})$ is the normalised correlation between the pixel at $s_i$ and the pixel at $s_{i'}$.
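The sketch below shows how the energy and the Gibbs probability above could be evaluated numerically; the concrete choice of $V_1$ and the simple agreement term standing in for the normalised correlation $\delta$ are assumptions made only for illustration.

```python
import numpy as np

def energy(omega, x_i, neighbour_x, beta, V1):
    """U(omega, x_i) = V1(omega, x_i) + sum_{i'} beta_{i,i'} * delta(x_i, x_{i'})."""
    # Placeholder delta: 1 when the neighbour intensity matches x_i, else 0.
    # The paper uses a normalised correlation here instead.
    delta = np.array([1.0 if x_n == x_i else 0.0 for x_n in neighbour_x])
    return V1(omega, x_i) + float(np.dot(beta, delta))

def gibbs_probability(omega, x_i, neighbour_x, beta, V1, classes, T=1.0):
    """pi(omega) = exp(-U(omega)/T) / sum over classes of exp(-U/T)."""
    weights = {w: np.exp(-energy(w, x_i, neighbour_x, beta, V1) / T) for w in classes}
    return weights[omega] / sum(weights.values())

# Example with the two classes used in the system and an assumed V1.
V1 = lambda omega, x: 0.0 if omega == "edge" else 1.0   # illustrative potential only
beta = np.full(8, 0.5)                                   # one coefficient per neighbour
p_edge = gibbs_probability("edge", 120, [118] * 8, beta, V1, classes=("edge", "non-edge"))
```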
When a texture is learnt by the feature learning module, the set of $\beta_{i,i'}$ is estimated so that the probability of the associated texture class is maximised. The estimation algorithm used in the system is simulated annealing. The set of $\beta_{i,i'}$ corresponds to the correlation values and thus represents the configuration of pixels under which a patch is classified as that texture class. In the system, this set of estimated $\beta$ is used as the texture feature vector. It serves as the input to a support vector machine so that the association between texture feature and texture class can be formed.
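As an illustration of this last step, and assuming scikit-learn is available (the paper does not specify an SVM implementation), the estimated $\beta$ vectors could be fed to a support vector machine as follows; all numerical values are invented for the example.

```python
import numpy as np
from sklearn.svm import SVC

# Each row is an estimated beta vector (one coefficient per neighbour pair);
# each label is 1 for the edge class and 0 for the non-edge class.
beta_features = np.array([
    [0.9, 0.1, 0.8, 0.2, 0.7, 0.1, 0.9, 0.2],   # illustrative values only
    [0.1, 0.2, 0.1, 0.3, 0.2, 0.1, 0.2, 0.3],
])
labels = np.array([1, 0])

svm = SVC(kernel="rbf")
svm.fit(beta_features, labels)

# A new texture patch is classified from its estimated beta vector.
print(svm.predict([[0.8, 0.2, 0.7, 0.1, 0.8, 0.2, 0.9, 0.1]]))
```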
4 TEXTURE CLUSTERING
USING K-MEANS ANALYSIS
The k-means clustering algorithm has been widely used in applications that need unsupervised classification. Although there is a learning set in the proposed system, noise in the image greatly reduces the reliability of that learning set. To identify outliers in the learning set, unsupervised clustering is performed first. An implementation of the k-means clustering algorithm can be found in (Duda et al., 2000).
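A brief sketch of this unsupervised step, assuming scikit-learn's KMeans (the paper itself only points to the algorithm in (Duda et al., 2000)); the number of clusters is a free parameter not fixed by the text here.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_beta_features(beta_features, n_clusters=2, seed=0):
    """Cluster the estimated beta feature vectors.

    The returned cluster means are later compared against test features,
    and each cluster is associated with the edge or non-edge class using
    the supervised edge information."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    cluster_labels = km.fit_predict(np.asarray(beta_features))
    return cluster_labels, km.cluster_centers_
```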
After performing k-means clustering on the feature vectors $\beta$, supervised association is done. Given the several clusters produced by k-means clustering, some of them correspond to an edge patch with gradient change a