enrich the set of samples serving for the construction
of the initial possibility distributions.
Each pixel from the analyzed image, I, is assumed to belong to one, and only one, thematic class from an exhaustive set of M predefined and mutually exclusive classes Ω = {C_1, C_2, ..., C_M}.
Prior knowledge is assumed to be given as an initial set of learning areas extracted from the considered image and characterizing, from the expert's point of view, the M considered classes. Based on this prior knowledge, M class probability density functions are first estimated using the Kernel Density Estimation (KDE) approach (Epanechnikov, 1969) and then transformed into M initial possibility distributions encoding the "expressed" expert knowledge in a possibilistic framework. Applying the M class possibility distributions to the considered image I leads to M possibilistic maps PM_{I,C_m}, m = 1, ..., M, where PM_{I,C_m} encodes the possibility degree with which each image pixel belongs to the thematic class C_m. Based on a degree of confidence, the extraction of new learning samples is then conducted using possibilistic spatial contextual information, i.e. information drawn from the different possibilistic maps. This extraction process is repeated iteratively until no new sample can be added to the incremental learning process.
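As a sketch of the density-estimation step, each class pdf could be built with an Epanechnikov-kernel KDE over the grey levels of that class's learning pixels. The bandwidth, grid, and toy samples below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def epanechnikov_kde(samples, grid, h):
    """KDE with the Epanechnikov kernel K(u) = 0.75 * (1 - u^2) on |u| <= 1.
    samples: 1-D array of learning-area pixel values for one class.
    grid:    points at which the density is evaluated.
    h:       bandwidth (assumed user-chosen here)."""
    u = (grid[:, None] - samples[None, :]) / h
    k = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)
    return k.sum(axis=1) / (len(samples) * h)

# toy learning samples for one thematic class
rng = np.random.default_rng(0)
samples = rng.normal(100.0, 10.0, size=200)
grid = np.linspace(60.0, 140.0, 81)
pdf = epanechnikov_kde(samples, grid, h=8.0)
```

A common follow-up (one heuristic among several) is to rescale the estimated pdf by its maximum so that at least one grey level becomes fully possible, yielding a possibility distribution.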
The use of a possibilistic reasoning approach increases both the capacity and the flexibility to deal with uncertainty when the available knowledge is affected by different forms of imperfection: imprecision, incompleteness, ambiguity, etc. Notice that, even when the prior knowledge used is perfect, the additional knowledge extracted through any incremental process may be affected by such imperfections (Hüllermeier, 2003).
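The overall incremental scheme described above can be summarized as the following generic loop; `build_distributions` and `extract_confident` are hypothetical stand-ins for the KDE/possibilistic-map construction and the confidence-based sample extraction:

```python
def iterative_refinement(seeds, build_distributions, extract_confident, max_iter=100):
    """Rebuild the class possibility distributions from the current sample
    set, harvest new high-confidence samples from the resulting maps, and
    stop at the fixed point where no new sample is added."""
    samples = set(seeds)
    for _ in range(max_iter):
        maps = build_distributions(samples)        # stands in for KDE + possibilistic maps
        new_samples = extract_confident(maps) - samples
        if not new_samples:                        # learning set is stable: stop
            break
        samples |= new_samples
    return samples

# toy run: each pass admits the next "pixel", up to 5
grown = iterative_refinement(
    seeds={1},
    build_distributions=lambda s: s,
    extract_confident=lambda m: m | {min(max(m) + 1, 5)},
)
print(grown)  # {1, 2, 3, 4, 5}
```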
In the next section, a brief review of the basic concepts of possibility theory is presented. The proposed iterative approach is detailed in the third section. Sections 4 and 5 are devoted to the experimental results obtained when the proposed approach is applied to synthetic as well as real images.
2 POSSIBILITY THEORY
Possibility theory was first introduced by Zadeh in 1978 as an extension of fuzzy sets and fuzzy logic to express the intrinsic fuzziness of natural languages as well as uncertain information (Zadeh, 1978). When the available knowledge is ambiguous and encoded as a membership function of a fuzzy set defined over the decision set, possibility theory transforms each membership value into an interval bounded by the possibility and necessity measures (Dubois and Prade, 1980).
2.1 Possibility Distribution
Let us consider an exclusive and exhaustive universe of discourse Ω = {C_1, C_2, ..., C_M} formed by M elements C_m, m = 1, ..., M (e.g., thematic classes, hypotheses, elementary decisions, etc.). Exclusiveness means that one and only one element may occur at a time, whereas exhaustiveness refers to the fact that the occurring element belongs to Ω.
A key feature of possibility theory is the concept of a possibility distribution, denoted by π, assigning to each element C_m a value π(C_m) from the bounded set [0, 1] (or a set of graded values). This value π(C_m) encodes our state of knowledge, or belief, about the real world, representing the possibility degree for C_m to be the unique occurring element.
2.2 Possibility and Necessity Measures
Based on the possibility distribution concept, two dual set measures, the possibility measure Π and the necessity measure N, are derived. For every subset (or event) A ⊆ Ω, these two measures are defined as follows:

Π(A) = max_{C_m ∈ A} π(C_m)    (1)

N(A) = 1 − Π(A^c) = min_{C_m ∈ A^c} [1 − π(C_m)]    (2)

where A^c denotes the complement of A.
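As a concrete reading of Eqs. (1) and (2), consider a toy possibility distribution over three classes (the numbers are purely illustrative):

```python
# toy possibility distribution over Omega = {C1, C2, C3}
pi = {"C1": 1.0, "C2": 0.7, "C3": 0.2}

def possibility(event):
    """Eq. (1): Pi(A) = max of pi(Cm) over Cm in A."""
    return max(pi[c] for c in event)

def necessity(event):
    """Eq. (2): N(A) = 1 - Pi(A^c)."""
    complement = set(pi) - set(event)
    return 1.0 - (max(pi[c] for c in complement) if complement else 0.0)

print(possibility({"C1", "C2"}))  # 1.0
print(necessity({"C1", "C2"}))    # 1 - pi(C3) = 0.8
```

The duality reads naturally here: an event is necessary exactly to the degree that its complement is impossible.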
2.3 Possibility Distributions Estimation based on Pr-π Transformation
A crucial step in possibility theory applications is the determination of the possibility distributions. Two approaches are generally used to estimate a possibility distribution. The first approach consists in using standard forms predefined in the framework of fuzzy set theory for membership functions (i.e. triangular, Gaussian, trapezoidal, etc.) and tuning the form parameters using a manual or an automatic tuning method.
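A minimal sketch of this first approach, using a triangular form whose parameters (a, b, c) would then be tuned manually or automatically:

```python
def triangular(x, a, b, c):
    """Triangular possibility distribution: zero outside the support [a, c],
    rising linearly to 1 at the mode b (assumes a < b < c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

print(triangular(5.0, 0.0, 5.0, 10.0))  # 1.0 at the mode
print(triangular(2.5, 0.0, 5.0, 10.0))  # 0.5 halfway up the left slope
```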
The second estimation approach is based on the use of statistical data: an uncertainty function (e.g. histogram, probability distribution function, basic belief function, etc.) is first estimated and then transformed into a possibility distribution.
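As one concrete instance of this second approach, a classical probability-to-possibility transformation (in the spirit of Dubois and Prade, assuming no tied probabilities; the paper's exact Pr-π transformation may differ) sets π(i) = Σ_{j ≥ i} p(j) once the probabilities are sorted in decreasing order:

```python
import numpy as np

def prob_to_poss(p):
    """Transform a probability distribution into a possibility distribution:
    with p sorted so that p(1) >= ... >= p(M), pi(i) = sum of p(j) for j >= i.
    The most probable element becomes fully possible (pi = 1)."""
    p = np.asarray(p, dtype=float)
    order = np.argsort(-p)                  # ranks by decreasing probability
    tail = np.cumsum(p[order][::-1])[::-1]  # sum of p from each rank to the end
    pi = np.empty_like(p)
    pi[order] = tail
    return pi

print(prob_to_poss([0.5, 0.3, 0.2]))  # pi = [1.0, 0.5, 0.2]
```

By construction the result dominates the input (π(i) ≥ p(i) for every i), which is the usual consistency requirement between probability and possibility.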
Iterative Possibility Distributions Refining in Pixel-based Images Classification Framework