This paper discusses the Bayes-based TE method. Bayesian inference is an approach to statistics in which all forms of uncertainty are expressed in terms of probability. It has a large body of applications and is widely regarded as the most classic, rigorous, and popular method for modeling uncertainty (Jain et al., 2000). Nevertheless, the Bayesian method has long been criticized for its dependence on prior probabilities and for the difficulty of defining conditional probabilities. From the viewpoint of applications such as target recognition and TE, it is usually very inconvenient to build and maintain a knowledge database of inference rules in the form of conditional probabilities. Practitioners complain that whenever a new inference rule is added to the knowledge database, all previously defined inference rules have to be redefined to ensure that the corresponding conditional probabilities still sum to one. In this paper we try to eliminate this problem by reinterpreting the Bayesian theorem so that it can handle randomness and fuzziness simultaneously, which leads to an open structure of the knowledge database for uncertainty inference.
The rest of this paper is organized as follows. Section 2 presents two interpretations of conditional probability, which are suitable for describing randomness and fuzziness, respectively. Section 3 revisits the well-known Bayesian theorem by applying these two interpretations of conditional probability and derives two forms of the Bayesian theorem: the usual one and the fuzzy Bayesian theorem. Section 4 proposes a probability-possibility conversion method through the bridge of the Bayesian theorem, but with specific interpretations of conditional probability. Section 5 introduces the application of the fuzzy Bayesian theorem to the problem of TE. Section 6 concludes the paper.
2 TWO INTERPRETATIONS OF CONDITIONAL PROBABILITY
The Bayesian theorem is a well-known mechanism for relating two conditional probabilities. This section gives two interpretations of conditional probability, based on which the Bayesian theorem is reinterpreted in the next section. Probability originally arises from randomness, while possibility arises from fuzziness. Randomness is the uncertainty about whether an event occurs, or about which outcome an event variable takes. Sometimes the event itself is certain, and one is uncertain about it only because of a lack of information. Fuzziness is the uncertainty about whether a concept holds given its attribute values.
The chief similarity between probability and possibility is that both describe uncertainty with numbers in the unit interval [0, 1]. The key distinction concerns how they simultaneously treat an outcome of an event variable and its alternatives. Probability demands that the probabilities of all possible outcomes of an event variable sum to one; possibility has no such additivity constraint.
Mathematically, a possibility on the finite set A is a mapping $\pi_A$ to [0, 1] such that

$\pi_A(a_i) \geq 0$,  (1)

$\max_i \pi_A(a_i) = 1, \quad i = 1, 2, \ldots, n$,  (2)

where A is called the event variable, and $A = a_i$ is one of the n possible outcomes of the event variable A (in short, an event). Without loss of generality, this work only considers the case of discrete events to simplify the discussion. As we can see, possibility is similar to probability, but it relies on an axiom that only involves the operation of maximality, as shown in (2).
In contrast, probability is additive, requiring that the probabilities of all possible outcomes of an event variable sum to one. Though probability originates from randomness or frequency, it has been widely used in various applications for modeling different kinds of uncertainty that satisfy the additivity constraint of probability. Likewise, possibility has been extensively used for formulating any uncertainty that satisfies (1) and (2), not only fuzziness.
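As a sketch, the contrast between the additivity axiom of probability and the maximality axiom (2) of possibility can be checked numerically. The function names and example values below are illustrative, not from the paper.

```python
# Validity checks for a discrete probability distribution versus a
# possibility distribution over the same finite outcome set.
# Function names and numeric values are illustrative.

def is_probability(values, tol=1e-9):
    """Probability: non-negative values that sum to one (additivity)."""
    return all(v >= 0 for v in values) and abs(sum(values) - 1.0) < tol

def is_possibility(values, tol=1e-9):
    """Possibility: non-negative values whose maximum is one, as in (2);
    no additivity constraint is imposed."""
    return all(v >= 0 for v in values) and abs(max(values) - 1.0) < tol

prob = [0.5, 0.3, 0.2]   # sums to one, but its maximum is below one
poss = [1.0, 0.7, 0.2]   # maximum is one, but the sum exceeds one

print(is_probability(prob), is_possibility(prob))  # True False
print(is_probability(poss), is_possibility(poss))  # False True
```

The same list of numbers can thus be a valid distribution under one calculus and invalid under the other, which is why the two axioms lead to different knowledge-database maintenance behavior.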
Conditional probability $p(A = a_i \mid B = b_j)$ is the occurrence probability of the conditional event $A = a_i \mid B = b_j$, which equals the probability of $A = a_i$ given $B = b_j$. In order to completely formulate the randomness of the conditional event $A = a_i \mid B = b_j$, we need to use the conditional probability mass function (CPMF), $\{p(A = a_i \mid B = b_j),\ i = 1, 2, \ldots, m\}$ (in short, $p(A \mid B = b_j)$). Here the event variable B is fixed at $B = b_j$, and m is the number of possible outcomes $a_i$ of the event variable A. Now we see that the CPMF provides a complete description of the stochastics of the event variable A given the conditioning event $B = b_j$. According to the properties of probability, the sum of $p(A = a_i \mid B = b_j)$ across the $a_i$ is one. The randomness formulated by a CPMF is here called probabilistic randomness.
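A minimal sketch of a CPMF stored as a table, with illustrative outcome labels $b_j$ and $a_i$ (not from the paper): each row fixes $B = b_j$, and the check below verifies the normalization property that every row sums to one. It also shows how adding a new outcome breaks that normalization, which is the maintenance problem raised in the introduction.

```python
# A CPMF p(A | B = b_j) stored as a nested table; outcome labels are
# illustrative. Each row fixes B = b_j and lists p(A = a_i | B = b_j).

cpmf = {
    "b1": {"a1": 0.6, "a2": 0.3, "a3": 0.1},
    "b2": {"a1": 0.2, "a2": 0.5, "a3": 0.3},
}

def check_cpmf(table, tol=1e-9):
    """For every fixed conditioning event B = b_j, the probabilities of
    all outcomes a_i must sum to one."""
    return all(abs(sum(row.values()) - 1.0) < tol for row in table.values())

print(check_cpmf(cpmf))  # True

# Adding a new outcome to one row breaks its normalization, which is why
# existing entries must be rescaled whenever the rule base grows:
cpmf["b1"]["a4"] = 0.2
print(check_cpmf(cpmf))  # False
```

This rescaling burden on every update is exactly what the possibility-based reformulation in the later sections is intended to avoid.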
IJCCI 2013 - International Joint Conference on Computational Intelligence