A NEURAL NETWORK FRAMEWORK FOR IMPLEMENTING
THE BAYESIAN LEARNING
Luminita State
University of Pitesti, Caderea Bastiliei #45, Bucharest #1, Romania
Catalina Cocianu, Viorica Stefanescu
Academy of Economic Studies, Calea Dorobantilor 15-17, Bucharest #1, Romania
Vlamos Panayiotis
Hellenic Open University, Greece
Keywords: Neural Networks, Competitive Learning, Hidden Markov Models, Pattern Recognition, Bayesian Learning,
Weighting Processes, Markov Chains.
Abstract: The research reported in this paper aims at the development of a suitable neural architecture for implementing the Bayesian procedure for solving pattern recognition problems. The proposed neural system is based on an inhibitive competition installed among the hidden neurons of the computation layer. The local memories of the hidden neurons are computed adaptively according to an estimation model of the parameters of the Bayesian classifier. The paper also reports a series of qualitative attempts at analyzing the behavior of a new learning procedure for the parameters of an HMM, obtained by modeling different types of stochastic dependencies on the space of states corresponding to the underlying finite automaton. The approach aims at the development of new methods for processing image and speech signals in solving pattern recognition problems. Basically, the attempts are stated in terms of weighting processes and deterministic/non-deterministic Bayesian procedures. The aims were mainly to derive asymptotic conclusions concerning the performance of the proposed estimation techniques in approximating the ideal Bayesian procedure. The proposed methodology adopts the standard assumptions on the conditional independence properties of the involved stochastic processes.
1 HMM IN BAYESIAN LEARNING
Stochastic models represent a very promising approach to temporal pattern recognition. An important class of stochastic models is based on Markovian state transitions, two typical examples being the Markov model (MM) and the Hidden Markov Model (HMM).
The latent structure of the observable phenomenon is modeled in terms of a finite automaton Q, the observable variable being thought of as the output produced by the states of Q. Both evolutions, in the space of non-observable as well as in the space of observable variables, are assumed to be governed by probabilistic laws.
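To make the setting concrete, the following sketch (not part of the original paper) simulates such a model: the hidden state sequence evolves as a two-state Markov chain, and each state produces an observable output through a state-specific Gaussian law. The automaton, the output densities, and all numerical values are illustrative assumptions only.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-state HMM (all values are assumptions, not from the paper).
# A[i, j]: probability of moving from hidden state i to hidden state j.
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
p0 = np.array([0.5, 0.5])          # initial distribution on the states
means = np.array([-1.0, 2.0])      # Gaussian output law assumed for each state
stds = np.array([1.0, 0.5])

def sample(T):
    """Return a hidden trajectory and the observable outputs it produces."""
    states, outputs = [], []
    y = rng.choice(2, p=p0)
    for _ in range(T):
        states.append(y)
        outputs.append(rng.normal(means[y], stds[y]))  # output of state y
        y = rng.choice(2, p=A[y])                      # Markovian transition
    return np.array(states), np.array(outputs)

hidden, observed = sample(100)

Only the output sequence would be available to a learner; the hidden trajectory is what the latent automaton Q formalizes.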
In the sequel, we denote by $(Y_n)_{n \ge 0}$ the stochastic process describing the hidden evolution and by $(X_n)_{n \ge 0}$ the stochastic process corresponding to the observable evolution.
Let $Q$ be the set of states of the underlying finite automaton, $|Q| = m$. We denote by $\pi_n$ the probability distribution on $Q$ at the moment $n$. Let $(\Omega, \mathcal{K}, P)$ be a probability space and $(\aleph, \mathcal{C}, \mu)$ a measure space, where $\mu$ is a $\sigma$-finite measure. The output of each state $q \in Q$ is represented by the random element $X_q : \Omega \to \aleph$ of density function $f_q(\cdot)$. Let $\pi$ be the a priori probability distribution on $Q$. We assume that $\forall q \in Q$, $\pi(q) \neq 0$.
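Given these ingredients, the ideal Bayesian procedure assigns an observation $x$ to the state $q$ of maximal posterior probability, the posterior being proportional to $\pi(q) f_q(x)$. A minimal sketch of this rule follows, assuming Gaussian densities $f_q$ and illustrative parameter values:

import numpy as np
from scipy.stats import norm

# A priori distribution pi on Q (pi(q) != 0 for every q) and assumed
# Gaussian output densities f_q, matching the simulation sketch above.
prior = np.array([0.5, 0.5])
means = np.array([-1.0, 2.0])
stds = np.array([1.0, 0.5])

def posterior(x):
    """P(q | x), proportional to pi(q) * f_q(x) and normalized over Q."""
    joint = prior * norm.pdf(x, means, stds)
    return joint / joint.sum()

def bayes_classify(x):
    """Ideal Bayesian procedure: choose the state of maximum posterior."""
    return int(np.argmax(posterior(x)))

print(posterior(0.3))       # posterior distribution for the observation 0.3
print(bayes_classify(0.3))

The estimation techniques discussed in the paper approximate this rule when the parameters of $\pi$ and $f_q$ are not known and must be learned.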