is found in one and only one of N possible states and declares a certain value at the output. Furthermore, an HMM has two associated stochastic processes: a hidden one, associated with the probability of transition between states (not directly observable); and an observable one, associated with the probability of obtaining each of the possible output values, which depends on the state in which the system is found (Rabiner and Juang, 1993). A Discrete HMM (DHMM) has been used, defined in (Rabiner and Juang, 1993) and (Rabiner, 1989) by the following elements (a minimal parameter sketch is given after the list):
• N is the number of states,
• M is the number of different observations,
• A(N,N) is the matrix of transition probabilities from one state to another,
• π(N,1) is the vector of probabilities that the system begins in each state;
• and B(N,M) is the matrix of probabilities of producing each of the possible observations in each of the possible states.
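As an illustration of this parameterisation, the following minimal Python sketch builds the three parameter sets of a DHMM. The sizes N = 3 and M = 4 and the random initialisation are hypothetical, for illustration only; the models in this work use far more states and symbols.

```python
import numpy as np

# Hypothetical sizes, for illustration only.
N, M = 3, 4
rng = np.random.default_rng(0)

# A(N,N): state-transition probability matrix (each row sums to 1).
A = rng.random((N, N))
A /= A.sum(axis=1, keepdims=True)

# B(N,M): probability of producing each of the M observation symbols
# in each of the N states (each row sums to 1).
B = rng.random((N, M))
B /= B.sum(axis=1, keepdims=True)

# pi(N,1): vector of initial state probabilities.
pi = rng.random(N)
pi /= pi.sum()
```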
We have worked with a "left-to-right" or Bakis HMM, which is particularly appropriate for sequences. These "left-to-right" HMMs turn out to be especially appropriate for signatures because the transition through the states proceeds in a single direction, always advancing through its states. This gives this type of model the ability to preserve a certain order with respect to the observations produced, even when the temporal distance between the most representative changes varies. Finally, models with 40 to 80 states and 32 symbols per state have been used.
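The left-to-right constraint amounts to a transition matrix whose probability mass lies on and above the diagonal. A minimal sketch, assuming the simplest Bakis variant in which each state may only stay put or advance one step (the self-loop probability of 0.5 is a hypothetical choice):

```python
import numpy as np

def bakis_transition_matrix(n_states: int, self_loop: float = 0.5) -> np.ndarray:
    """Left-to-right (Bakis) transition matrix: a state may only stay or advance."""
    A = np.zeros((n_states, n_states))
    for i in range(n_states - 1):
        A[i, i] = self_loop            # remain in the current state
        A[i, i + 1] = 1.0 - self_loop  # advance to the next state
    A[-1, -1] = 1.0                    # the final state is absorbing
    return A

print(bakis_transition_matrix(5))
```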
In the DHMM approach, the conventional technique for quantising the features is applied. For each input vector, the quantiser decides which is the most suitable value, using the information from the previous input vector. To avoid taking a soft decision, a fixed (hard) decision on the quantised value is made. In order to expand the set of possible values that the quantiser can produce, multi-labelling is used, so that the number of possible quantised values is controlled by varying this parameter. The number of labels in the DHMM is related to the number of symbols per state.
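A minimal sketch of such a multi-label quantiser. The codebook, the inverse-distance weighting, and the function name are assumptions made for illustration; the text only states that multi-labelling expands the set of values the quantiser can produce:

```python
import numpy as np

def multi_label_quantize(x, codebook, n_labels):
    """Return the indices of the n_labels nearest codewords and their weights.

    The inverse-distance weights (normalised to sum to 1) are an
    illustrative assumption, not necessarily the scheme used here.
    """
    d = np.linalg.norm(codebook - x, axis=1)  # distance to every codeword
    idx = np.argsort(d)[:n_labels]            # the n_labels closest labels
    w = 1.0 / (d[idx] + 1e-9)                 # inverse-distance weighting
    return idx, w / w.sum()

codebook = np.random.default_rng(1).random((32, 2))  # hypothetical 32-word codebook
labels, weights = multi_label_quantize(np.array([0.4, 0.6]), codebook, n_labels=4)
```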
The DHMM algorithms must be generalised to fit the multi-label output \( \{v_k\}_{k=1,\dots,C} \), generating the output vector \( \{w(x_t, v_k)\}_{k=1,\dots,C} \). Therefore, for a given state j of the DHMM, the probability that a vector \( x_t \) is observed at instant t can be written as
\[ b_j(x_t) = \sum_{k=1}^{C} w(x_t, v_k)\, b_j(k) \tag{3} \]
where \( b_j(k) \) is the discrete output probability associated with the value \( v_k \) and the state j, C being the size of the codebook of vector values.
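Evaluating equation (3) is then a weighted sum of the discrete output probabilities over the active labels. A minimal sketch, reusing the (hypothetical) labels and weights produced by the quantiser above:

```python
import numpy as np

def observation_probability(weights, labels, b_j):
    """Equation (3): b_j(x_t) = sum_k w(x_t, v_k) * b_j(k), over the active labels."""
    return float(np.dot(weights, b_j[labels]))

b_j = np.full(32, 1.0 / 32)        # hypothetical discrete output distribution of state j
labels = np.array([3, 7, 12, 20])  # labels returned by the multi-label quantiser
weights = np.array([0.4, 0.3, 0.2, 0.1])
print(observation_probability(weights, labels, b_j))  # 1/32 for this uniform b_j
```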
5.2 Neural Networks
In recent years, several classification systems have been implemented using classifying techniques such as Neural Networks, which are widely used and well known in pattern recognition applications.
The single-layer Perceptron establishes its correspondence with a rule of discrimination between classes based on the linear discriminant. However, it is possible to define discriminations for classes that are not linearly separable by using multilayer Perceptrons, which are feed-forward networks with one or more layers of nodes between the input layer and the output layer. These additional layers contain hidden neurons or nodes that are not directly connected to both the input and the output layers (Bishop, 1995).
A three-layer multilayer perceptron neural network (NN-MLP), with two layers of hidden neurons, has been implemented; it is shown in Figure 2. Each neuron is associated with weights and biases, which are set for each connection of the network and are obtained from training, so that their values become suitable for the classification task between the different classes. We have used a back-propagation algorithm to train the classification system.
Figure 2: Multilayer Perceptron.
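A minimal sketch of such a classifier, using scikit-learn's MLPClassifier with two hidden layers trained by back-propagation. The layer sizes, activation, and synthetic data are hypothetical choices for illustration, not the configuration used in this work:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
X = rng.random((200, 16))            # hypothetical feature vectors
y = (X.sum(axis=1) > 8).astype(int)  # hypothetical two-class labels

# Two hidden layers; weights and biases are learned by gradient-based
# back-propagation of the classification error.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), activation="logistic",
                    solver="sgd", max_iter=2000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```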