artificial neural networks, SOMs operate in two
modes: training and mapping. Training builds the
map using input examples. It is a competitive
process, also called vector quantization. Mapping
automatically classifies a new input vector.
The goal of learning in the self-organizing map is
to cause different parts of the network to respond
similarly to certain input patterns. This is partly
motivated by how visual, auditory or other sensory
information is handled in separate parts of the
cerebral cortex in the human brain (Kohonen, 1982;
2000).
The training utilizes competitive learning. When
a training example is fed to the network, its
Euclidean distance to all weight vectors is
computed. The neuron with weight vector most
similar to the input is called the best matching unit
(BMU). The weights of the BMU and neurons close
to it in the SOM lattice are adjusted towards the
input vector. The magnitude of the change decreases
with time and with distance from the BMU. The
update formula for a neuron with weight vector W_v(t) is:
W_v(t + 1) = W_v(t) + Θ(v, t) · α(t) · (D(t) − W_v(t))
where α(t) is a monotonically decreasing learning
coefficient and D(t) is the input vector. The
neighborhood function Θ(v, t) depends on the lattice
distance between the BMU and neuron v. In its
simplest form it is one for all neurons close enough
to the BMU and zero for the others, but a Gaussian
function is also a common choice. Regardless of the
functional form, the neighborhood function shrinks
with time. At the beginning, when the neighborhood
is broad, the self-organization takes place on the
global scale. When the neighborhood has shrunk to
just a couple of neurons, the weights converge to
local estimates.
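For concreteness, the following is a minimal NumPy sketch of one such update step. The array names, the Gaussian neighborhood and the linear decay schedules for α(t) and the neighborhood radius are illustrative assumptions, not the implementation used in this work:

```python
import numpy as np

def som_train_step(weights, grid, x, t, n_steps, alpha0=0.5, sigma0=3.0):
    """One update W_v(t+1) = W_v(t) + Theta(v,t)*alpha(t)*(x - W_v(t)).

    weights: (N, dim) codebook, one weight vector per neuron
    grid:    (N, 2) lattice coordinates of the neurons
    x:       (dim,) current input vector D(t)
    """
    # Monotonically decreasing learning coefficient alpha(t)
    alpha = alpha0 * (1.0 - t / n_steps)
    # Neighborhood radius also shrinks with time (assumed linear decay)
    sigma = sigma0 * (1.0 - t / n_steps) + 1e-3

    # Best matching unit: neuron whose weight vector is closest (Euclidean) to x
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))

    # Gaussian neighborhood Theta(v, t) based on lattice distance to the BMU
    lattice_dist = np.linalg.norm(grid - grid[bmu], axis=1)
    theta = np.exp(-(lattice_dist ** 2) / (2.0 * sigma ** 2))

    # Move every weight vector towards the input, scaled by theta and alpha
    return weights + (theta * alpha)[:, None] * (x - weights)
```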
This process is repeated for each input vector for
a (usually large) number of cycles λ. The network
winds up associating output nodes with groups or
patterns in the input data set. If these patterns can be
named, the names can be attached to the associated
nodes in the trained net.
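A corresponding training loop over the cycles λ might be sketched as follows; the lattice size, feature dimension and the data array are placeholders rather than the settings used in this study:

```python
import numpy as np

rows, cols, dim = 10, 10, 4                          # illustrative lattice / feature sizes
rng = np.random.default_rng(0)
data = rng.random((200, dim))                        # placeholder for the real input vectors
weights = rng.random((rows * cols, dim))             # randomly initialized codebook
grid = np.array([(i, j) for i in range(rows) for j in range(cols)], dtype=float)
n_cycles = 100                                       # the number of cycles (lambda)

for t in range(n_cycles):
    for x in data:
        weights = som_train_step(weights, grid, x, t, n_cycles)
```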
During mapping there is a single winning neuron:
the neuron whose weight vector lies closest to the
input vector. It is determined simply by computing
the Euclidean distance between the input vector and
each weight vector.
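Consistent with the sketch above, mapping reduces to a nearest-neighbour lookup over the trained codebook:

```python
import numpy as np

def som_map(weights, x):
    """Return the index of the single winning neuron for input vector x."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))
```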
While representing input data as vectors has been
emphasized here, any kind of object that can be
represented digitally, that has an appropriate
distance measure associated with it, and for which
the operations needed for training are possible can
be used to construct a self-organizing map. This
includes matrices, continuous functions or even
other self-organizing maps.
The obtained lithofacies classification is
presented in figure 7b.
4 MULTILAYER PERCEPTRON
The employed Neural Network is a standard layered
network with linear accumulation and a sigmoid
transfer function, known as the multi-layer
perceptron (MLP). Usually the network consists of
an input layer receiving the measurement vector x,
a hidden layer and an output layer of units
(neurons). In this configuration each unit of the
hidden layer realizes a hyperplane dividing the input
space into two half-spaces. By combining such
half-spaces the units of the output layer are able to
construct any polygonal partition of the input space.
For that reason it is theoretically possible to design
for each (consistent) fixed sample a correct Neural
Network classifier by constructing a sufficiently fine
partition of the input space. This may necessitate a
large number of neurons in the hidden layer. The
model parameters consist of the weights connecting
the units of successive layers. In the training phase
the sample is used to evaluate an error measure, and
a gradient descent algorithm can be employed to
minimize this net error. The problem of getting stuck
in local minima is known as the training problem.
The structure consists of an input layer, one hidden
layer and an output layer (figure 5).
Figure 5: Architecture of MLP network.
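As an illustration only, the forward pass of such a network can be sketched as below; the parameter names W1, b1, W2, b2 are assumptions, and the gradient-descent training that determines them from the sample is omitted:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, W1, b1, W2, b2):
    """Input layer -> one hidden layer -> output layer, sigmoid transfer."""
    h = sigmoid(W1 @ x + b1)   # each hidden unit realizes one hyperplane
    y = sigmoid(W2 @ h + b2)   # output units combine the resulting half-spaces
    return y
```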
Obtained lithological classification by the MLP