tion that it is incorrect because the A-SOM consists
of 225 neurons and is not a continuous surface but a
discretized representation.
In the middle of the upper row of Fig. 4 we can see that all centers of activation for the generalization samples are correctly located in SOM1 except samples 1 and 6, which lie on the border of the correct Voronoi cell (but this should probably not be considered an indication of incorrectness, for the same reason as mentioned above), and sample 2, which is located close to the correct Voronoi cell.
Rightmost in the upper row of Fig. 4 we can see that all centers of activation for the generalization samples are correctly located in SOM2 except sample 2, which is located close to the correct Voronoi cell.
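As an illustrative sketch, not taken from the original implementation, the check behind these statements can be thought of as follows: the center of activation is the position of the most active neuron in the 225-neuron (here assumed 15 x 15) grid, and it counts as correct if it lies nearer to the correct class center than to any other, i.e. if it falls in the correct Voronoi cell. The function names, the assumed grid shape of the ancillary SOMs and the nearest-center test are our own assumptions.

import numpy as np

def center_of_activation(activity):
    # Grid position (row, col) of the most active neuron in a map given
    # as a 225-element activity vector reshaped to a 15 x 15 grid.
    grid = np.asarray(activity, dtype=float).reshape(15, 15)
    return np.unravel_index(np.argmax(grid), grid.shape)

def in_correct_voronoi_cell(activity, class_centers, correct_class):
    # True if the center of activation is nearest to the center of the
    # correct class, i.e. lies in that class's Voronoi cell.
    # class_centers: array of shape (n_classes, 2) in grid coordinates
    # (hypothetical helper, for illustration only).
    pos = np.array(center_of_activation(activity), dtype=float)
    dists = np.linalg.norm(np.asarray(class_centers, dtype=float) - pos, axis=1)
    return int(np.argmin(dists)) == correct_class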
Leftmost in the middle row of Fig. 4 we can see that the centers of activation for all the generalization samples except sample 8 (which should probably not be considered an indication of incorrectness, for the same reason as mentioned above) are within the correct Voronoi cell in the A-SOM when it receives main input as well as the activity of SOM1 as input.
In the middle of the middle row of Fig. 4 we can see that the centers of activation for all the generalization samples except sample 8 (which should probably not be considered an indication of incorrectness, for the same reason as mentioned above) are within the correct Voronoi cell in the A-SOM when it receives main input as well as the activity of SOM2 as input.
Rightmost in the middle row of Fig. 4 we can see that the centers of activation for all the generalization samples except sample 8 (which should probably not be considered an indication of incorrectness, for the same reason as mentioned above) are within the correct Voronoi cell in the A-SOM when it receives main input as well as the activities of both SOM1 and SOM2 as input.
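The three cases above differ only in which ancillary activities the A-SOM receives together with its main input. A minimal sketch of how the total activity could be formed, assuming for illustration that it is simply the average of the main activity and the available ancillary activities (the exact combination rule of the A-SOM algorithm is not reproduced here), is:

import numpy as np

def total_activity(main_activity, ancillary_activities):
    # Combine the A-SOM's main activity with the activities received
    # from any number of ancillary SOMs by elementwise averaging
    # (an assumed combination rule, for illustration only).
    # main_activity may be None when only ancillary input is available.
    parts = []
    if main_activity is not None:
        parts.append(np.asarray(main_activity, dtype=float))
    parts.extend(np.asarray(a, dtype=float) for a in ancillary_activities)
    return np.mean(parts, axis=0)

# e.g. main input together with the activities of SOM1 and SOM2
# (a_main, a_som1, a_som2 are hypothetical activity vectors):
# total = total_activity(a_main, [a_som1, a_som2])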
Leftmost in the lower row of Fig. 4 we can see that the centers of activation for all the generalization samples except samples 2 and 10, i.e. 80% of them, are within the correct Voronoi cell in the A-SOM when it receives the activity of SOM1 as its only input.
In the middle of the lower row of Fig. 4 we can see that the centers of activation for all the generalization samples except sample 2, i.e. 90% of them, are within the correct Voronoi cell in the A-SOM when it receives the activity of SOM2 as its only input.
Rightmost in the lower row of Fig. 4 we can see that the centers of activation for all the generalization samples except samples 2 and 10, i.e. 80% of them, are within the correct Voronoi cell in the A-SOM when it receives the activities of SOM1 and SOM2 as its only input.
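The percentages quoted above simply express the fraction of the ten generalization samples whose center of activation ends up in the correct Voronoi cell: two misses give 80% and one miss gives 90%. A sketch of this count, reusing the hypothetical helper in_correct_voronoi_cell from above, could look like this:

def generalization_accuracy(activities, correct_classes, class_centers):
    # Fraction of generalization samples whose center of activation
    # falls inside the Voronoi cell of the correct class.
    hits = sum(
        in_correct_voronoi_cell(act, class_centers, cls)
        for act, cls in zip(activities, correct_classes)
    )
    return hits / len(activities)

# With ten generalization samples and two misses (samples 2 and 10)
# this yields 8 / 10 = 80%; with a single miss it yields 90%.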
In Fig. 5 we can see a graphical representation of the activity in the two SOMs, as well as the total, main and ancillary activities of the A-SOM, while the system receives a sample from the generalization set. The lighter an area is in this depiction, the higher the activity is in that area.
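A depiction of this kind, where lighter means more active, could be produced along the lines of the sketch below; the choice of matplotlib and its 'gray' colormap is ours and not prescribed by the original work.

import numpy as np
import matplotlib.pyplot as plt

def plot_activity(activity, title):
    # Show an activity map (here assumed to be 15 x 15) as a grayscale
    # image in which lighter areas correspond to higher activity
    # ('gray' maps low values to black and high values to white).
    grid = np.asarray(activity, dtype=float).reshape(15, 15)
    plt.imshow(grid, cmap="gray", interpolation="nearest")
    plt.title(title)
    plt.show()

# e.g. plot_activity(a_som1, "SOM1 activity for one generalization sample")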
4 DISCUSSION
We have presented and experimented with a novel
variant of the Self-Organizing Map (SOM) called the
Associative Self-Organizing Map (A-SOM), which
develops a representation of its input space but also
learns to associate its activity with the activities of
an arbitrary number of ancillary SOMs. In our ex-
periments we connected an A-SOM to two ancillary
SOMs and all these were trained and tested with a
set of random samples of points from a subset of the
plane. In addition, we tested the generalization ability of the system with another set of random points generated from the same subset of the plane. The algorithm was generalized to enable association with an arbitrary number of ancillary SOMs. Moreover, this study has also tested the ability of an A-SOM based system to generalize its learning to new samples. The ability of the A-SOM proved to be good, with 100% accuracy on the training set and about 80-90% accuracy in the generalization tests, depending on which constellation of inputs was provided to the system. It was also observed that the generalization in the ordinary SOMs was not perfect. Had it been, the generalization ability of the A-SOM would probably have been even better. This is probably a matter of optimizing the parameter settings.
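To make the notion of learning to associate concrete, the sketch below trains a weight matrix with a simple delta rule so that the activity of one ancillary SOM comes to predict the A-SOM's own activity. This is only one plausible formulation, with our own class and parameter names; the learning rule actually used in the A-SOM is the one defined in the original algorithm, not this sketch.

import numpy as np

class AncillaryAssociation:
    # Associates the activity of one ancillary SOM with the activity of
    # an A-SOM via a weight matrix trained with a delta rule
    # (an illustrative sketch, not the published A-SOM algorithm).

    def __init__(self, n_asom_neurons, n_ancillary_neurons, learning_rate=0.1):
        self.w = np.zeros((n_asom_neurons, n_ancillary_neurons))
        self.eta = learning_rate

    def predict(self, ancillary_activity):
        # Ancillary contribution to the A-SOM's activity.
        return self.w @ np.asarray(ancillary_activity, dtype=float)

    def train(self, asom_activity, ancillary_activity):
        # Delta-rule update: move the prediction towards the A-SOM's
        # actual activity for this sample.
        a = np.asarray(ancillary_activity, dtype=float)
        error = np.asarray(asom_activity, dtype=float) - self.predict(a)
        self.w += self.eta * np.outer(error, a)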
In this experiment we connected an A-SOM with two SOMs, but we see no reason why it should not be possible to connect an arbitrary number of A-SOMs to each other. Johnsson and Balkenius successfully connected two A-SOMs with each other in the context of a hardness/texture sensing system (Johnsson and Balkenius, 2008). In the present study we used the same training set and the same generalization set as input for the A-SOM and for each of the two SOMs. This was done for simplicity, and in particular because it made it easier to present the results and to relate the organizations of the SOMs and the A-SOM to each other.
It is interesting to speculate, and later test, whether
there are any restrictions on the sets that are used as
input to the different SOMs and A-SOMs in this kind
of system. A reasonable guess would be that learning to associate the activity arising from the training sets imposes no restrictions on the training sets, but when it comes to generalization there would probably be one