Results. Across 50 trials, the 36-node HEDA took an average of 9932 pattern evaluations before it terminated, having found 10 perfect-scoring patterns. A record of the samples showed that none of the randomly generated patterns used during training gained a perfect score, so the network was trained only on less-than-perfect patterns. The average trained model's capacity was found to be 132, which gives an average of 75 objective function (OF) evaluations per pattern found. Figure 5 shows two examples of random starting patterns and their associated attractor states.
Figure 5: Two examples of fixed-point attractors of the HEDA trained on the second-order concept of symmetry. The left-hand column shows random starting points and the right-hand column shows the associated point attractor states.
7 CONCLUSIONS
It is possible to adapt both the Hebbian and Storkey learning rules for Hopfield networks to allow them to learn the attractor states corresponding to multiple local maxima from random samples of an objective function. We have shown experimentally that the capacity of these networks is at least equal to that of a Hopfield network trained directly on the attractor points.
Networks trained in this way are able to find a set of attractors by undirected sampling, that is, with no evolution of solutions, using a number of samples that is a very small fraction of the size of the search space.
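Once trained, the attractors can be recovered exactly as described above: repeatedly draw a random starting pattern and run the standard asynchronous Hopfield update until a fixed point is reached. The sketch below continues from the hypothetical trainer above; the function names and sweep limit are our own.

```python
import numpy as np

def recall_attractor(W, x0, max_sweeps=100, seed=None):
    """Run asynchronous Hopfield updates from x0 until a fixed point."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(x)):       # asynchronous update order
            s = 1.0 if W[i] @ x >= 0 else -1.0  # sign of the local field
            if s != x[i]:
                x[i], changed = s, True
        if not changed:                         # no unit flipped: fixed point
            return x
    return x                                    # sweep budget exhausted

def find_attractors(W, n_starts, seed=None):
    """Collect the distinct fixed points reached from random starts."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    return {tuple(recall_attractor(W, rng.choice([-1.0, 1.0], size=n)))
            for _ in range(n_starts)}
```

With a weight matrix W from the previous sketch, find_attractors(W, 1000) returns the set of distinct attractor states reached; each attractor is intended to correspond to a local optimum of the objective function.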
In a second-order network, as the network size n varies, we have seen that the search space grows exponentially with n, the number of local optima that can be stored grows linearly with n, and the time to find all local optima grows quadratically with n.
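Written symbolically, where $|S|$ is the size of the search space, $m(n)$ the number of storable local optima, and $T(n)$ the time to find them all (our notation, summarizing the statements above):

```latex
|S| = 2^{n}, \qquad m(n) = \Theta(n), \qquad T(n) = \Theta(n^{2}).
```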
Future work will address a number of areas, including higher-order networks and networks of variable order; an evolutionary approach to training the networks when the number of local optima exceeds the capacity of the network; the effects of spurious attractors; the use of the energy function of equation 9 as a proxy for objective function evaluations; and the smoothing of noisy landscapes. Hopfield networks have been extensively studied, so there is a wealth of research on which to draw in future work.
ACKNOWLEDGEMENTS
We thank David Cairns and Leslie Smith for their helpful comments on earlier versions of this paper.
REFERENCES
De Jong, K. (2006). Evolutionary Computation: A Unified Approach. MIT Press, Cambridge, MA.

Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley Professional, 1st edition.

Hertz, J., Krogh, A., and Palmer, R. G. (1991). Introduction to the Theory of Neural Computation. Addison-Wesley, New York.

Hopfield, J. J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences USA, 79(8):2554–2558.

Hopfield, J. J. and Tank, D. W. (1985). Neural computation of decisions in optimization problems. Biological Cybernetics, 52:141–152.

Jin, Y. (2005). A comprehensive survey of fitness approximation in evolutionary computation. Soft Computing - A Fusion of Foundations, Methodologies and Applications, 9:3–12.

Kennedy, J. (2001). Swarm Intelligence. Morgan Kaufmann Publishers, San Francisco.

Kubota, T. (2007). A higher order associative memory with McCulloch-Pitts neurons and plastic synapses. In International Joint Conference on Neural Networks (IJCNN 2007), pages 1982–1989.

McEliece, R., Posner, E., Rodemich, E., and Venkatesh, S. (1987). The capacity of the Hopfield associative memory. IEEE Transactions on Information Theory, 33(4):461–482.

Mühlenbein, H. and Paaß, G. (1996). From recombination of genes to the estimation of distributions I. Binary parameters. In Voigt, H.-M., Ebeling, W., Rechenberg, I., and Schwefel, H.-P., editors, Parallel Problem Solving from Nature PPSN IV, volume 1141 of Lecture Notes in Computer Science, pages 178–187. Springer Berlin / Heidelberg.

Shakya, S., McCall, J., Brownlee, A., and Owusu, G. (2012). DEUM - distribution estimation using Markov networks. In Shakya, S. and Santana, R., editors, Markov Networks in Evolutionary Computation, volume 14 of Adaptation, Learning, and Optimization, pages 55–71. Springer Berlin Heidelberg.

Storkey, A. J. and Valabregue, R. (1999). The basins of attraction of a new Hopfield learning rule. Neural Networks, 12(6):869–876.