Figure 6 presents the speedup factor as a function of the number of tasks. We can observe that the speedup factor is equal to or higher than the number of tasks t. To reach the stable state, all neurons are evaluated several times; one evaluation of all neurons is called an iteration. The speedup factor can be higher than the number of tasks because the parallelization can decrease the number of iterations, mainly thanks to an appropriate neuron evaluation order. In the sequential mode, the diagonal evaluation order is arguably the fastest order to reach a stable state: at two consecutive times, the evaluated neurons correspond to different tasks and different ticks. In the parallel mode, a packet contains a diagonal of neurons, so the neurons are implicitly evaluated diagonal by diagonal.
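To make this order concrete, the following Python sketch builds the anti-diagonal packets for an n_tasks x n_ticks grid of neurons and evaluates each packet concurrently. It is a minimal sketch under illustrative assumptions: the indexing of neurons by (task, tick) pairs and the names make_packets and evaluate_neuron are hypothetical, not the implementation measured in Figure 6.

# Minimal sketch of diagonal packet construction (illustrative names;
# network.evaluate_neuron is a hypothetical callback, not a real API).
from concurrent.futures import ThreadPoolExecutor

def make_packets(n_tasks, n_ticks):
    """Group neurons into anti-diagonal packets: within a packet, all
    neurons differ in both task and tick, so none share a constraint."""
    packets = []
    for d in range(n_tasks + n_ticks - 1):
        packets.append([(task, d - task)
                        for task in range(max(0, d - n_ticks + 1),
                                          min(n_tasks, d + 1))])
    return packets

def evaluate_once(network, n_tasks, n_ticks, workers=4):
    """One iteration: packets are processed one after another, and the
    independent neurons inside a packet are updated concurrently."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for packet in make_packets(n_tasks, n_ticks):
            list(pool.map(lambda tt: network.evaluate_neuron(*tt), packet))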
[Plot: speedup versus task number, showing the measured speedup and the identity line.]
Figure 6: Speedup factor versus number of tasks.
The experimental results show a significant gain obtained with our parallelization method. Moreover, the number of iterations needed to reach a stable state is, in some cases, reduced by this method.
The methods presented in Section 2 improve the sequential evaluation as well as the parallel evaluation; therefore, our speedup factor should not be affected. Combined with our approach, they can further reduce the evaluation time of the HNN without affecting the convergence property.
7 CONCLUSIONS
We presented a parallelization method to improve the convergence time of HNNs used to solve optimization problems. This approach has been applied to the scheduling problem, which can easily be stated as an optimization problem. The HNN associated with this problem is built by adding several constraints (such as k-out-of-N rules) on some sets of neurons. We demonstrated that the network convergence is maintained when a subset of disconnected neurons is evaluated in parallel: when two neurons do not belong to the same constraint, they can be evaluated in parallel. Because the construction of a HNN by the addition of several constraint rules is very common, we expect that this method can be used for a large number of optimization problems modelled by HNNs.
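As an illustration of this independence criterion, the Python sketch below partitions the neurons into groups that never share a constraint, by greedily coloring the conflict graph induced by the constraint sets; the representation of constraints as sets of neuron identifiers, and the greedy coloring itself, are assumptions made for this example rather than our actual data structures.

# Illustrative sketch: two neurons may be evaluated in parallel iff
# they never appear in the same constraint. Each returned group is a
# set of mutually disconnected neurons.
from itertools import combinations
from collections import defaultdict

def parallel_groups(n_neurons, constraints):
    """constraints: iterable of sets of neuron ids (e.g. the neurons
    covered by one k-out-of-N rule)."""
    conflicts = defaultdict(set)
    for c in constraints:
        for u, v in combinations(c, 2):
            conflicts[u].add(v)
            conflicts[v].add(u)
    color = {}
    for n in range(n_neurons):
        used = {color[m] for m in conflicts[n] if m in color}
        color[n] = next(k for k in range(n_neurons) if k not in used)
    groups = defaultdict(list)
    for n, k in color.items():
        groups[k].append(n)
    return [groups[k] for k in sorted(groups)]

# Example: four neurons under two 1-out-of-2 constraints {0, 1} and
# {2, 3}; neurons 0 and 2 share no constraint, so they form a group.
print(parallel_groups(4, [{0, 1}, {2, 3}]))  # [[0, 2], [1, 3]]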
The parallelization of neuron evaluations leads to a significant improvement of the convergence time. We have seen that, on the task scheduling problem, the speedup depends on the number of tasks; for a scheduling problem with 20 tasks, the speedup is about 25. Contrary to other works on the parallel evaluation of a HNN, our method preserves the convergence property, which simplifies the implementation of a HNN.