Authors: Antoine Eiche; Daniel Chillet; Sebastien Pillement and Olivier Sentieys
Affiliation: University of Rennes I, IRISA and INRIA, France
Keyword(s): Hopfield neural networks, Parallelization, Stability, Optimization problems.
Related Ontology Subjects/Areas/Topics: Artificial Intelligence; Artificial Intelligence and Decision Support Systems; Biomedical Engineering; Biomedical Signal Processing; Computational Intelligence; Enterprise Information Systems; Health Engineering and Technology Applications; Human-Computer Interaction; Methodologies and Methods; Neural Network Software and Applications; Neural Networks; Neurocomputing; Neurotechnology, Electronics and Informatics; Pattern Recognition; Physiological Computing Systems; Sensor Networks; Signal Processing; Soft Computing; Stability and Instability in Artificial Neural Networks; Theory and Methods
Abstract:
Among the many possible optimization algorithms, Hopfield Neural Networks (HNN) offer interesting characteristics for online use: this particular optimization algorithm can produce solutions within a short time. These solutions are produced by the HNN convergence process, which was originally defined for a sequential evaluation of the neurons. Since this sequential evaluation leads to long convergence times, we assume that convergence can be accelerated through the parallel evaluation of neurons. However, the original constraints no longer ensure the convergence of an HNN evaluated in parallel. This article shows how neurons can be evaluated in parallel in order to accelerate a hardware or multiprocessor implementation while still ensuring convergence. The parallelization method is illustrated on a simple task scheduling problem, where we obtain a significant acceleration that grows with the number of tasks. For instance, with 20 tasks the speedup factor is about 25.
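As background for the sequential evaluation mode the abstract refers to, the following minimal sketch (not the authors' implementation; the weights, thresholds, and problem instance are illustrative assumptions) shows a discrete Hopfield network updated one neuron at a time. With symmetric weights and a zero diagonal, each such update never increases the network energy, which is the classical argument for convergence that the paper revisits under parallel evaluation.

```python
import numpy as np

def energy(W, theta, x):
    """Hopfield energy E = -1/2 x^T W x + theta^T x for binary states x."""
    return -0.5 * x @ W @ x + theta @ x

def sequential_converge(W, theta, x, max_sweeps=100):
    """Evaluate neurons one at a time until no neuron changes state."""
    x = x.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(x)):
            # Threshold activation: fire iff the local field exceeds theta_i.
            new_xi = 1.0 if W[i] @ x - theta[i] > 0 else 0.0
            if new_xi != x[i]:
                x[i] = new_xi
                changed = True
        if not changed:  # fixed point reached: a stable (converged) state
            break
    return x

rng = np.random.default_rng(0)
n = 8
A = rng.normal(size=(n, n))
W = (A + A.T) / 2            # symmetric weights (classical HNN condition)
np.fill_diagonal(W, 0.0)     # no self-connections
theta = rng.normal(size=n)

x0 = rng.integers(0, 2, size=n).astype(float)
x_star = sequential_converge(W, theta, x0)

# Each sequential flip strictly lowers (or keeps) the energy, so the
# fixed point's energy is at most the initial energy.
assert energy(W, theta, x_star) <= energy(W, theta, x0) + 1e-12
```

Under parallel (synchronous) evaluation this monotonicity argument breaks down, since several neurons flip against the same stale state, which is precisely why the parallel scheme needs the additional constraints the paper develops.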