but it is not possible to create new neurons or delete
existing ones. There is, however, the possibility of
deactivating a neuron by assigning zero values to its
input and output synapses. With some external
process of neuron activation/deactivation, it would
be straightforward to insert/delete the proper triples
at L_Ψ. More details can be found in (Neto, 2006),
including how to execute this network
representation.
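As a rough illustration of this deactivation idea, the following sketch (hypothetical names and encoding, not the paper's implementation) silences a neuron in a triple-based description by zeroing every synapse that enters or leaves it.

# A minimal sketch, assuming each triple (source, target, weight)
# stands for one synaptic connection; zeroing every synapse that
# enters or leaves a neuron effectively removes it from the computation.

def deactivate(triples, neuron):
    """Return a copy of the triple list with `neuron` silenced."""
    return [
        (src, dst, 0.0) if neuron in (src, dst) else (src, dst, w)
        for (src, dst, w) in triples
    ]

net = [("x1", "h1", 0.7), ("h1", "y1", -1.2), ("x2", "h2", 0.4)]
print(deactivate(net, "h1"))
# [('x1', 'h1', 0.0), ('h1', 'y1', 0.0), ('x2', 'h2', 0.4)]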
There are ample possibilities for optimization.
The network modules are not all active at once.
Except for highly parallel algorithms (where the
parallelization was conceived and designed by the
programmer and is, therefore, not that important at
this stage), only a small number of modules is
active at any given moment. So, many triples (those
from the inactive modules) are not used and should
not enter the next computation step. How can we
easily deduce which triples should be calculated?
Here, the IN/OUT synchronization mechanism is
again helpful. Since a module M is only activated
after its input neuron receives an activation signal
(i.e., the previous synapse receives a 1), we should
keep the triples of those input synapses – let us
denote them input triples – as guards of the set of
triples representing the remaining module structure.
Every time an input triple is activated, the system
loads the entire triple structure of that module
(notice that this may or may not include the inner
sub-modules, depending on the number of triples
these sub-modules of arbitrary complexity may
represent) and computes it along with all the other
active triples. When an active module ends its
computation, the output triple (representing the
synapse that transfers the output signal to the input
neuron of the next module) is activated and the
system has enough information to remove the
module structure from the pool of active triples.
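The following sketch outlines this guard mechanism under simplifying assumptions; Module, update_active_pool and the triple encoding are illustrative names, not the paper's implementation.

# A minimal sketch of the guard mechanism just described: only the
# input triples of the modules stay permanently in the pool; when one
# of them carries an activation signal (a 1) the whole module body is
# loaded, and when the module's output triple fires the body is removed
# again. Triples are plain (source, target, weight) tuples.

class Module:
    def __init__(self, name, input_triple, output_triple, body):
        self.name = name
        self.input_triple = input_triple    # guard: synapse into the IN neuron
        self.output_triple = output_triple  # synapse leaving the OUT neuron
        self.body = set(body)               # remaining triples of the module

def update_active_pool(modules, active, signals):
    """One bookkeeping step over the pool of active triples.

    `active`  - set of triples currently taking part in the computation
                (always contains every input triple, acting as a guard).
    `signals` - maps a triple to the value it carries at this instant.
    """
    for m in modules:
        if signals.get(m.input_triple) == 1:     # module is being called
            active |= m.body | {m.output_triple}
        if signals.get(m.output_triple) == 1:    # module has finished
            active -= m.body | {m.output_triple}
    return active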
Using this mechanism, the number of triples in
execution depends only on the number of active
modules and not on the entire network structure. This
will speed up the execution of single modules and
make more efficient use of the available parallel
processing power.
6 CONCLUSIONS
Neural networks can be used to compute the
execution of symbolic algorithms. Since neural nets
are massively parallel models of computation, this
feature can be exploited in several ways to speed up
the calculation of modules and expressions that do
not have precedence over each other. We have shown
two possible uses at this level: expression
parallelization and parallel blocks.
Also, since neural nets can be decomposed into
triples (each representing a synaptic connection), it
is also possible to speed up computation by allocating
sets of synaptic triples to different CPUs to
calculate the next computing state.
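As a rough illustration of this last point, the sketch below (hypothetical names, not the paper's implementation; a saturated-linear activation is assumed, as is usual in this family of analog networks) splits the synaptic triples over several worker processes and merges their partial contributions to obtain the next state.

# A minimal sketch of distributing synaptic triples over worker
# processes: each worker accumulates the contributions of its share of
# the synapses, the partial sums are merged, and a saturated-linear
# activation is applied to obtain the next state.

from concurrent.futures import ProcessPoolExecutor
from collections import defaultdict

def partial_update(args):
    triples, state = args
    acc = defaultdict(float)
    for src, dst, w in triples:                 # one chunk of synapses
        acc[dst] += w * state.get(src, 0.0)
    return dict(acc)

def next_state(triples, state, workers=4):
    chunks = [triples[i::workers] for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(partial_update, [(c, state) for c in chunks]))
    merged = defaultdict(float)
    for p in partials:
        for neuron, value in p.items():
            merged[neuron] += value
    # saturated-linear activation: clamp each value to [0, 1]
    return {n: min(1.0, max(0.0, v)) for n, v in merged.items()}

if __name__ == "__main__":
    triples = [("x1", "y1", 0.5), ("x2", "y1", 0.5), ("x1", "y2", 1.0)]
    print(next_state(triples, {"x1": 1.0, "x2": 1.0}, workers=2))
    # {'y1': 1.0, 'y2': 1.0}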
ACKNOWLEDGEMENTS
This work was supported by LabMAg (Laboratório
de Modelação de Agentes) and FCT (Fundação para
a Ciência e Tecnologia).
REFERENCES
Carnell, A., Richardson, D., 2007. Parallel computation in
spiking neural nets, Theoretical Computer Science,
386(1-2), Elsevier, 57–72.
Gruau, F., Ratajszczak, J., Wibe, J., 1995. A neural
compiler, Theoretical Computer Science, 141, 1–52.
Herz, A., Gollisch, T., Machens, C., Jaeger, D., 2006.
Modelling Single-Neuron Dynamics and Computations:
A Balance of Detail and Abstraction, Science, 314,
80–85.
McCulloch, W., Pitts, W., 1943. A logical calculus of the
ideas immanent in nervous activity, Bulletin of
Mathematical Biophysics, 5, 115–133.
Neto, J., Siegelmann, H., and Costa, J., 1998. On the
Implementation of Programming Languages with
Neural Nets, First International Conference on
Computing Anticipatory Systems, 1, 201–208.
Neto, J., Costa, J., and Siegelmann, H., 2003. Symbolic
Processing in Neural Networks, Journal of the Brazilian
Computer Society, 8(3), 58–70.
Neto, J., 2006. A Virtual Machine for Neural Computers,
16th International Conference on Artificial Neural
Networks, in S. Kollias et al. (eds.), Lecture Notes in
Computer Science 4131, Springer-Verlag, 525–534.
Siegelmann, H. and Sontag, E., 1994. Analog
Computation via Neural Networks, Theoretical
Computer Science, 131, Elsevier, 331–360.
Siegelmann, H., 1999. Neural Networks and Analog
Computation: Beyond the Turing Limit, Birkhäuser.