INTERACTIVE EVOLVING RECURRENT NEURAL NETWORKS ARE SUPER-TURING

Jérémie Cabessa
Department of Information Systems, University of Lausanne, CH-1015 Lausanne, Switzerland
Department of Computer Science, University of Massachusetts Amherst, Amherst, MA 01003, U.S.A.
Keywords: Recurrent neural networks, Turing machines, Reactive systems, Evolving systems, Interactive computation, Neural computation, Super-Turing.
Abstract: We consider a model of evolving recurrent neural networks where the synaptic weights can change over time, and we study the computational power of such networks in a basic context of interactive computation. In this framework, we prove that both models of rational- and real-weighted interactive evolving neural networks are computationally equivalent to interactive Turing machines with advice, and hence capable of super-Turing computation. These results support the idea that some intrinsic feature of biological intelligence might be beyond the scope of the current state of artificial intelligence, and that the concept of evolution might be strongly involved in the computational capabilities of biological neural networks. They also show that the computational power of interactive evolving neural networks is by no means influenced by the nature of their synaptic weights.
1 INTRODUCTION
Understanding the intrinsic nature of biological intelligence is an issue of central importance. In this context, much interest has been focused on comparing the computational capabilities of diverse theoretical neural models and abstract computing devices (McCulloch and Pitts, 1943; Kleene, 1956; Minsky, 1967; Siegelmann and Sontag, 1994; Siegelmann and Sontag, 1995; Siegelmann, 1999). As a consequence, the computational power of neural networks has been shown to be intimately related to the nature of their synaptic weights and activation functions, and hence capable of ranging from finite-state automata up to super-Turing capabilities.
However, in this global line of thinking, the neural models which have been considered fail to capture some essential biological features that are significantly involved in the processing of information in the brain. In particular, the plasticity of biological neural networks as well as the interactive nature of information processing in bio-inspired complex systems have not been taken into consideration.
The present paper falls within this perspective and extends the works by Cabessa and Siegelmann concerning the computational power of evolving or interactive neural networks (Cabessa and Siegelmann, 2011b; Cabessa and Siegelmann, 2011a). More precisely, we consider here a model of evolving recurrent neural networks where the synaptic strengths of the neurons can change over time rather than staying static, and we study the computational capabilities of such networks in a basic context of interactive computation, in line with the framework proposed by van Leeuwen and Wiedermann (van Leeuwen and Wiedermann, 2001a; van Leeuwen and Wiedermann, 2008). In this context, we prove that rational- and real-weighted interactive evolving recurrent neural networks are both computationally equivalent to interactive Turing machines with advice, and thus capable of super-Turing capabilities. These results support the idea that some intrinsic feature of biological intelligence might be beyond the scope of the current state of artificial intelligence, and that the concept of evolution might be strongly involved in the computational capabilities of biological neural networks. They also show that the nature of the synaptic weights has no influence on the computational power of interactive evolving neural networks.
2 PRELIMINARIES
Before entering into further considerations, the following definitions and notations need to be introduced. Given the binary alphabet {0,1}, we let {0,1}*, {0,1}+, {0,1}^n, and {0,1}^ω denote respectively the sets of finite words, non-empty finite words, finite words of length n, and infinite words, all of them over the alphabet {0,1}. We also let {0,1}^{≤ω} = {0,1}* ∪ {0,1}^ω be the set of all possible words (finite or infinite) over {0,1}.

Any function ϕ : {0,1}^ω → {0,1}^{≤ω} will be referred to as an ω-translation.

Besides, for any x ∈ {0,1}^{≤ω}, the length of x is denoted by |x| and corresponds to the number of letters contained in x. If x is non-empty, we let x(i) denote the (i+1)-th letter of x, for any 0 ≤ i < |x|. Hence, x can be written as x = x(0)x(1)···x(|x|−1) if it is finite, and as x = x(0)x(1)x(2)··· otherwise. Moreover, the concatenation of x and y is written x·y, or sometimes simply xy. The empty word is denoted λ.

Cabessa, J. INTERACTIVE EVOLVING RECURRENT NEURAL NETWORKS ARE SUPER-TURING. DOI: 10.5220/0003740603280333. In Proceedings of the 4th International Conference on Agents and Artificial Intelligence (ICAART 2012), pages 328-333. ISBN: 978-989-8425-95-9. Copyright © 2012 SCITEPRESS (Science and Technology Publications, Lda.)
3 INTERACTIVE COMPUTATION
3.1 The Interactive Paradigm
Interactive computation refers to the computational framework where systems may react or interact with each other as well as with their environment during the computation (Goldin et al., 2006). This paradigm was theorized in contrast to classical computation, which rather proceeds in a closed-box fashion and which, it was argued, "no longer fully corresponds to the current notions of computing in modern systems" (van Leeuwen and Wiedermann, 2008). Interactive computation also provides a particularly appropriate framework for the consideration of natural and bio-inspired complex information processing systems (van Leeuwen and Wiedermann, 2001a; van Leeuwen and Wiedermann, 2008).

The general interactive computational paradigm consists of a step-by-step exchange of information between a system and its environment. In order to capture the unpredictability of next inputs at any time step, the dynamically generated input streams need to be modeled by potentially infinite sequences of symbols (the case of finite sequences of symbols would necessarily reduce to the classical computational framework) (Wegner, 1998; van Leeuwen and Wiedermann, 2008).
Throughout this paper, we consider a basic interactive computational scenario where, at every time step, the environment sends a non-empty input bit to the system (full environment activity condition), the system next updates its current state accordingly, and then either produces a corresponding output bit, or remains silent for a while to express the need of some internal computational phase before outputting a new bit, or remains silent forever to express the fact that it has died. Consequently, after infinitely many time steps, the system will have received an infinite sequence of consecutive input bits and translated it into a corresponding finite or infinite sequence of not necessarily consecutive output bits. Accordingly, any interactive system S realizes an ω-translation ϕ_S : {0,1}^ω → {0,1}^{≤ω}.
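The scenario above can be sketched in a few lines of Python (a toy illustration, not part of the paper's formalism): the system is modeled as a function that consumes one input bit per step and returns either an output bit or None, which plays the role of the silent symbol λ.

```python
from typing import Callable, Iterable, Iterator, Optional

Bit = int  # 0 or 1; None below plays the role of the silent symbol lambda

def run_interaction(system: Callable[[Bit], Optional[Bit]],
                    input_stream: Iterable[Bit]) -> Iterator[Bit]:
    """Feed the input stream bit by bit and yield the subsequence of
    non-silent output bits, i.e. the realized omega-translation."""
    for bit in input_stream:
        out = system(bit)
        if out is not None:         # the system may stay silent for a while
            yield out

def make_echo_every_other() -> Callable[[Bit], Optional[Bit]]:
    """A toy system: outputs the current bit at even steps, silent otherwise."""
    state = {"t": 0}
    def system(bit: Bit) -> Optional[Bit]:
        state["t"] += 1
        return bit if state["t"] % 2 == 0 else None
    return system

print(list(run_interaction(make_echo_every_other(), [1, 0, 1, 1, 0, 0])))  # [0, 1, 0]
```

On an infinite input stream the same generator would lazily produce the finite or infinite output sequence described above.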
3.2 Interactive Turing Machines
An interactive Turing machine (I-TM) M consists of a classical Turing machine yet provided with input and output ports rather than tapes, in order to process the interactive sequential exchange of information between the device and its environment (van Leeuwen and Wiedermann, 2001a). According to our interactive scenario, it is assumed that at every time step, the environment sends a non-silent input bit to the machine, and the machine answers by either producing a corresponding output bit or rather remaining silent (expressed by the fact of outputting the λ symbol).

According to this definition, for any infinite input stream s ∈ {0,1}^ω, we define the corresponding output stream o_s ∈ {0,1}^{≤ω} of M as the finite or infinite subsequence of (non-λ) output bits produced by M after having processed input s. In this manner, any machine M naturally induces an ω-translation ϕ_M : {0,1}^ω → {0,1}^{≤ω} defined by ϕ_M(s) = o_s, for each s ∈ {0,1}^ω. Finally, an ω-translation ψ : {0,1}^ω → {0,1}^{≤ω} is said to be realizable by some interactive Turing machine iff there exists some I-TM M such that ϕ_M = ψ.
Besides, an interactive Turing machine with advice (I-TM/A) M consists of an interactive Turing machine provided with an advice mechanism (van Leeuwen and Wiedermann, 2001a). The mechanism comes in the form of an advice function α : N → {0,1}*. Moreover, the machine M uses two auxiliary special tapes, an advice input tape and an advice output tape, as well as a designated advice state. During its computation, M can write the binary representation of an integer m on its advice input tape, one bit at a time. Yet at time step n, the number m is not allowed to exceed n. Then, at any chosen time, the machine can enter its designated advice state and have the finite string α(m) be written on the advice output tape in one time step, replacing the previous content of the tape. The machine can repeat this extra-recursive calling process as many times as it wants during its infinite computation.
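The advice mechanism can be illustrated by a toy sketch (the class and names below are illustrative, not the paper's formalism): the only non-trivial constraint is that at time step n the machine may query α(m) for m ≤ n only.

```python
from typing import Callable

class AdviceOracle:
    """Toy model of the advice calls of an I-TM/A: at time step n,
    only alpha(m) with m <= n may be queried."""
    def __init__(self, alpha: Callable[[int], str]):
        self.alpha = alpha          # advice function alpha : N -> {0,1}*
        self.clock = 0              # current time step n

    def tick(self) -> None:
        self.clock += 1             # one computation step elapses

    def query(self, m: int) -> str:
        if m > self.clock:          # the queried index may not exceed n
            raise ValueError("advice index m must satisfy m <= n")
        return self.alpha(m)        # alpha(m) appears on the advice output tape

oracle = AdviceOracle(lambda m: bin(m)[2:])   # toy advice: binary writing of m
for _ in range(5):
    oracle.tick()
print(oracle.query(5))   # prints '101'
```

A non-recursive advice function α is precisely what lifts such a machine beyond classical Turing power.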
Once again, according to our interactive scenario, any I-TM/A M induces an ω-translation ϕ_M : {0,1}^ω → {0,1}^{≤ω} which maps every infinite input stream s to the corresponding finite or infinite output stream o_s produced by M. Finally, an ω-translation ψ : {0,1}^ω → {0,1}^{≤ω} is said to be realizable by some interactive Turing machine with advice iff there exists some I-TM/A M such that ϕ_M = ψ.
4 INTERACTIVE EVOLVING RECURRENT NEURAL NETWORKS
We now consider a natural extension to the present interactive framework of the model of evolving recurrent neural network described by Cabessa and Siegelmann in (Cabessa and Siegelmann, 2011b).

An evolving recurrent neural network (Ev-RNN) consists of a synchronous network of neurons (or processors) related together in a general architecture, not necessarily loop-free or symmetric. The network contains a finite number of neurons x_1, ..., x_N, as well as M parallel input lines carrying the input stream transmitted by the environment into M of the N neurons, and P designated output neurons among the N whose role is to communicate the output of the network to the environment. Furthermore, the synaptic connections between the neurons are assumed to be time-dependent rather than static. At each time step, the activation value of every neuron is updated by applying a linear-sigmoid function to some weighted affine combination of the values of the other neurons or inputs at the previous time step.
Formally, given the activation values of the internal and input neurons x_j (j = 1, ..., N) and u_j (j = 1, ..., M) at time t, the activation value of each neuron x_i at time t+1 is updated by the following equation:

x_i(t+1) = σ( Σ_{j=1}^{N} a_ij(t)·x_j(t) + Σ_{j=1}^{M} b_ij(t)·u_j(t) + c_i(t) )    (1)

for i = 1, ..., N, where all a_ij(t), b_ij(t), and c_i(t) are time-dependent values describing the evolving weighted synaptic connections and weighted bias of the network, and σ is the classical saturated-linear activation function defined by σ(x) = 0 if x < 0, σ(x) = x if 0 ≤ x ≤ 1, and σ(x) = 1 if x > 1.
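Equation (1) translates directly into code. The following sketch (assuming NumPy is available; the weight matrices a(t), b(t) and bias c(t) are supplied externally as time-indexed arrays) performs one synchronous update:

```python
import numpy as np

def sigma(x):
    """Saturated-linear activation, applied componentwise."""
    return np.clip(x, 0.0, 1.0)

def step(x, u, a_t, b_t, c_t):
    """One synchronous update of all N neurons via Equation (1).
    x: activations at time t (shape N), u: inputs at time t (shape M),
    a_t: N x N weight matrix, b_t: N x M weight matrix, c_t: bias (shape N)."""
    return sigma(a_t @ x + b_t @ u + c_t)

# Tiny example with N = 2 neurons and M = 1 input line (toy weight values).
x = np.zeros(2)                              # initial activations x_i(0) = 0
a_t = np.array([[0.5, 0.0], [1.0, 0.5]])     # evolving weights at this time step
b_t = np.array([[1.0], [0.0]])
c_t = np.array([0.0, -0.2])
x = step(x, np.array([1.0]), a_t, b_t, c_t)  # first neuron saturates at 1,
                                             # second is clipped to 0 by sigma
```

In an evolving network, a_t, b_t, and c_t simply change from one call of `step` to the next.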
In order to stay consistent with our interactive scenario, we need to define the notion of an interactive evolving recurrent neural network (I-Ev-RNN), which adheres to a rigid encoding of the way input and output are interactively processed between the environment and the network.

First of all, we assume that any I-Ev-RNN is provided with a single binary input line u whose role is to transmit to the network the infinite input stream of bits sent by the environment. We also suppose that any I-Ev-RNN is equipped with two binary output lines, a data line y_d and a validation line y_v. The role of the data line is to carry the output stream of the network, while the role of the validation line is to describe when the data line is active and when it is silent. Accordingly, the output stream transmitted by the network to the environment will be defined as the (finite or infinite) subsequence of successive data bits that occur simultaneously with positive validation bits.
Hence, if N is an I-Ev-RNN with initial activation values x_i(0) = 0 for i = 1, ..., N, then any infinite input stream

s = s(0)s(1)s(2)··· ∈ {0,1}^ω

transmitted to input line u induces via Equation (1) a corresponding pair of infinite streams

(y_d(0)y_d(1)y_d(2)···, y_v(0)y_v(1)y_v(2)···) ∈ {0,1}^ω × {0,1}^ω.

The output stream of N according to input s is then given by the finite or infinite subsequence o_s of successive data bits that occur simultaneously with positive validation bits, namely

o_s = ⟨ y_d(i) : i ∈ N and y_v(i) = 1 ⟩ ∈ {0,1}^{≤ω}.
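In code, reading the output stream off the two lines amounts to filtering the data line by the validation line (a minimal sketch with toy values; the generator works lazily, so the two lines may be infinite):

```python
def output_stream(y_d, y_v):
    """Yield o_s: the data bits y_d(i) whose validation bit y_v(i) equals 1.
    y_d and y_v may be any (possibly infinite) iterables of bits."""
    for d, v in zip(y_d, y_v):
        if v == 1:
            yield d

# Finite prefixes of the two output lines (toy values).
y_d = [1, 0, 1, 1, 0]
y_v = [0, 1, 1, 0, 1]
print(list(output_stream(y_d, y_v)))   # [0, 1, 0]
```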
It follows that any I-Ev-RNN N naturally induces an ω-translation ϕ_N : {0,1}^ω → {0,1}^{≤ω} defined by ϕ_N(s) = o_s, for each s ∈ {0,1}^ω. An ω-translation ψ : {0,1}^ω → {0,1}^{≤ω} is said to be realizable by some I-Ev-RNN iff there exists some I-Ev-RNN N such that ϕ_N = ψ.
Finally, throughout this paper, two models of interactive evolving recurrent neural networks are considered, according to whether their underlying synaptic weights are confined to the class of rational or real numbers. A rational interactive evolving recurrent neural network (I-Ev-RNN[Q]) denotes an I-Ev-RNN all of whose synaptic weights are rational numbers, and a real interactive evolving recurrent neural network (I-Ev-RNN[R]) stands for an I-Ev-RNN all of whose synaptic weights are real numbers. Note that since the rational numbers are included in the real numbers, every I-Ev-RNN[Q] is also a particular I-Ev-RNN[R] by definition.
5 THE COMPUTATIONAL POWER OF INTERACTIVE EVOLVING RECURRENT NEURAL NETWORKS
In this section, we prove that interactive evolving recurrent neural networks are computationally equivalent to interactive Turing machines with advice, irrespective of whether their synaptic weights are rational or real. It directly follows that interactive evolving neural networks are indeed capable of super-Turing computational capabilities.

Towards this purpose, we first show that the two models of rational- and real-weighted neural networks under consideration are indeed computationally equivalent.
Proposition 1. I-Ev-RNN[Q]s and I-Ev-RNN[R]s
are computationally equivalent.
Proof. First of all, recall that every I-Ev-RNN[Q] is also an I-Ev-RNN[R] by definition. Hence, any ω-translation ϕ : {0,1}^ω → {0,1}^{≤ω} realizable by some I-Ev-RNN[Q] N is also realizable by some I-Ev-RNN[R], namely N itself.

Conversely, let N be some I-Ev-RNN[R]. We prove the existence of an I-Ev-RNN[Q] N′ which realizes the same ω-translation as N. The idea is to encode all possible intermediate output values of N into some evolving synaptic weight of N′, and to make N′ decode and output these successive values in order to answer precisely like N on every possible input stream.
More precisely, for every finite word x ∈ {0,1}+, let N(x) ∈ {0,1,2} denote the encoding of the output answer of N on input x at precise time step t = |x|, where N(x) = 0, N(x) = 1, and N(x) = 2 respectively mean that N has answered λ, 0, and 1 on input x at time step t = |x|. Moreover, for any n > 0, let x_{n,1}, ..., x_{n,2^n} be the lexicographical enumeration of the words of {0,1}^n, and let w_n ∈ {0,1,2,3}+ be the finite word given by w_n = 3·N(x_{n,1})·3·N(x_{n,2})·3···3·N(x_{n,2^n})·3. Then, consider the rational encoding q_n of the word w_n given by

q_n = Σ_{i=1}^{|w_n|} (2·w_n(i) + 1) / 8^i .
It follows that q_n ∈ ]0,1[ for all n > 0, and that q_n ≠ q_{n+1}, since w_n ≠ w_{n+1} for all n > 0. This encoding provides a corresponding decoding procedure which is recursive (Siegelmann and Sontag, 1994; Siegelmann and Sontag, 1995). Hence, every finite word w_n can be decoded from the value q_n by some Turing machine, or equivalently, by some rational recurrent neural network. This feature is important for our purpose.
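The encoding and its recursive decoding can be sketched as follows (a toy Python illustration using exact rationals; since every base-8 digit 2·w(i)+1 is odd and at most 7, no carries occur and the word is recovered digit by digit):

```python
from fractions import Fraction

# Toy illustration of the proof's encoding: a word w over {0,1,2,3} is mapped
# to q = sum_i (2*w(i) + 1) / 8**(i+1) (0-based indices here), and decoded
# back by repeatedly multiplying by 8 and reading off one odd base-8 digit.
def encode(w):
    return sum(Fraction(2 * d + 1, 8 ** (i + 1)) for i, d in enumerate(w))

def decode(q):
    w = []
    while q != 0:
        q *= 8
        digit = int(q)              # odd base-8 digit 2*d + 1, no carries
        w.append((digit - 1) // 2)
        q -= digit
    return w

# A word of the form w_n = 3 · N(x_{n,1}) · 3 · ... · 3, here with toy values.
w = [3, 0, 3, 1, 3, 2, 3]
q = encode(w)
assert 0 < q < 1 and decode(q) == w
```

The distinct-digit trick (only odd base-8 digits appear) is exactly what makes the decoding recursive and hence realizable by a rational recurrent network.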
Now, the I-Ev-RNN[Q] N′ consists of one evolving and one non-evolving rational-weighted sub-network connected together in a specific manner. More precisely, the evolving rational-weighted part of N′ is made up of a single designated processor x_e receiving a background activity of evolving intensity c_e(t). The synaptic weight c_e(t) successively takes the rational bounded values q_1, q_2, q_3, ..., by switching from value q_k to q_{k+1} after t_k time steps, for some t_k large enough to satisfy the conditions of the procedure described below. The non-evolving rational-weighted part of N′ is designed and connected to the neuron x_e in such a way as to perform the following recursive procedure: for any infinite input stream s ∈ {0,1}^ω provided bit by bit, the sub-network stores in its memory the successive incoming bits s(0), s(1), ... of s, and simultaneously, for each successive t > 0, the sub-network first waits for the synaptic weight q_t to occur as a background activity of neuron x_e, decodes the output value N(s(0)s(1)···s(t−1)) from q_t, outputs it, and then continues the same routine with respect to the next step t+1. Note that the equivalence between Turing machines and rational-weighted recurrent neural networks ensures that the above recursive procedure can indeed be performed by some non-evolving rational-weighted recurrent neural sub-network (Siegelmann and Sontag, 1995).

In this way, the infinite sequences of successive non-empty output bits provided by the networks N and N′ are the very same, so that N and N′ indeed realize the same ω-translation.
We now prove that rational-weighted interactive evolving neural networks are computationally equivalent to interactive Turing machines with advice.

Proposition 2. I-Ev-RNN[Q]s and I-TM/As are computationally equivalent.
Proof. First of all, let N be some I-Ev-RNN[Q]. We give the description of an I-TM/A M which realizes the same ω-translation as N. Towards this purpose, for each t > 0, let N(t) be the description of the synaptic weights of network N at time t. Since all synaptic weights of N are rational, the whole synaptic description N(t) can be encoded by some finite word α(t) ∈ {0,1}+ (every rational number can be encoded by some finite word of bits, and hence so can every finite sequence of rational numbers).

Now, consider the I-TM/A M whose advice function is precisely α, and which, thanks to the advice α, provides a step-by-step simulation of the behavior of N in order to eventually produce the very same output stream as N. More precisely, on every infinite input stream s ∈ {0,1}^ω, the machine M stores in its memory the successive incoming bits s(0), s(1), ... of s, and simultaneously, for each successive t ≥ 0, it retrieves the activation values x(t) of N at time t from its memory, calls its advice α(t) in order to retrieve the synaptic description N(t), uses this information in order to compute via Equation (1) the activation and output values x(t+1), y_d(t+1), and y_v(t+1) of N at next time step t+1, provides the corresponding output encoded by y_d(t+1) and y_v(t+1), and finally stores the activation values x(t+1) of N in order to be able to repeat the same routine with respect to the next step t+1.

In this way, the infinite sequences of successive non-empty output bits provided by the network N and the machine M are the very same, so that N and M indeed realize the same ω-translation.
Conversely, let M be some I-TM/A with advice function α. We build an I-Ev-RNN[Q] N which realizes the same ω-translation as M. The idea is to encode the successive advice values α(0), α(1), α(2), ... of M into some evolving rational synaptic weight of N, and to store them in the memory of N in order to be capable of simulating with N every recursive and extra-recursive computational step of M.

More precisely, for each n ≥ 0, let w_{α(n)} ∈ {0,1,2}+ be the finite word given by w_{α(n)} = 2·α(0)·2·α(1)·2···2·α(n)·2, and let q_{α(n)} be the rational encoding of the word w_{α(n)} given by

q_{α(n)} = Σ_{i=1}^{|w_{α(n)}|} (2·w_{α(n)}(i) + 1) / 6^i .
Note that q_{α(n)} ∈ ]0,1[ for all n > 0, and that q_{α(n)} ≠ q_{α(n+1)}, since w_{α(n)} ≠ w_{α(n+1)} for all n > 0. Moreover, it can be shown that the finite word w_{α(n)} can be decoded from the value q_{α(n)} by some Turing machine, or equivalently, by some rational recurrent neural network (Siegelmann and Sontag, 1994; Siegelmann and Sontag, 1995).
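The same kind of sketch, now in base 6, illustrates why q_{α(t)} suffices for every extra-recursive call with m ≤ t: decoding it recovers the whole list α(0), ..., α(t) at once (toy Python with an illustrative advice function; not the paper's network construction):

```python
from fractions import Fraction

def encode6(w):
    # q_{alpha(n)} = sum_i (2*w(i) + 1) / 6**(i+1), digits of w in {0,1,2}
    return sum(Fraction(2 * d + 1, 6 ** (i + 1)) for i, d in enumerate(w))

def decode6(q):
    w = []
    while q != 0:
        q *= 6
        digit = int(q)              # odd base-6 digit 2*d + 1, no carries
        w.append((digit - 1) // 2)
        q -= digit
    return w

def advice_word(alpha_values):
    """Build w_{alpha(n)} = 2 · alpha(0) · 2 · alpha(1) · 2 ··· 2 · alpha(n) · 2."""
    w = [2]
    for a in alpha_values:          # each alpha(m) is a finite word of bits
        w += list(a) + [2]
    return w

alpha = [[1, 0], [1], [0, 0, 1]]    # toy values alpha(0), alpha(1), alpha(2)
decoded = decode6(encode6(advice_word(alpha)))

# Splitting the decoded word on the separator 2 recovers every alpha(m), m <= 2.
parts, cur = [], []
for d in decoded:
    if d == 2:
        parts.append(cur)
        cur = []
    else:
        cur.append(d)
assert parts[1:] == alpha           # parts[0] is the empty chunk before the leading 2
```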
Now, the I-Ev-RNN[Q] N consists of one evolving and one non-evolving rational-weighted sub-network connected together. More precisely, the evolving rational-weighted part of N is made up of a single designated processor x_e receiving a background activity of evolving intensity c_e(0) = q_{α(0)}, c_e(1) = q_{α(1)}, c_e(2) = q_{α(2)}, .... The non-evolving rational-weighted part of N is designed and connected to x_e in order to simulate the behavior of M as follows: every recursive computational step of M is simulated by N in the classical way (Siegelmann and Sontag, 1995); moreover, every time M proceeds to some extra-recursive call to some value α(m), the network stores the current synaptic weight q_{α(t)} in its memory, retrieves the string α(m) from the rational value q_{α(t)} (which is possible since one necessarily has m ≤ t, as N cannot proceed faster than M by construction), and then pursues the simulation of the next recursive step of M in the classical way.

In this manner, the infinite sequences of successive non-empty output bits provided by the machine M and the network N are the very same on every possible infinite input stream, so that M and N indeed realize the same ω-translation.
Propositions 1 and 2 directly imply the equivalence between interactive evolving recurrent neural networks and interactive Turing machines with advice. Since interactive Turing machines with advice are strictly more powerful than their classical counterparts (van Leeuwen and Wiedermann, 2001a; van Leeuwen and Wiedermann, 2001b), it follows that interactive evolving networks are capable of a super-Turing computational power, irrespective of whether their underlying synaptic weights are rational or real.

Theorem 1. I-Ev-RNN[Q]s, I-Ev-RNN[R]s, and I-TM/As are equivalent super-Turing models of computation.
6 DISCUSSION
The present paper provides a characterization of the computational power of evolving recurrent neural networks in a basic context of interactive and active-memory computation. It is shown that interactive evolving neural networks are computationally equivalent to interactive Turing machines with advice, irrespective of whether their underlying synaptic weights are rational or real. Consequently, the model of interactive evolving neural networks under consideration is potentially capable of super-Turing computational capabilities.
These results provide a proper generalization to the interactive context of the super-Turing and equivalent capabilities of rational- and real-weighted evolving neural networks established in the case of classical computation (Cabessa and Siegelmann, 2011b).
In order to provide a deeper understanding of the present contribution, the results concerning the computational power of interactive static recurrent neural networks need to be recalled. In the static case, rational- and real-weighted interactive neural networks (denoted by I-St-RNN[Q]s and I-St-RNN[R]s, respectively) were proven to be computationally equivalent to interactive Turing machines and interactive Turing machines with advice, respectively (Cabessa and Siegelmann, 2011a). Consequently, I-Ev-RNN[Q]s, I-Ev-RNN[R]s, and I-St-RNN[R]s are all computationally equivalent to I-TM/As, whereas I-St-RNN[Q]s are equivalent to I-TMs.
Given such considerations, the case of rational-weighted interactive neural networks appears to be of specific interest. In this context, the translation from the static to the evolving framework really brings additional super-Turing computational power to the networks. However, it is worth noting that such super-Turing capabilities can only be achieved in cases where the evolving synaptic patterns are themselves non-recursive (i.e., non-Turing-computable), since the consideration of any kind of recursive evolution would necessarily restrain the corresponding networks to no more than Turing capabilities. Hence, according to this model, the existence of super-Turing potentialities of evolving neural networks depends on the possibility for "nature" to realize non-recursive patterns of synaptic evolution.
By contrast, in the case of real-weighted interactive neural networks, the translation from the static to the evolving framework does not bring any additional computational power to the networks. In other words, the computational capabilities brought up by the power of the continuum cannot be surpassed by incorporating further possibilities of synaptic evolution into the model.
To summarize, the possibility of synaptic evolution in a basic first-order interactive rate neural model provides an alternative, and equivalent, route to the consideration of analog synaptic weights towards the achievement of super-Turing computational capabilities of neural networks. Yet even if the concepts of evolution on the one hand and of the analog continuum on the other hand turn out to be mathematically equivalent in this sense, they are nevertheless conceptually quite distinct. Indeed, while the power of the continuum is a pure conceptualization of the mind, the synaptic plasticity of networks is something actually observable in nature.
The present work is envisioned to be extended in three main directions. Firstly, a deeper study of the issue from the perspective of computational complexity could be of interest. Indeed, the simulation of an I-Ev-RNN[R] N by some I-Ev-RNN[Q] N′ described in the proof of Proposition 1 is clearly not effective, in the sense that for any output move of N, the network N′ first needs to decode the word w_n, of size exponential in n, before being capable of providing the same output as N. In the proof of Proposition 2, the effectivity of the two simulations described depends on the complexity of the synaptic configurations N(t) of N as well as on the complexity of the advice function α(n) of M.
Secondly, we expect to consider more realistic neural models capable of capturing the biological mechanisms that are significantly involved in the computational and dynamical capabilities of neural networks, as well as in the processing of information in the brain in general. For instance, the consideration of biological features such as spike-timing-dependent plasticity, neural birth and death, apoptosis, and chaotic behaviors of neural networks could be of specific interest.
Thirdly, we envision considering more realistic paradigms of interactive computation, where the processes of interaction would be more elaborate and biologically oriented, involving not only the network and its environment, but also several distinct components of the network as well as different aspects of the environment.
Finally, we believe that the study of the computational power of neural networks from the perspective of theoretical computer science shall ultimately bring further insight towards a better understanding of the intrinsic nature of biological intelligence.
REFERENCES

Cabessa, J. and Siegelmann, H. T. (2011a). The computational power of interactive recurrent neural networks. Submitted to Neural Comput.
Cabessa, J. and Siegelmann, H. T. (2011b). Evolving recurrent neural networks are super-Turing. In International Joint Conference on Neural Networks, IJCNN 2011, pages 3200-3206. IEEE.
Goldin, D., Smolka, S. A., and Wegner, P. (2006). Interactive Computation: The New Paradigm. Springer-Verlag New York, Inc., Secaucus, NJ, USA.
Kleene, S. C. (1956). Representation of events in nerve nets and finite automata. In Automata Studies, volume 34 of Annals of Mathematics Studies. Princeton University Press, Princeton, NJ, USA.
McCulloch, W. S. and Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5:115-133.
Minsky, M. L. (1967). Computation: Finite and Infinite Machines. Prentice-Hall, Inc., Upper Saddle River, NJ, USA.
Siegelmann, H. T. (1999). Neural Networks and Analog Computation: Beyond the Turing Limit. Birkhauser Boston Inc., Cambridge, MA, USA.
Siegelmann, H. T. and Sontag, E. D. (1994). Analog computation via neural networks. Theor. Comput. Sci., 131(2):331-360.
Siegelmann, H. T. and Sontag, E. D. (1995). On the computational power of neural nets. J. Comput. Syst. Sci., 50(1):132-150.
van Leeuwen, J. and Wiedermann, J. (2001a). Beyond the Turing limit: Evolving interactive systems. In SOFSEM 2001: Theory and Practice of Informatics, volume 2234 of LNCS, pages 90-109. Springer, Berlin/Heidelberg.
van Leeuwen, J. and Wiedermann, J. (2001b). The Turing machine paradigm in contemporary computing. In Mathematics Unlimited - 2001 and Beyond, LNCS, pages 1139-1155. Springer-Verlag.
van Leeuwen, J. and Wiedermann, J. (2008). How we think of computing today. In Logic and Theory of Algorithms, volume 5028 of LNCS, pages 579-593. Springer, Berlin/Heidelberg.
Wegner, P. (1998). Interactive foundations of computing. Theor. Comput. Sci., 192:315-351.