Recurrent Neural Networks
A Natural Model of Computation beyond the Turing Limits
Jérémie Cabessa and Alessandro E. P. Villa
Department of Information Systems, University of Lausanne, CH-1015 Lausanne, Switzerland
Keywords:
Neural Computation, Analog Computation, Interactive Computation, Recurrent Neural Networks, Super-Turing.
Abstract:
According to the Church-Turing Thesis, the classical Turing machine model is capable of capturing all possible
aspects of algorithmic computation. However, in neural computation, several basic neural models were proven
to possess computational capabilities beyond the Turing limits. In this context, we present an
overview of recent results concerning the super-Turing computational power of recurrent neural networks,
and show that recurrent neural networks provide a suitable and natural model of computation beyond the
Turing limits. We nevertheless do not draw any hasty conclusion about the controversial issue of a possible
predominance of biological intelligence over the potentialities of artificial intelligence.
1 INTRODUCTION
Understanding the intrinsic nature of biological in-
telligence appears to be among the most challenging
issues, with considerable repercussions ranging from
theoretical and philosophical considerations to practi-
cal implications in the fields of artificial intelligence,
machine learning, bio-inspired computing, etc. In this
context, much interest has been focused on compar-
ing the computational capabilities of brain-like mod-
els and abstract machines.
This comparative approach was initiated by (McCulloch and Pitts, 1943), who proposed a modelization of the nervous system as a finite interconnection of logical devices. For the first time, neural networks were considered as discrete abstract machines, and the issue of their computational capabilities was investigated from the automata-theoretic perspective. These considerations were further pursued
by (Kleene, 1956), (Von Neumann, 1958), (Rosen-
blatt, 1957), (Minsky, 1967), and (Minsky and Papert,
1969) who opened up the way to the theoretical ap-
proach to neural computation.
Along these lines, it is nowadays well-known that,
depending on several aspects under consideration, the
computational power of diverse neural models may
notably range from finite state automata up to super-
Turing capabilities.
Here, we present an overview of recent results
concerning the super-Turing computational power of
recurrent neural networks, and show that recurrent
neural networks provide a natural model of compu-
tation beyond the Turing limits. More precisely, the specific super-Turing model of a "Turing machine with advice" seems particularly well suited to capturing the computational capabilities of basic brain-like models, since it accounts for biological mechanisms that cannot be apprehended by the classical Turing machine model yet are most significantly involved in neural information processing.
We nevertheless do not draw any hasty conclusion
about the controversial issue of a possible predomi-
nance of biological intelligence over the potentialities
of artificial intelligence. Still, we believe that such
an approach might in the long term improve the un-
derstanding of the intrinsic natures of both biological
and artificial intelligences.
2 CLASSICAL AND
INTERACTIVE COMPUTATION
The classical Turing paradigm of computation corre-
sponds to the computational scenario where a system
receives a finite input, processes this input, and either
provides a corresponding output or never halts.
In this framework, the concept of a Turing ma-
chine (TM) provides a relevant model of computa-
tion to understand the limits of mechanical computa-
tion (Turing, 1936). We briefly recall that a Turing machine consists of an infinite tape, a head that can read and write on this tape, and a finite program which, according to the current computational state of the machine and the current symbol read by the head, determines the next symbol to be written by the head on the tape, the next move of the head (left or right), and the next computational state of the machine. According to the Church-Turing Thesis, the Turing machine model actually captures all possible aspects of algorithmic computation.
The concept of a Turing machine with advice (TM/A) provides a model of computation beyond the Turing limits which plays an important role in the study of the computational power of recurrent neural networks. It consists of a classical Turing machine provided with an additional advice function $\alpha : \mathbb{N} \to \{0,1\}^+$ as well as an additional advice tape, such that, on every input u of length n, the machine first copies the advice word $\alpha(n)$ onto its advice tape and then continues its computation according to its finite Turing program. A Turing machine with polynomial-bounded advice (TM/poly(A)) consists of a TM/A whose advice length is bounded by some polynomial. Turing machines with (polynomial) advice are strictly more powerful than Turing machines.
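As an illustration, consider the following minimal Python sketch of this two-step scheme; the advice function alpha and the decision rule are hypothetical placeholders, chosen only to exhibit the structure of a TM/A computation (in general, alpha may be non-recursive, which is precisely the source of the extra power):

# A toy sketch of a Turing machine with advice: on input u of length n,
# the machine first copies the advice word alpha(n), then computes as a
# classical machine over the pair (u, advice). Both the advice function
# and the decision rule below are illustrative placeholders.

def alpha(n: int) -> str:
    """Hypothetical advice function alpha: N -> {0,1}+."""
    return bin(n)[2:]

def tm_with_advice(u: str) -> bool:
    advice = alpha(len(u))            # step 1: copy alpha(|u|) on the advice tape
    # step 2: ordinary Turing computation over (u, advice); here, accept
    # iff the number of 1s in u equals the integer encoded by the advice word
    return u.count("1") == int(advice, 2)

print(tm_with_advice("110"))          # alpha(3) = "11" encodes 3; u has two 1s -> False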
But it has nowadays been argued that the classical Turing computational approach "no longer fully corresponds to the current notion of computing in modern systems", especially when it refers to bio-inspired complex information processing systems (Van Leeuwen and Wiedermann, 2001; Van Leeuwen and Wiedermann, 2008). Indeed, in organic life, information is rather processed in an interactive way,
where previous experience must affect the perception
of future inputs, and where older memories them-
selves may change with response to new inputs. Ac-
cording to these considerations, the alternative frame-
work of interactive computation appears to be par-
ticularly relevant for the consideration of natural or
biological computational phenomena (Goldin et al.,
2006).
The general interactive computational paradigm
consists of a step by step exchange of information
between a system and its environment. In order
to capture the unpredictability of next inputs at any
time step, the dynamically generated input streams
need to be modeled by potentially infinite sequences
of symbols (the case of finite sequences of symbols
would necessarily reduce to the classical computa-
tional framework) (Wegner, 1998; Van Leeuwen and
Wiedermann, 2008). In most basic scenarios, the en-
vironment sends a non-empty input bit to the system
at every time step (full environment activity condi-
tion), the system next updates its current state ac-
cordingly, and then either produces a corresponding
output bit, or remains silent for a while to express
the need of some internal computational phase before
outputting a new bit, or remains silent forever to ex-
press the fact that it has died.
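A minimal Python sketch of this basic scenario, with silence encoded as None (the state-update rule is an arbitrary placeholder):

# Sketch of the basic interactive scenario: at every time step the
# environment sends one input bit (full environment activity condition),
# and the system answers with an output bit or stays silent (None).

import itertools
from typing import Iterator, Optional

def interactive_system(inputs: Iterator[int]) -> Iterator[Optional[int]]:
    state = 0
    for bit in inputs:                       # one exchange per time step
        state = (state + bit) % 2            # toy update: parity of 1s seen so far
        yield state if bit == 1 else None    # silent on input 0 ("still computing")

stream = itertools.cycle([1, 0, 1])          # stands for a potentially infinite input stream
out = interactive_system(stream)
print([next(out) for _ in range(6)])         # [1, None, 0, 1, None, 0]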
In this context, an interactive Turing machine (I-
TM) consists of a classical Turing machine, yet pro-
vided with input and output ports rather than tapes in
order to process the interactive sequential exchange of
information between the device and its environment
(Van Leeuwen and Wiedermann, 2001).
Moreover, an interactive Turing machine with advice (I-TM/A) M consists of an interactive Turing machine provided with an advice mechanism, which takes the form of an advice function $\alpha : \mathbb{N} \to \{0,1\}^+$ (Van Leeuwen and Wiedermann, 2001). The machine
M uses two auxiliary special tapes, an advice input
tape and an advice output tape, as well as a desig-
nated advice state. During its computation, M can
write the binary representation of an integer m on its
advice input tape, one bit at a time. Yet at time step
n, the number m is not allowed to exceed n. Then, at
any chosen time, the machine can enter its designated
advice state and then have the finite string α(m) be
written on the advice output tape in one time step, re-
placing the previous content of the tape. The machine
can repeat this extra-recursive calling process as many times as it wants during its infinite computation. Interactive Turing machines with advice were proved to
be strictly more powerful than interactive Turing ma-
chines (Van Leeuwen and Wiedermann, 2001).
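The following Python sketch mimics this advice-calling protocol; the advice function and machine internals are placeholders, and only the constraint that the queried integer m may not exceed the time step n is faithfully enforced:

# Sketch of the I-TM/A advice mechanism: the machine writes an integer m
# bit by bit (m may not exceed the current time step n), and may then
# enter its advice state, which overwrites the advice output tape with alpha(m).

def alpha(m: int) -> str:
    """Hypothetical advice function alpha: N -> {0,1}+ (possibly non-recursive)."""
    return format(m, "b")

class InteractiveTMWithAdvice:
    def __init__(self) -> None:
        self.time = 0                  # number of elapsed time steps
        self.m = 0                     # integer on the advice input tape
        self.advice_output = ""        # content of the advice output tape

    def write_advice_bit(self, bit: int) -> None:
        self.time += 1
        if self.m * 2 + bit > self.time:
            raise ValueError("queried integer may not exceed the time step")
        self.m = self.m * 2 + bit

    def call_advice(self) -> None:     # enter the designated advice state
        self.time += 1
        self.advice_output = alpha(self.m)   # replaces the previous tape content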
3 RECURRENT NEURAL
NETWORKS
Throughout this paper, a recurrent neural network
(RNN) consists of a synchronous network of neurons
(or processors) related together in a general architecture, not necessarily loop-free or symmetric. The network contains a finite number of neurons $(x_j)_{j=1}^{N}$,
as well as M parallel input lines carrying the input
stream transmitted by the environment, and P desig-
nated output neurons among the N whose role is to
communicate the output of the network to the envi-
ronment. At each time step, the activation value of
every neuron is updated by applying a linear-sigmoid
function to some weighted affine combination of the values of other neurons or inputs at the previous time step.
Formally, given the activation values of the internal and input neurons $(x_j)_{j=1}^{N}$ and $(u_j)_{j=1}^{M}$ at time t, the activation value of each neuron $x_i$ at time t + 1 is updated by the following equation:

$$x_i(t+1) = \sigma\Big( \sum_{j=1}^{N} a_{ij} \cdot x_j(t) + \sum_{j=1}^{M} b_{ij} \cdot u_j(t) + c_i \Big)$$

for $i = 1, \ldots, N$, where all $a_{ij}$, $b_{ij}$, and $c_i$ are numbers describing the weighted synaptic connections and weighted bias of the network, and $\sigma$ is the classical saturated-linear activation function defined by $\sigma(x) = 0$ if $x < 0$, $\sigma(x) = x$ if $0 \leq x \leq 1$, and $\sigma(x) = 1$ if $x > 1$.
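This update rule translates directly into a few lines of numpy; the following sketch uses arbitrary weight values purely for illustration:

import numpy as np

# Direct transcription of the update equation above, with sigma the
# saturated-linear activation; the weight values are arbitrary.

def sigma(x):
    return np.clip(x, 0.0, 1.0)        # 0 if x < 0, x if 0 <= x <= 1, 1 if x > 1

def rnn_step(x, u, A, B, c):
    """One synchronous update of all N neurons from time t to t+1."""
    return sigma(A @ x + B @ u + c)

N, M = 4, 2                            # N neurons, M input lines
rng = np.random.default_rng(0)
A, B, c = rng.normal(size=(N, N)), rng.normal(size=(N, M)), rng.normal(size=N)
x = np.zeros(N)                        # initial activation values
for t in range(5):                     # drive the network with a constant input
    x = rnn_step(x, np.array([1.0, 0.0]), A, B, c)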
In order to allow a mathematical comparison with the languages computed by abstract models of computation (e.g., Turing machines and Turing machines with advice in our case), the study of the computational power of RNNs involves the consideration of
and decision of formal languages. For this purpose,
(Siegelmann and Sontag, 1995) considered a notion
of formal RNN which adheres to a rigid encoding of
the way binary strings are processed as input and out-
put between the network and the environment.
The nature of the synaptic weights under consid-
eration has been proved to play a fundamental role in
the computational power of neural networks. Hence,
a recurrent neural network will be called rational (de-
noted by RNN[Q]) if all its synaptic weights are ra-
tional numbers. It will be called real or analog (de-
noted by RNN[R]) if all its synaptic weights are
real numbers. Since rational numbers are real, note
that any rational network is a particular analog net-
work by definition.
Besides this classical neural model, (Cabessa and
Siegelmann, 2012) also introduced the concept of an
evolving recurrent neural network (Ev-RNN) as a re-
current neural network equipped with time-dependent
rather than static synaptic weights. Their dynamics
are therefore governed by equations of the form
$$x_i(t+1) = \sigma\Big( \sum_{j=1}^{N} a_{ij}(t) \cdot x_j(t) + \sum_{j=1}^{M} b_{ij}(t) \cdot u_j(t) + c_i(t) \Big)$$

for $i = 1, \ldots, N$, where all $a_{ij}(t)$, $b_{ij}(t)$, and $c_i(t)$ are bounded and time-dependent synaptic weights, and $\sigma$ is the classical saturated-linear activation function.
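Continuing the numpy sketch given above for the static case, the evolving variant only replaces the constant weight matrices by a bounded, time-indexed schedule (here an arbitrary placeholder; as discussed in Section 6, super-Turing power requires the schedule itself to be non-recursive):

# Evolving variant of the previous sketch: the weights become bounded
# functions of time. The weight schedule below is an arbitrary placeholder.

def ev_rnn_step(x, u, A_t, B_t, c_t):
    return sigma(A_t @ x + B_t @ u + c_t)

x = np.zeros(N)
for t in range(5):
    A_t = np.tanh(A + 0.1 * t)         # bounded, time-dependent synaptic weights
    x = ev_rnn_step(x, np.array([1.0, 0.0]), A_t, B, c)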
Recently, (Cabessa and Siegelmann, 2012) proposed to consider all these kinds of neural networks in the more biologically oriented framework of interactive computation. They introduced the
concepts of an interactive recurrent neural network
(I-RNN) and an interactive evolving recurrent neu-
ral network (I-Ev-RNN) as a recurrent neural network
equipped with only one binary input cell and one bi-
nary output cell in order to perform the interactive ex-
change of information between the network and its
environment.
Therefore, all these definitions lead to the consideration of eight basic models of recurrent neural networks: RNN[Q]s, RNN[R]s, Ev-RNN[Q]s,
Ev-RNN[R]s, and their interactive counterparts,
namely I-RNN[Q]s, I-RNN[R]s, I-Ev-RNN[Q]s, I-
Ev-RNN[R]s. The following sections show that ana-
log and evolving networks provide natural models of
computation beyond the Turing limits, both in the
classical as well as in the interactive computational
frameworks.
4 RATIONAL RNNS
A first significant breakthrough concerning the computational power of recurrent neural networks was made by (Siegelmann and Sontag, 1995), who chose to focus their attention on more realistic activation functions for the neurons. They showed that
extending the activation functions of the cells from
boolean to linear-sigmoid drastically increases the
computational power of the networks from finite state
automata up to Turing capabilities. In other words,
they proved that rational recurrent neural networks
(as presented in Section 3) are computationally equiv-
alent to Turing machines.
Theorem 1. RNN[Q]s are computationally equiva-
lent to TMs. More precisely, a language L is decid-
able by some RNN[Q] if and only if L is decidable by
some TM (i.e., iff L is recursive).
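The simulation underlying Theorem 1 rests on encoding a binary stack (a half of the machine's tape) into a single rational activation value, so that stack operations become affine maps computable by single σ-neurons. The following Python sketch is in the spirit of the base-4 encoding of (Siegelmann and Sontag, 1995); the details are simplified:

from fractions import Fraction

# Sketch of the stack encoding behind Theorem 1: a binary stack
# w1 w2 ... wk is stored as the rational q = sum_i (2*w_i + 1) / 4^i,
# so push, top, and pop are affine operations realizable by sigma-neurons.

def push(q: Fraction, bit: int) -> Fraction:
    return q / 4 + Fraction(2 * bit + 1, 4)

def top(q: Fraction) -> int:
    return 1 if 4 * q - 2 >= 1 else 0     # i.e., sigma(4q - 2) saturates to the top bit

def pop(q: Fraction) -> Fraction:
    return 4 * q - (2 * top(q) + 1)

q = Fraction(0)
for b in [1, 0, 1]:                        # push 1, then 0, then 1
    q = push(q, b)
print(top(q), top(pop(q)), top(pop(pop(q))))   # 1 0 1 (last pushed bit on top)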
(Siegelmann and Sontag, 1995) pointed out sev-
eral interesting consequences of their result. For in-
stance, the problem of determining if a given neuron
ever assumes the value “0” is effectively undecidable,
since the halting problem can be reduced to it. The
problem of determining whether a dynamical system
of the particular form x(t + 1) = σ(A · x(t) + c) ever
reaches an equilibrium point from a given initial state
is also effectively undecidable, for it reduces to the
halting problem as well. Besides, this result provides
a direct proof that higher-order neural networks are
computationally equivalent, up to polynomial time, to
first-order neural networks (higher-order neural net-
works can be simulated by Turing machines which in
turn can be simulated by first-order neural nets).
(Kilian and Siegelmann, 1996) further generalized
the Turing universality of rational neural networks to
a broader class of sigmoidal activation functions. The
computational equivalence between so-called rational
recurrent neural networks and Turing machines has
now become a standard result in the field.
Recently, (Cabessa and Siegelmann, 2012) pro-
vided a direct generalization of this result to the more
IJCCI2012-InternationalJointConferenceonComputationalIntelligence
596
biologically oriented framework of interactive com-
putation. They introduced the concept of an interac-
tive recurrent neural network (I-RNN) (as presented
in Section 3), and showed that interactive rational re-
current neural networks are computationally equiva-
lent to interactive Turing machines. They also pro-
vided a precise mathematical characterization of the
translations of bit streams performed by these inter-
active models of computation.
Theorem 2. I-RNN[Q]s are computationally equiva-
lent to I-TMs. More precisely, an ω-translation ϕ is
realizable by some I-RNN[Q] if and only if ϕ is realiz-
able by some I-TM (i.e., iff ϕ is recursive continuous).
5 ANALOG RNNS
(Siegelmann and Sontag, 1994) achieved another im-
portant breakthrough by proposing an approach to
the computational power of recurrent neural networks
from the perspective of analog computation. Follow-
ing von Neumann's considerations, they assumed that
the variables appearing in the underlying chemical
and physical phenomena could be modeled by contin-
uous rather than discrete numbers. They introduced
the concept of an analog recurrent neural network as
a classical linear-sigmoid neural net equipped with
real- instead of rational-weighted synaptic connec-
tions (as presented in Section 3). They further showed
that analog recurrent neural networks are strictly more
powerful than their rational counterparts, hence ca-
pable of super-Turing computational capabilities. In
fact, the analog networks can achieve unbounded
power in exponential time of computation (i.e., are
capable of deciding all binary languages), and when
restricted to polynomial time of computation, the net-
works turn out to be computationally equivalent to
Turing machines with polynomial-bounded advice,
thus deciding the complexity class P/poly (where P and P/poly denote the classes of languages decided in polynomial time by TMs and TM/poly(A)s, respectively).
Since
P/poly strictly includes the class P and even contains
non-recursive languages, the analog networks are
capable of super-Turing computational power from
polynomial time of computation already.
Theorem 3. RNN[R]s are super-Turing. More pre-
cisely, a language L is decidable in polynomial time
by some RNN[R] if and only if L is decidable in poly-
nomial time by some TM/poly(A) (i.e., iff L P/poly);
furthermore, any language L can be decided in expo-
nential time by some RNN[R].
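The intuition behind Theorem 3 is that a single real weight stores infinitely many bits, which the network can progressively recover and exploit like a built-in advice. A toy Python illustration of this intuition (not the actual construction, which extracts the bits only approximately via σ-neurons):

# Toy illustration: the binary expansion of one real weight r in (0,1)
# stores an unbounded advice sequence, recoverable bit by bit.

def extract_bits(r: float, n: int):
    bits = []
    for _ in range(n):
        r *= 2
        bit = int(r >= 1.0)           # leading bit of the binary expansion
        bits.append(bit)
        r -= bit
    return bits

print(extract_bits(0.421875, 8))      # 0.421875 = 0.011011 in base 2 -> [0, 1, 1, 0, 1, 1, 0, 0]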
This analog information processing model turns
out to be capable of capturing the non-linear dynam-
ical properties that are most relevant to brain dy-
namics, such as rich chaotic behaviors (Kaneko and
Tsuda, 2003; Tsuda, 1991; Tsuda, 2001). More-
over, many dynamical and idealized chaotic systems
that cannot be described by the universal Turing ma-
chine model are also well captured within this analog
framework (Siegelmann, 1995). These considerations
led Siegelmann and Sontag to propose the concept of analog recurrent neural network as a standard in the field of analog computation, similar to that of a universal Turing machine in digital computation. They formulated the so-called Thesis of Analog Computation (an analogue of the Church-Turing thesis in the realm of analog computation), stating that no reasonable abstract analog device can be more powerful than first-order analog recurrent neural networks (Siegelmann and Sontag, 1994; Siegelmann, 1995). These results might support the opinion that some intrinsic dynamical and computational capabilities of neurobiological systems lie beyond the scope of standard artificial models of computation.
(Cabessa and Siegelmann, 2012) provided a gen-
eralization of this result to the context of interactive
computation. They proved that interactive analog re-
current neural networks (as presented in Section 3)
are computationally equivalent to interactive Turing
machines with advice, and also provided a precise
mathematical characterization of the translations of
bit streams performed by these interactive models of
computation. Hence, in the interactive just as in the
classical framework, analog neural networks turn out
to reveal super-Turing computational capabilities.
Theorem 4. I-RNN[R]s are super-Turing. More pre-
cisely, I-RNN[R]s are computationally equivalent to
I-TM/As, and hence realize uncountably many more
ω-translations than I-TMs.
(Cabessa and Villa, 2012) proposed another generalization of this result in a different interactive-like computational framework. They introduced the concept of an ω-analog recurrent neural network (ω-RNN[R]) as an interactive recurrent neural network with real synaptic weights (as presented in Section 3), yet performing language recognition over the space of infinite streams of bits rather than ω-translations
of infinite streams of bits. More precisely, the net-
work receives an infinite input stream of bits from
its environment and produces a corresponding out-
put stream of bits. The input stream is then said to
be accepted by the network if the corresponding out-
put remains forever active, i.e. never shuts down to 0
from some time step onwards. The language recog-
nized by the network is then defined as the set of in-
put streams that are accepted by the network. In this
RecurrentNeuralNetworks-ANaturalModelofComputationbeyondtheTuringLimits
597
context, Cabessa and Villa provided a precise characterization of the expressive power of analog neural networks, and showed that analog recurrent neural networks turn out to be strictly more expressive than deterministic and non-deterministic Turing machines equipped with Büchi or Muller accepting conditions.
Theorem 5. Deterministic ω-RNN[R]s are strictly more expressive than deterministic Büchi TMs. Non-deterministic ω-RNN[R]s are strictly more expressive than non-deterministic Büchi or Muller TMs.
6 EVOLVING RNNS
The brain computes, but it does so differently than
today’s computers. Neural memories are updated
when being retrieved in a process called reconsoli-
dation which causes adaptation to changing condi-
tions; the geometric architecture itself changes con-
tinuously as well, with synapses updating their con-
nectivity patterns all the time; current levels of hor-
monal and chemical concentrations change constantly
and affect the computation performed by the neural
architecture. But until recently, these crucial biolog-
ical considerations have generally been neglected in
the classical literature concerning the computational
capabilities of brain-like models. Hence, the follow-
ing questions naturally arise: Can we approach the
issue of the brain’s capabilities from a non-static per-
spective? Can we understand and characterize the
computational capabilities of an ever-changing neural
model?
According to these considerations, (Cabessa and
Siegelmann, 2011) considered a new approach to the
computational power of neural networks from the
perspective of evolving systems. They introduced
a more biologically-oriented model of evolving re-
current neural networks (as presented in Section 3)
where the synaptic weights can evolve rather than stay
static. They further proved that both models of evolv-
ing rational neural networks and evolving real (or
analog) neural networks are actually computationally
equivalent to static analog networks, thus capable of
super-Turing computational capabilities from polyno-
mial time of computation already.
Theorem 6. Ev-RNN[Q]s and Ev-RNN[R]s are super-Turing. Both models are computationally equivalent to RNN[R]s.
Theorems 1, 3, and 6 show that when stepping
from the static to the evolving context, the compu-
tational power of rational neural networks turns out to
be drastically increased from the Turing to the super-
Turing level, whereas the computational capabilities
of analog neural networks actually remain at the same
super-Turing level, equivalent to that of static analog
neural networks. These results support once again the
Thesis of Analog Computation stating that no reason-
able abstract analog device can be more powerful than
first-order analog recurrent neural networks (Siegel-
mann and Sontag, 1994; Siegelmann, 1995).
Moreover, Theorem 6 shows that the consideration of architectural evolving capabilities in a basic first-order rate neural model provides an alternative and equivalent way to the power of the continuum towards the achievement of super-Turing computational capabilities. This feature is particularly interesting since it makes it possible to replace the controversial "analog assumption" by natural "evolving considerations" towards the achievement of super-Turing computational capabilities of neural networks. These results also emphasize the role that the mechanisms of evolution and plasticity might indeed play in the computational capabilities of neural networks.
It is worth noting that the super-Turing capabilities of evolving neural networks can only be achieved in cases where the evolving synaptic patterns are themselves non-recursive (i.e., non-Turing-computable), since any kind of recursive evolution would necessarily restrain the corresponding networks to no more than Turing capabilities. Hence, according to this model, the existence of super-Turing potentialities of evolving neural networks depends on the possibility for "nature" to realize non-recursive patterns of synaptic evolution.
Besides, (Cabessa, 2012) generalized once again
these results to the context of interactive computa-
tion. He proved that both models of rational- and
real-weighted interactive evolving neural networks
(as presented in Section 3) are computationally equiv-
alent to interactive Turing machines with advice, and
hence capable of super-Turing capabilities.
Theorem 7. I-Ev-RNN[Q]s and I-Ev-RNN[R]s are super-Turing. Both models are computationally equivalent to I-RNN[R]s, hence to I-TM/As, and thus realize uncountably many more ω-translations than I-TMs.
7 CONCLUSIONS
Theorems 3, 4, 5, 6, and 7 show that recurrent neu-
ral networks provide a natural model of computation
beyond the Turing limits. The specific super-Turing model of a Turing machine with advice seems particularly well suited to capturing the computational capabilities of basic brain-like models. Such a model makes it possible to capture analog and/or evolving considerations that cannot be apprehended by the classical Turing machine model yet play a crucial role in many aspects of neural computation.
According to (Siegelmann, 2003), such a result "embeds a possible answer to the superiority of biological intelligence within the framework of classical computer science". We prefer to remain cautious on this issue, and do not intend to argue in favor of an ontological existence of super-Turing capabilities of biological neural networks in nature, but rather in favor of the relevance of considering super-Turing neural models in order to describe neurobiological features that fail to be captured by Turing-equivalent classical models of computation. In this sense, we believe that the consideration of super-Turing brain-like computational models presents some interest beyond the controversial philosophical considerations about hypercomputation (Copeland, 2004).
Finally, we expect that such theoretical stud-
ies about the computational power of neural models
might lead to a better understanding of the basic prin-
ciples that underlie the processing of information in
the brain. In this context, we believe that the compar-
ative approach between the computational powers of
bio-inspired and artificial abstract models of compu-
tation shall ultimately provide a better understanding
of the intrinsic natures of both biological and artificial
intelligences. We further believe that the foundational
approach to alternative models of computation might
in the long term not only lead to relevant theoretical
considerations, but also to important practical impli-
cations. Similarly to Turing's theoretical work, which played a crucial role in the practical realization
of modern computers, further foundational consider-
ations of alternative models of computation will cer-
tainly contribute to the emergence of novel computa-
tional technologies and computers, and step by step,
open the way to the next computational era.
REFERENCES
Cabessa, J. (2012). Interactive evolving recurrent neural
networks are super-Turing. In Filipe, J. and Fred, A.,
editors, ICAART 2012: Proceedings of the 4th Inter-
national Conference on Agents and Artificial Intelli-
gence 2012, volume 1, pages 328–333. SciTePress.
Cabessa, J. and Siegelmann, H. T. (2011). Evolving re-
current neural networks are super-Turing. In IJCNN
2011: Proceedings of the International Joint Con-
ference on Neural Networks 2011, pages 3200–3206.
IEEE.
Cabessa, J. and Siegelmann, H. T. (2012). The computa-
tional power of interactive recurrent neural networks.
Neural Comput., 24(4):996–1019.
Cabessa, J. and Villa, A. E. (2012). The expressive power
of analog recurrent neural networks on infinite input
streams. Theor. Comput. Sci., 436:23–34.
Copeland, B. J. (2004). Hypercomputation: philosophical
issues. Theor. Comput. Sci., 317(1–3):251–267.
Goldin, D., Smolka, S. A., and Wegner, P. (2006). Inter-
active Computation: The New Paradigm. Springer-
Verlag, Secaucus, NJ, USA.
Kaneko, K. and Tsuda, I. (2003). Chaotic itinerancy. Chaos,
13(3):926–936.
Kilian, J. and Siegelmann, H. T. (1996). The dynamic uni-
versality of sigmoidal neural networks. Inf. Comput.,
128(1):48–56.
Kleene, S. C. (1956). Representation of events in nerve nets
and finite automata. In Automata Studies, volume 34
of Annals of Mathematics Studies, pages 3–42. Prince-
ton University Press, Princeton, N. J.
McCulloch, W. S. and Pitts, W. (1943). A logical calculus
of the ideas immanent in nervous activity. Bulletin of
Mathematical Biophysics, 5:115–133.
Minsky, M. and Papert, S. (1969). Perceptrons: An Intro-
duction to Computational Geometry. MIT Press.
Minsky, M. L. (1967). Computation: finite and infinite ma-
chines. Prentice-Hall, Inc.
Rosenblatt, F. (1957). The perceptron: A perceiving and
recognizing automaton. Technical Report 85-460-1,
Cornell Aeronautical Laboratory, Ithaca, New York.
Siegelmann, H. T. (1995). Computation beyond the Turing
limit. Science, 268(5210):545–548.
Siegelmann, H. T. (2003). Neural and super-Turing com-
puting. Minds Mach., 13(1):103–114.
Siegelmann, H. T. and Sontag, E. D. (1994). Analog com-
putation via neural networks. Theor. Comput. Sci.,
131(2):331–360.
Siegelmann, H. T. and Sontag, E. D. (1995). On the com-
putational power of neural nets. J. Comput. Syst. Sci.,
50(1):132–150.
Tsuda, I. (1991). Chaotic itinerancy as a dynamical basis
of hermeneutics of brain and mind. World Futures,
32:167–185.
Tsuda, I. (2001). Toward an interpretation of dynamic neu-
ral activity in terms of chaotic dynamical systems. Be-
hav. Brain Sci., 24(5):793–847.
Turing, A. M. (1936). On computable numbers, with an ap-
plication to the Entscheidungsproblem. Proc. London
Math. Soc., 2(42):230–265.
Van Leeuwen, J. and Wiedermann, J. (2001). Beyond the
Turing limit: Evolving interactive systems. In Pa-
cholski, L. and Ružička, P., editors, SOFSEM 2001:
Theory and Practice of Informatics, volume 2234 of
LNCS, pages 90–109. Springer-Verlag.
Van Leeuwen, J. and Wiedermann, J. (2008). How we think
of computing today. In Beckmann, A., Dimitracopou-
los, C., and Löwe, B., editors, Logic and Theory of
Algorithms, volume 5028 of LNCS, pages 579–593.
Springer-Verlag.
Von Neumann, J. (1958). The computer and the brain. Yale
University Press, New Haven, CT, USA.
Wegner, P. (1998). Interactive foundations of computing.
Theor. Comput. Sci., 192:315–351.
RecurrentNeuralNetworks-ANaturalModelofComputationbeyondtheTuringLimits
599