result of looking at the dynamical system of the
liquid and noting that it is sufficient to cause the
divergence of the two classes in the space of
activation.
Note that the detector systems (e.g. a back-propagation neural network, a perceptron or an SVM) are not required to have any biological plausibility, either in their design or in their training mechanism, since the model does not try to account for the way the information is used in nature.
Despite this, since natural neurons exist in a biological and hence noisy environment, these models must be robust to various kinds of noise in order to be successful in this domain. As mentioned above, Maass (Maass, Natschläger, & Markram, 2002) addressed one dimension of this problem by showing that the systems are in fact robust to noise in the input. Thus small random shifts in a temporal input pattern will not prevent the LSM from recognizing the pattern. From a machine learning perspective, this means that the model is capable of generalization.
However, there is another component to robustness: robustness to damage in the components of the system itself.
In this paper we report on experiments performed with various kinds of "damage" to the LSM. Unfortunately, they show that the LSM with any of the above detectors is not resistant, in the sense that even small damage to the LSM neurons degrades the trained classifiers dramatically, down to essentially random performance.
Seeking to correct this problem, we experimented with different architectures of the liquid. The essential requirement of the LSM is that there be sufficient recurrent connections so that, on the one hand, the network maintains the information in a signal while, on the other hand, it separates different signals. The topologies typically used are random connections, or random connections with a bias towards "nearby" neurons. Our experiments with these topologies show that the network is very sensitive to damage because the recurrent nature of the system causes substantial feedback.
Taking this as a clue, we tried networks with a "hub" or "small world" (Bianconi & Barabási, 2001; Barabási & Albert, 2000) architecture. This architecture has been claimed (Bassett & Bullmore, 2006; Chklovskii, 2009) to be "biologically feasible".
The intuition was that the hub topology, on the one hand, integrates information from many locations and so is resilient to damage in some of them; on the other hand, since such hubs follow a power-law degree distribution, they are rare enough that damage usually does not affect them directly. This intuition was in fact borne out by our experiments.
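To make the topology concrete, the following is a minimal sketch (in Python; it is not the code used in our experiments, and the parameter names are hypothetical) of growing such a hub network by preferential attachment, in the spirit of Barabási & Albert (2000): each new neuron links to existing neurons with probability proportional to their current degree, so a few highly connected hubs emerge while most neurons remain sparsely connected.

```python
# Minimal sketch (not the authors' code): growing a recurrent "hub"
# topology by preferential attachment, in the spirit of
# Barabasi & Albert (2000). All parameter names are hypothetical.
import numpy as np

def hub_topology(n_neurons, m_links=3, seed=0):
    """Each new neuron attaches to m_links existing neurons with
    probability proportional to their current degree, so a few
    high-degree hubs emerge (a power-law degree distribution)."""
    rng = np.random.default_rng(seed)
    adj = np.zeros((n_neurons, n_neurons), dtype=bool)
    degree = np.zeros(n_neurons)
    # Start from a small fully connected seed network.
    for i in range(m_links + 1):
        for j in range(m_links + 1):
            if i != j:
                adj[i, j] = True
    degree[: m_links + 1] = m_links
    for new in range(m_links + 1, n_neurons):
        probs = degree[:new] / degree[:new].sum()
        targets = rng.choice(new, size=m_links, replace=False, p=probs)
        for t in targets:
            adj[new, t] = adj[t, new] = True  # recurrent (two-way) link
            degree[new] += 1
            degree[t] += 1
    return adj
```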
2 LSMS ARE NOT ROBUST
2.1 The Experiments
To test this resistance to noise, we downloaded the code of Maass et al. from their laboratory site¹ and then implemented two kinds of damage to the liquid.
We also reimplemented the LSM code so that we could handle variants. These models use a basic neuron of the "leaky integrate-and-fire" variety, and in Maass' work the neurons are connected randomly. In addition, some biologically inspired parameters are added: 20% of the neurons are inhibitory, and a connectivity constraint gives a preference to geometrically nearby neurons over more remote ones. (For precise details on these parameters, see the neural Circuit SIMulator¹.) External stimuli to the network were always sent to 30% of the neurons, always chosen to be excitatory neurons.
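As an illustration of this setup, here is a minimal sketch (not the CSIM code; the constants C and lam are placeholders rather than the published values) of wiring such a liquid: connection probability decays with distance, roughly following the rule p(a→b) ∝ C · exp(−(d(a,b)/λ)²) of Maass et al. (2002), 20% of the neurons are marked inhibitory, and the input neurons are drawn from the excitatory population.

```python
# Illustrative sketch (not the CSIM code): a random liquid with 20%
# inhibitory neurons and a preference for geometrically nearby
# connections. C and lam are placeholder values, not the published ones.
import numpy as np

def build_liquid(shape=(5, 5, 5), C=0.3, lam=2.0, seed=0):
    rng = np.random.default_rng(seed)
    coords = np.array([(x, y, z) for x in range(shape[0])
                       for y in range(shape[1]) for z in range(shape[2])])
    n = len(coords)
    inhibitory = rng.random(n) < 0.2            # ~20% inhibitory neurons
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    p_connect = C * np.exp(-(d / lam) ** 2)     # nearby neurons preferred
    np.fill_diagonal(p_connect, 0.0)            # no self-connections
    adj = rng.random((n, n)) < p_connect
    # External stimuli go to 30% of the neurons, all excitatory.
    excitatory = np.flatnonzero(~inhibitory)
    input_neurons = rng.choice(excitatory, size=int(0.3 * n), replace=False)
    return adj, inhibitory, input_neurons
```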
Initially, we experimented with two parameters:
- the percentage of neurons damaged;
- the kind of damage.
The kinds were either transforming a neuron into a "dead" neuron, i.e. one that never fires, or transforming a neuron into a "generator" neuron, i.e. one that fires as often as its refractory period allows, regardless of its input.
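The following sketch (hypothetical helper functions, not the downloaded code) shows how these two kinds of damage can be imposed on a simulated liquid: damaged neurons are either silenced entirely or forced to fire whenever their refractory period permits.

```python
# Illustrative sketch (hypothetical helpers, not the downloaded code):
# applying the two kinds of damage to a fraction of the liquid neurons.
import numpy as np

def apply_damage(n_neurons, fraction, kind, seed=0):
    """Label each neuron 'ok', 'dead', or 'generator'."""
    rng = np.random.default_rng(seed)
    labels = np.array(['ok'] * n_neurons, dtype=object)
    damaged = rng.choice(n_neurons, size=int(fraction * n_neurons),
                         replace=False)
    labels[damaged] = kind
    return labels

def fires(label, v, threshold, refractory_ok):
    """Firing rule for one neuron, given its damage label."""
    if label == 'dead':
        return False                 # never fires
    if label == 'generator':
        return refractory_ok         # fires as often as refractoriness allows
    return refractory_ok and v >= threshold  # normal LIF threshold crossing
```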
2.2 Results
First, there was not much difference between the detectors (i.e. Back-Propagation, SVM and Tempotron (Gütig & Sompolinsky, 2006)), so we eventually restricted ourselves to the Back-Propagation detector, which received as input the state of the entire liquid at 30 randomly sampled time points. (To be fair, none of the liquid units accessed by the detectors were allowed to be input neurons of the liquid.)
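For concreteness, a minimal sketch of such a readout is given below (assuming a binary classification task and a single hidden layer; the hyper-parameters shown are not those used in our experiments). It flattens the liquid state at 30 randomly sampled time points into one feature vector per trial and trains a small network by back-propagation.

```python
# Minimal sketch of a back-propagation readout (assumed binary task;
# architecture and hyper-parameters are illustrative placeholders).
import numpy as np

def train_detector(states, labels, n_samples=30, hidden=20,
                   lr=0.1, epochs=200, seed=0):
    """states: (n_trials, n_timepoints, n_neurons) liquid activity;
    labels: (n_trials,) array of 0/1 class labels."""
    rng = np.random.default_rng(seed)
    t_idx = rng.choice(states.shape[1], size=n_samples, replace=False)
    X = states[:, t_idx, :].reshape(len(states), -1)  # sampled states
    y = labels.astype(float)
    W1 = rng.normal(0, 0.1, (X.shape[1], hidden))
    W2 = rng.normal(0, 0.1, hidden)
    for _ in range(epochs):
        h = np.tanh(X @ W1)                     # hidden layer
        p = 1 / (1 + np.exp(-(h @ W2)))         # output probability
        err = p - y                             # cross-entropy gradient
        W2 -= lr * (h.T @ err) / len(y)
        W1 -= lr * (X.T @ (np.outer(err, W2) * (1 - h**2))) / len(y)
    return W1, W2, t_idx
```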
It turned out that while the detector was able to learn the randomly chosen test classes successfully when there was sufficient average connectivity, almost any kind of damage caused the detector to have a very
¹ A neural Circuit SIMulator: http://www.lsm.tugraz.at/csim/.