time, while “off” spikes are not and produce only
noise. In the case of STDP learning, under a certain
range of parameters, the strengths of the synapses
associated with the pattern grow, while the strengths
of other synapses which receive only noise decay. In
other words, the individual neuron acts as a coincidence detector (Abbott and Nelson, 2000). In
the simplest case possible, when the pattern is static
and background noise is absent, such training can be
reduced to supervised learning as a simple assignment operation: set the synaptic strength to 1 if the input belongs to the pattern, and to 0 otherwise.
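This degenerate noise-free case can be sketched in a few lines; the pattern indices and input count below are illustrative, not taken from the paper's parameter set:

```python
# Noise-free, static-pattern case: STDP training collapses to an assignment.
# `pattern` holds the indices of inputs that participate in the pattern
# (illustrative values; any subset of inputs would do).
n_inputs = 8
pattern = {1, 3, 4}

# Strength 1 for synapses inside the pattern, 0 for noise-only synapses.
weights = [1.0 if i in pattern else 0.0 for i in range(n_inputs)]
# weights -> 1.0 at indices 1, 3, 4; 0.0 elsewhere
```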
Figure 1: STDP training rules addressed in this paper. w is the amount of change in synaptic strength; t is the time difference between postsynaptic and presynaptic spikes. a) STDP rule for excitatory-to-excitatory synapses. b) STDP rule for excitatory-to-inhibitory synapses. c) Update is guarded by the nearest-neighbour rule with immediate pairings only (Burkitt et al., 2004: Model IV).
The STDP rule for excitatory-to-excitatory
synapses (Figure 1a) is the most widely researched
one. In this paper we will refer to this rule as STDP
rule A. When using this rule, and organizing
multiple neurons in a competitive network, that is,
connecting neurons with lateral inhibitory synapses,
it is possible to train that network for multiple
distinct spatiotemporal patterns, where each individual neuron becomes selective for only one of the patterns. This has been demonstrated by many
authors (Masquelier et al., 2009; Song et al., 2000;
Guyonneau et al., 2005; Gerstner and Kistler, 2002).
Such a network is capable of learning even if the
pattern is highly obscured by noise (Masquelier et
al., 2008, 2009). STDP learning of spatiotemporal
patterns holds potential for practical pattern
recognition, something explored by other authors
(Gupta and Long, 2007; Nessler et al., 2009; Hu et
al., 2013; Kasabov et al., 2013).
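The paper does not give an analytic form for rule A. A common additive, exponential-window formulation (in the spirit of Song et al., 2000, cited above) can be sketched as follows; the amplitudes and time constant are placeholder values, not the paper's parameters:

```python
import math

def stdp_rule_a(dt, a_plus=0.03, a_minus=0.033, tau=20.0):
    """Weight change for an excitatory-to-excitatory synapse (cf. Figure 1a).

    dt = t_post - t_pre. A positive dt (presynaptic spike precedes the
    postsynaptic spike) potentiates the synapse; a negative dt depresses
    it. Amplitudes and the time constant tau are illustrative.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau)   # LTP branch
    elif dt < 0:
        return -a_minus * math.exp(dt / tau)  # LTD branch
    return 0.0
```

In practice such updates are usually clipped so that synaptic strengths stay within [0, 1], matching the bounded strengths used throughout the paper.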
In this paper we address the problem associated
with levels of noise injected during the training of a
neuron. Values of the neuron threshold, amplitude of
relative refraction and initial synaptic strengths
might be optimal only for a certain range of injected noise levels. These parameters define the initial
spiking rate of the neuron (See Methods and
Parameters for further details). This means the level
of noise must be known beforehand, so the
parameters can be set accordingly. This becomes a real problem if the level of noise changes over time. To
overcome this problem, we introduced inhibitory
neurons which received excitatory input from the
same neurons as the training neuron. We used an
inverted STDP rule for excitatory-to-inhibitory
synapses (Figure 1b). In this paper we refer to this
rule as STDP rule B.
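With t = t_post − t_pre as in Figure 1, rule B can likewise be sketched as a sign-flipped exponential window; the exact window shape is defined by Figure 1b, and the amplitudes and time constant here are placeholders:

```python
import math

def stdp_rule_b(dt, a_plus=0.03, a_minus=0.033, tau=20.0):
    """Inverted STDP for an excitatory-to-inhibitory synapse (cf. Figure 1b).

    The window is the sign-flip of the conventional rule-A window:
    pre-before-post pairings (dt > 0) now depress the synapse, while
    post-before-pre pairings (dt < 0) potentiate it. Parameter values
    are illustrative.
    """
    if dt > 0:
        return -a_minus * math.exp(-dt / tau)  # LTD where rule A gives LTP
    elif dt < 0:
        return a_plus * math.exp(dt / tau)     # LTP where rule A gives LTD
    return 0.0
```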
A similar rule for excitatory-to-inhibitory
synapses has been discovered in a cerebellum-like
structure of an electric fish (Bell et al., 1997) and in
mice (Tzounopoulos et al. 2004, 2007). The rule in
Figure 1b is not precisely the same: in the electric
fish LTD gradually becomes LTP, while in mice no LTP was observed.
We found that the model of an inhibitory neuron with
the inverted STDP learning rule is capable of
adjusting its response rate to a particular level of
noise. In this paper we suggest a method that uses an inverted STDP learning rule to modulate the spiking rate of the trained neuron. This method is adaptive to noise levels; consequently, a spiking neuron can be
trained to learn the same spatiotemporal pattern with
a wide range of background noise injected during
the learning process.
2 SOME PROPERTIES OF THE
INVERTED STDP RULE
2.1 Training for Poisson Noise
We exposed neurons with different threshold
values to Poisson noise. Each trained neuron
received input from 4,096 input neurons which
produced Poisson noise by producing an input spike
with a probability of 0.02 at each discrete step in the
simulation. STDP rules A and B were compared.
Results are presented in Figure 2. See Methods
and Parameters for further details.
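The noise input described above can be sketched directly; the 4,096 input neurons and per-step spike probability of 0.02 are taken from the text, while the step count and seed are illustrative:

```python
import random

N_INPUTS = 4096   # input neurons, as in the experiment
P_SPIKE = 0.02    # spike probability per input per discrete step
N_STEPS = 1000    # simulation length (illustrative)

random.seed(0)    # fixed seed for reproducibility of this sketch

def noise_step():
    """One discrete time step of input noise: each of the 4,096 inputs
    spikes independently with probability 0.02, which approximates a
    Poisson spike train when viewed over many steps."""
    return [random.random() < P_SPIKE for _ in range(N_INPUTS)]

# Expected spikes per step: 4096 * 0.02 = 81.92.
counts = [sum(noise_step()) for _ in range(N_STEPS)]
mean_rate = sum(counts) / N_STEPS
```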
When exposed to Poisson noise only, STDP rule
A, as expected, leads to two possible outcomes:
either the synaptic strengths decay until the neuron is no longer capable of firing, or all synaptic strengths grow
and the neuron is activated by any random spike
from the input.
The behavior of inverted rule B is far more
NCTA 2014 - International Conference on Neural Computation Theory and Applications