expressed in temporal logic. When the property does not hold in the system, the user has access to a counter-example, that is, an execution trace falsifying the property at issue, which often helps in identifying the modifications the model needs in order to satisfy the property. To apply model-checking techniques efficiently, the model at hand should be as small as possible.
Taking advantage of this modeling and verification framework, we introduce a novel algorithm which aims at reducing the number of neurons and synaptic connections of a given neural network. The proposed reduction preserves the desired dynamical behavior of the network, which is formalized by means of temporal logic formulas and verified with the PRISM model checker. More precisely, a neuron is removed if its suppression has a low impact on the probability for a given temporal logic formula to hold (see the sketch below). Besides their utility in lightening models, algorithms for neural network reduction have a direct application in the medical domain: they can help in detecting weakly active (or inactive) zones of the human brain.
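The reduction loop can be pictured as follows. This is only an illustrative sketch, not the exact algorithm of this paper: the helpers check_probability (assumed to run PRISM on the network's DTMC encoding and return the probability of the PCTL formula) and remove_neuron (assumed to return a copy of the network without the given neuron and its synapses) are hypothetical names introduced here.

```python
# Illustrative sketch of the greedy reduction loop described above.
# `check_probability` and `remove_neuron` are assumed helpers, not part of the paper.

def reduce_network(network, formula, tolerance, check_probability, remove_neuron):
    """Remove neurons whose suppression barely changes the probability of `formula`."""
    reference = check_probability(network, formula)  # probability on the original network
    changed = True
    while changed:
        changed = False
        # Only intermediary neurons are candidates; input and output neurons are kept.
        for neuron in list(network.intermediary):
            candidate = remove_neuron(network, neuron)
            # Keep the removal if the probability of the formula is barely affected.
            if abs(check_probability(candidate, formula) - reference) <= tolerance:
                network = candidate
                changed = True
                break
    return network
```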
The issue of reducing biological networks is not
new in systems biology. Emblematic examples can
be found in (Naldi et al., 2011), where the authors
propose a methodology to reduce regulatory networks
preserving some dynamical properties of the original
models, such as stable states, in (Gay et al., 2010),
where the authors study model reductions as graph
matching problems, or in (Paulevé, 2016), whose au-
thor considers finite-state machines and proposes a
technique to remove some transitions while preserv-
ing all the (minimal) traces satisfying a given reach-
ability property. As far as neural networks are con-
cerned, to the best of our knowledge the core of the
existing reduction approaches only deals with second
generation networks. Several methods to train a net-
work that is larger than necessary and then remove
the superfluous parts, known as pruning techniques,
are explained in (Reed, 1993). Finally, in (Menke and Martinez, 2009) the authors introduce an oracle learning methodology, which consists in using a larger model as an oracle to train a smaller, yet acceptable, model. With oracle learning, the smaller model is created initially and trained using the larger model, whereas with pruning, connections are removed from the larger model until the desired size is reached.
The paper is organized as follows. In Section 2
we introduce a probabilistic version of the Leaky In-
tegrate and Fire Model. Section 3 is devoted to the
PRISM modeling language and the temporal logic
PCTL (Probabilistic Computation Tree Logic). In
Section 4 we describe our modeling of neural net-
works as Discrete-Time Markov Chains in PRISM.
2 PROBABILISTIC LEAKY INTEGRATE AND FIRE MODEL
We model neural networks as Boolean Spiking Net-
works, where the electrical properties of neurons
are represented through the Leaky Integrate and Fire
(LI&F) model. In this modeling framework, neural
networks are seen as directed graphs whose nodes
stand for neurons and whose edges stand for synaptic connections. Edges are decorated with weights:
positive (resp. negative) weights represent activations
(resp. inhibitions). The dynamics of each neuron is
characterized through its (membrane) potential value,
which represents the difference of electrical potential
across the cell membrane. At each time unit, the po-
tential value is computed taking into account present
input spikes and the previous decayed potential value.
In order to weaken the past potential value, it is mul-
tiplied by a leak factor. In our probabilistic LI&F
model, the probability for each neuron to emit an ac-
tion potential, or spike, is governed by the difference
between the potential value and a given firing thresh-
old. For positive (resp. negative) values of this dif-
ference, the greater its absolute value, the higher (resp. the lower) the probability of emitting a spike. Af-
ter each spike emission, the neuron potential is reset
to zero. In the literature, other ways exist to incorpo-
rate probabilities in LI&F models, such as the Noisy
Integrate and Fire models (Di Maio et al., 2004; Four-
caud and Brunel, 2002), where noise is added to the
computation of the potential value.
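As a concrete reading of the dynamics just described, the following sketch computes one discrete time step of a probabilistic LI&F neuron. The exact function mapping the difference between the potential value and the threshold to a firing probability is not specified here; the logistic shape below is purely an assumption used for illustration.

```python
import math
import random

def lif_step(potential, inputs, weights, leak, threshold, slope=1.0):
    """One time unit of a probabilistic LI&F neuron (illustrative sketch).

    potential: previous membrane potential
    inputs:    Boolean input spikes at the current time unit
    weights:   synaptic weights (positive = activation, negative = inhibition)
    leak:      leak factor weakening the past potential value
    threshold: firing threshold
    slope:     steepness of the assumed logistic firing probability
    """
    # New potential: present input spikes weighted by the synapses,
    # plus the previous potential value multiplied by the leak factor.
    potential = sum(w for spike, w in zip(inputs, weights) if spike) + leak * potential
    # The firing probability grows with (potential - threshold): the larger the
    # positive (resp. negative) difference, the higher (resp. lower) the probability.
    p_fire = 1.0 / (1.0 + math.exp(-slope * (potential - threshold)))
    fired = random.random() < p_fire
    # The potential is reset to zero after a spike emission.
    return (0.0 if fired else potential), fired
```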
More formally, we give the following definitions
for probabilistic LI&F networks.
Definition 1 (Boolean Probabilistic Spiking Integrate and Fire Neural Network). A Boolean Probabilistic Spiking Integrate and Fire Neural Network is a tuple (V, E, w), where:
• V is a set of Boolean probabilistic spiking integrate and fire neurons,
• E ⊆ V × V is a set of synapses,
• w : E → Q ∩ [−1, 1] is the synapse weight function associating to each synapse (u, v) a weight w_uv.
We distinguish three disjoint sets of neurons: V_i (input neurons), V_int (intermediary neurons), and V_o (output neurons), with V = V_i ∪ V_int ∪ V_o.
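For readers who prefer code to set notation, Definition 1 can be transcribed as a small data structure. The class and method names below are ours and only illustrate one possible encoding of the neuron partition and the rational weight function.

```python
from dataclasses import dataclass, field
from fractions import Fraction
from typing import Dict, Set, Tuple

@dataclass
class SpikingNetwork:
    """Illustrative encoding of Definition 1: neurons partitioned into V_i, V_int, V_o;
    the synapses E are the keys of `weights`, and w : E -> Q ∩ [-1, 1] its values."""
    inputs: Set[str] = field(default_factory=set)        # V_i
    intermediary: Set[str] = field(default_factory=set)  # V_int
    outputs: Set[str] = field(default_factory=set)       # V_o
    weights: Dict[Tuple[str, str], Fraction] = field(default_factory=dict)

    def add_synapse(self, u: str, v: str, w: Fraction) -> None:
        # Weights are rationals restricted to [-1, 1], as required by Definition 1.
        assert Fraction(-1) <= w <= Fraction(1), "weight must lie in Q ∩ [-1, 1]"
        self.weights[(u, v)] = w
```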
Definition 2 (Boolean Probabilistic Spiking Integrate
and Fire Neuron). A Boolean Probabilistic Spiking