proven to be a valuable tool for theoretical analysis
of correlated and chaotic network activity, stability,
and other large-scale network properties. However,
in my opinion, its application to the above-mentioned
problem will be very limited, for the
following reasons:
Even for relatively simple neuron models, the
mean-field equations may take the form of a system
of complex integro-differential equations that
cannot be solved analytically (for example,
when synaptic delays are non-zero and vary
from synapse to synapse). Although their
general solution is not required for some
purposes (e.g., for stability analysis), in most
cases it has to be obtained by numerical methods.
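As a schematic illustration (an assumed generic rate model, not an equation taken from a specific study), a mean-field description of a population firing rate $\nu(t)$ with a synaptic delay distribution $\rho(d)$, coupling strength $J$, external drive $\mu_{ext}$, and transfer function $\Phi$ takes the delayed integro-differential form

$\tau \frac{d\nu(t)}{dt} = -\nu(t) + \Phi\Big(\mu_{ext} + J \int_0^{\infty} \rho(d)\,\nu(t-d)\,\mathrm{d}d\Big),$

which in general admits no closed-form solution.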
The mean-field approach is based on
assumptions which are often unrealistic. It is
assumed that the number of neurons is infinite, but
the consequences of finite network size, the so-called
finite-size effects, may be very significant
even for large networks, making the estimates
obtained from the classic mean-field equations
imprecise (Touboul & Ermentrout, 2011). There
are other situations that violate the basic conditions
for application of this method, for example, the
presence of numerous small populations of
neurons with highly correlated activity, as in
Izhikevich's models of neural information
processing and memory based on the
polychronization effect (Izhikevich, 2006).
As a rule, the creation and analysis of mean-field
equations require substantial research effort; in
fact, each case amounts to a small (or even large)
research project. A minor complication of the explored
problem (say, the addition of correlations to an
originally Poisson external signal) may lead to a
dramatic complication of the equations to be
analyzed. If demand for this kind of study becomes
great, much simpler alternative methods
will be required.
The main idea of this paper is that the basic
instrument for the creation of networks with
pre-specified parameters should be empirical models:
formulae expressing the dependences of the parameters
describing network activity (the output parameters)
on the variables controlled by network designers,
such as the numbers of excitatory and inhibitory neurons
and synapses, the constants in the distribution laws for
synaptic weights and delays, individual neuron
parameters, etc. (the input parameters). These models
are obtained as a result of automated analysis of
experimental data by data mining algorithms. It is
assumed that the routine semi-automated procedure
for finding these empirical dependences should
include the following steps (a code sketch of the
whole procedure is given after the list):
1. Determination of the input and output parameters
which could enter the sought models. For the input
parameters it is also necessary to set their possible
variation ranges. The input parameters should not
include extensive variables directly depending on
network size; for example, the percentage of inhibitory
neurons should be used instead of the absolute number
of inhibitory neurons. This is necessary in order to
make the built models scalable.
2. Performing experiments with moderate-size
networks and various combinations of the input
parameter values. The number of these experiments
should be sufficient to cover all interesting regions
of the input parameter space and to avoid possible
model overfitting. A good starting point for this
choice is the rule that the number of experiments should
be at least two orders of magnitude greater than the
number of model degrees of freedom (for example, a
model with 10 fitted coefficients calls for at least
about 1000 experiments). A very
important factor is the size of the networks used in these
experiments. Since many interesting processes in
SNNs are statistical by nature, it is senseless to
experiment with small networks and expect that the
obtained results will be valid for large SNNs as well.
On the other hand, the networks should be much
smaller than the target simulated network;
otherwise the whole process would not make sense.
Networks consisting of thousands of neurons
would probably be a good trade-off in many cases. The input
parameter values in these experiments can be set in
accordance with various strategies: random setting,
placement on a grid, and so on.
3. Analysis of the tables consisting of the input
parameter values and the corresponding output
parameter magnitudes measured in the experiments.
This can be done using various data mining algorithms;
this step is considered in the next sections.
4. Model scalability verification. Even when
the models do not include variables directly
depending on network size, the size of the
networks used in the experiment series may be
insufficient to reveal important statistical effects, or
may cause fluctuations strong enough to distort the
dependencies sought. In order to test model
scalability, a limited number of experiments with
larger networks should be carried out.
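To make the procedure concrete, the following is a minimal Python sketch of steps 1-4. All specifics in it are illustrative assumptions rather than prescriptions of this paper: the parameter names and ranges are invented, run_network() is a hypothetical wrapper around whatever SNN simulator is used, and the polynomial regression merely stands in for the data mining algorithms considered in the next sections.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Step 1: intensive input parameters and their variation ranges
# (hypothetical choices; fractions instead of absolute counts keep
# the models scalable).
PARAM_RANGES = {
    "inhibitory_fraction": (0.1, 0.3),
    "mean_weight":         (0.05, 0.5),
    "mean_delay_ms":       (0.5, 5.0),
}

def sample_inputs(n_experiments, rng):
    # Random-setting strategy; placement on a grid would work as well.
    lo = np.array([r[0] for r in PARAM_RANGES.values()])
    hi = np.array([r[1] for r in PARAM_RANGES.values()])
    return lo + (hi - lo) * rng.random((n_experiments, len(lo)))

def run_network(params, n_neurons):
    # Placeholder: build a network of n_neurons with the given
    # intensive parameters, simulate it, and return a measured
    # output parameter (e.g. mean firing rate).
    raise NotImplementedError("plug in an SNN simulator here")

def build_empirical_model(X, y, degree=2):
    # Step 3: fit an explicit polynomial model to the experiment
    # table; any data mining algorithm could replace this.
    return make_pipeline(PolynomialFeatures(degree),
                         LinearRegression()).fit(X, y)

rng = np.random.default_rng(0)

# Step 2: ~100x more experiments than the ~10 coefficients of a
# degree-2 model in 3 variables, run on moderate-size networks.
X = sample_inputs(1000, rng)
# y = np.array([run_network(x, n_neurons=5_000) for x in X])
# model = build_empirical_model(X, y)

# Step 4: scalability check -- a few experiments with much larger
# networks, compared against the model's predictions.
# X_big = sample_inputs(20, rng)
# y_big = np.array([run_network(x, n_neurons=50_000) for x in X_big])
# print(np.abs(model.predict(X_big) - y_big).mean())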
This scheme has a number of obvious
advantages. It is semi-automatic and can be
routinely used for a great variety of network
architectures, input signals, etc., and it produces
results in an explicit analytical form which can be
used for further analysis (possibly by means of
symbolic math software, because the found empirical