and output layers represent the electrodes that register the EEG time series, while the two hidden layers represent the brain centers, with the interaction structure between them expressed by the synaptic weights between these layers. We consider that precisely this interaction schema generates the electric potentials registered on the scalp. Let us now describe each layer in detail.
The network input layer has m neurons, where m is the number of EEG channels in the model. The synaptic weight vector of each neuron in this layer is held constant during network training (red links in Figure 1). For example, it is $(0, 0, \ldots, 0, 1, 0)^T$ for the $(m-1)$-th input neuron, and the activation of the $m$-th neuron corresponds to the $m$-th channel electrode potential.
The first and second network hidden layers have n neurons each, where n is the number of brain centers in the model, so each brain center corresponds to two neurons, one in each hidden layer. The synaptic weights of the neurons in the first hidden layer are also constant; we can interpret them as feedback from the output electric potentials to the brain centers. Moreover, such a network structure is necessary for the network to be trainable to reproduce the initial EEG. The synaptic weights of the second hidden layer, in contrast, change during the training phase. Together, the two hidden layers represent the interaction between the brain centers, so we interpret the hidden layers as brain centers with activating or inhibiting connections, and we aim to obtain this internal interaction schema by training the synaptic weights of the second hidden layer.
Finally, the network's output layer has m neurons, like the input layer. The outputs are the electric potentials registered on the scalp by the electrodes; for simplicity, we treat the output neurons as the electrodes themselves. The output layer's synaptic weights are also constant and equal to the synaptic weights of the first hidden layer, so the interaction between brain centers and electrodes is symmetric. We thus arrive at the following model of the electric potential generation process: starting from some initial state, the brain centers (the second and third network layers) respond to the input activation, configure their internal connections, and reproduce the appropriate EEG signal. We assume that the influence of the i-th brain center on the j-th electrode is inversely proportional to the square of the distance between them:
$$\varphi(i, j) \sim \frac{1}{\rho^2(i, j)} \qquad (1)$$
Here $\rho(i, j)$ is the specified distance, which is encoded in the fixed synaptic weights of the neurons in the first hidden and output layers. One should note that a poor choice of the proportionality coefficient can noticeably degrade the generalization capability of the neural network; likewise, the choice of the brain center coordinates is the cornerstone of our BCNN model. In our experiments we use a second hidden layer weight matrix with linearly independent rows (a non-singular matrix).
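To make the layer structure concrete, here is a minimal NumPy sketch of the architecture as we read it. The class name, the 3-D coordinate arrays, the unit proportionality coefficient in (1), the seed and the initialization range of the trainable weights are our assumptions, not values from the paper.

import numpy as np

def distance_weights(centers, electrodes):
    # Fixed weights phi(i, j) ~ 1 / rho^2(i, j) from equation (1);
    # centers has shape (n, 3), electrodes has shape (m, 3).
    rho = np.linalg.norm(centers[:, None, :] - electrodes[None, :, :], axis=2)
    return 1.0 / rho**2                        # proportionality coefficient 1 (assumed)

class BCNN:
    def __init__(self, centers, electrodes, a=0.2, seed=0):
        rng = np.random.default_rng(seed)
        m, n = len(electrodes), len(centers)
        self.a = a                             # bipolar sigmoid parameter
        self.W_in = np.eye(m)                  # input layer: fixed one-hot weight vectors
        self.W1 = distance_weights(centers, electrodes)  # (n, m), fixed feedback weights
        self.W_out = self.W1.T                 # (m, n), fixed and symmetric to W1
        # Trainable center-to-center weights; a random matrix is almost surely
        # non-singular, i.e. its rows are linearly independent.
        self.W2 = rng.uniform(-0.5, 0.5, size=(n, n))

    def f(self, z):
        # Bipolar sigmoid, equivalent to tanh(a * z / 2).
        return 2.0 / (1.0 + np.exp(-self.a * z)) - 1.0

    def forward(self, x):
        h1 = self.f(self.W1 @ (self.W_in @ x))  # first hidden layer (brain centers)
        h2 = self.f(self.W2 @ h1)               # second hidden layer (interactions)
        y = self.f(self.W_out @ h2)             # output layer (electrode potentials)
        return h1, h2, y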
The BCNN-model is trained on time series of EEG samples taken from the electrodes. In the BCNN-model the number of input layer neurons equals the number of electrodes and of the corresponding EEG channels, so, as stated above, there is a one-to-one correspondence between the i-th input and output neurons and the i-th electrode. The main goal of the training phase is to obtain brain center interaction weights that make the tuned model suitable for EEG reproduction. As the training method we use a modification of the error back-propagation method (Haykin, 1998): some synaptic weights are kept constant during training, and the back-propagation process varies only the synaptic weights between the hidden layers. To specify the model in full, each neuron's activation function is a bipolar sigmoid, chosen for its symmetry. The input vector is normalized by the following simple linear transformation:
$$t(x) = \frac{2}{x_{\max} - x_{\min}} \cdot (x - x_{\min}) - 1.0 \qquad (2)$$

$$t^{-1}(y) = \frac{(y + 1.0) \cdot (x_{\max} - x_{\min})}{2} + x_{\min} \qquad (3)$$
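A direct implementation of the pair (2)-(3) might look as follows; the text does not specify whether $x_{\min}$ and $x_{\max}$ are taken per channel or over the whole recording, so the per-channel usage below is an assumption.

def t(x, x_min, x_max):
    # Linear map into [-1, 1], matching equation (2).
    return 2.0 / (x_max - x_min) * (x - x_min) - 1.0

def t_inv(y, x_min, x_max):
    # Inverse map back to the original potential range, equation (3).
    return (y + 1.0) * (x_max - x_min) / 2.0 + x_min

For a recording eeg of shape (T, m), per-channel bounds would be x_min = eeg.min(axis=0) and x_max = eeg.max(axis=0), broadcast across all samples.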
The learning phase proceeds as follows. Starting from an arbitrary initial input vector, we aim to obtain the vector of the first samples of the EEG time series as the output. After one pass of the modified error back-propagation method through the network, we use the vector of the first EEG samples as the input and specify as the ideal output the vector of the second samples, and so on. After one learning epoch (one pass through all EEG time series samples) is over, we start it again.
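The sketch below shows one epoch of this procedure for the BCNN class above, under our assumptions: a mean-squared error, the bipolar sigmoid derivative f'(z) = (a/2)(1 - f(z)^2), and a plain gradient step that updates only W2 while the fixed weights stay untouched. It illustrates the modified back-propagation described above rather than the authors' exact implementation.

def train_epoch(net, eeg, lr=1.5):
    # One pass through all EEG samples; eeg has shape (T, m) and is
    # assumed to be already normalized into [-1, 1] with t().
    a = net.a
    x = np.zeros(eeg.shape[1])                  # arbitrary initial input vector
    for target in eeg:                          # ideal output: the next EEG sample
        h1, h2, y = net.forward(x)
        # Back-propagate the error through the fixed output weights;
        # the bipolar sigmoid derivative is (a / 2) * (1 - f(z)**2).
        d_out = (y - target) * (a / 2) * (1.0 - y**2)
        d_h2 = (net.W_out.T @ d_out) * (a / 2) * (1.0 - h2**2)
        net.W2 -= lr * np.outer(d_h2, h1)       # only hidden-to-hidden weights change
        x = target                              # current sample becomes the next input
    return net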
3 EXPERIMENTS
We used the following data for our first experiments: an EEG of a person whose eyes were open ("Opened Eyes"), an EEG of a person whose eyes were closed ("Closed Eyes") and an EEG of a person watching fractal pictures ("Fractals"). These EEG recordings were taken from sixteen electrodes and were 17 seconds long each; the sampling rate of the analogue-to-digital conversion was 250 measurements per second. The recordings were preprocessed in the following way: artifacts were removed, and then a band-pass filter (1-70 Hz), a notch filter (50 Hz) and reasonable smoothing were applied.
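The filtering part of this preprocessing could be reproduced with SciPy roughly as follows; the Butterworth order and the notch quality factor Q are our assumptions, and artifact removal and smoothing are omitted.

import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

def preprocess(eeg, fs=250.0):
    # Band-pass 1-70 Hz, then notch out the 50 Hz mains frequency;
    # eeg has shape (T, channels), fs is the sampling rate.
    b, a = butter(4, [1.0, 70.0], btype='bandpass', fs=fs)
    eeg = filtfilt(b, a, eeg, axis=0)
    b, a = iirnotch(50.0, Q=30.0, fs=fs)
    return filtfilt(b, a, eeg, axis=0)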
The experiment was set up as follows: we set the bipolar sigmoid parameter to 0.2 and the learning rate to 1.5.