For methodological purposes we use the three aforementioned datasets. The not distorted training dataset describes the ideal, perfect map. We use this dataset for quality assessment purposes: differences between the map response and the not distorted training dataset inform us how the reconstructed model differs from the ideal one.
The not distorted training dataset is never present in real data. Modeling real-life phenomena always involves distortions of varied nature. Therefore, the map reconstruction procedure is based on the distorted training dataset. We investigate two different strategies that add distortions to the map. The distorted training dataset is derived from the not distorted one. The testing dataset is used for map quality assessment. Test datasets are half the size of the training datasets. In Section 3 we present the dependency between map size, training dataset size, and accuracy.
We have tested two kinds of weights matrices:
• a weights matrix with values drawn randomly from the uniform distribution on the [−1, 1] interval, rounded to 2 decimal places,
• a weights matrix with a given share of zeros, the remaining weights drawn randomly from the same uniform distribution.
The first kind of weights matrix does not need to be explained in greater detail. The second kind represents a map in which a certain share of the weights equals 0. Connections evaluated as 0 inform us that there is no relationship between the given nodes. A weight equal to 0 also expresses a lack of knowledge about the relationship between the given phenomena. Such maps are important from the practical perspective. Hence, we investigate maps based on weights matrices with the share of 0s set to 90%, 80%, 70%, and so on.
Activations are real numbers from the [0, 1] interval, drawn randomly from the uniform distribution. To retain comparability, wherever possible we use the same datasets. For example, each experiment for n = 8 (number of nodes) is based on the same activations.
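As an illustration, a minimal R sketch of how such data could be generated follows; the node count, the zero share, the training set size and all variable names are our assumptions, not the exact code used in the experiments.

set.seed(42)
n <- 8  # number of nodes (example value)

# First kind of weights matrix: values uniform on [-1, 1], rounded to 2 decimals.
W <- matrix(round(runif(n * n, min = -1, max = 1), 2), nrow = n)

# Second kind of weights matrix: a given share of zeros (here 70%),
# the remaining weights drawn from the same uniform distribution.
zero_share <- 0.7
W_sparse <- W
W_sparse[sample(length(W_sparse), round(zero_share * length(W_sparse)))] <- 0

# Activations: real numbers from [0, 1], drawn from the uniform distribution.
n_train <- 100  # assumed training set size
X <- matrix(runif(n * n_train), nrow = n)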
2.3 Experiments’ Methodology
In this section we discuss the methodology of the FCM reconstruction process and the methodology of the experiments. The training dataset contains distortions. The goal of our study on distortions in cognitive map training is to prepare a model that may be applied to describe real-world phenomena. We present the full course of the experiments, including the training and quality evaluation phases.
The course of the full experiment, including validation, is the following (a short code sketch is given after the list):
• there is an ideal weights matrix W that describes the system perfectly. Given are activations X. The "ideal" weights and activations produce the ideal targets ("ideal" TGT) based on Formula 1,
• the ideal data gets distorted and the perfect weights matrix is lost,
• the goal is to reconstruct the map based on:
  – activations X,
  – distorted targets TGT_D,
• with the use of an error minimization procedure based on gradients, the weights matrix is reconstructed,
• the quality of the reconstructed map is tested on the training and test datasets.
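A minimal R sketch of this course, continuing the data-generation sketch above, is given below; the exact form of the sigmoid in Formula 1 and the value of the parameter tau are our assumptions.

# Sigmoid map response (the form of f_sig and the value of tau are assumptions).
f_sig <- function(tau, s) 1 / (1 + exp(-tau * s))
tau <- 5

# "Ideal" weights W and activations X produce the "ideal" targets (Formula 1).
TGT <- f_sig(tau, W %*% X)

# The ideal data then gets distorted (see Section 2.3.1 for the weight-level
# strategy) and the perfect weights matrix W is treated as lost; the map is
# reconstructed from X and the distorted targets, and its quality is assessed
# on the training and test datasets against the ideal data.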
The procedure described above is the general methodology of our approach. The map reconstruction process, in the form in which it is run on real data, is the following:
• given are activations and distorted targets,
• with the use of gradient learning we reconstruct the map,
• the quality of the model is checked on the testing dataset.
In the following paragraphs we discuss the methodology of our approach in greater detail. We focus on distortions and collate model quality with the strength of the distortions. The less susceptible the procedure is to distortions, the better it performs on real data.
2.3.1 FCM Training with Distortions on the
Weights Level
Figure 2 illustrates the FCM training and testing procedure with distortions introduced on the weights level.
In this variant of the proposed procedure, map reconstruction is based on:
• activations X,
• targets TGT_WD (WD stands for Weights Distorted), distorted through distortions applied to the weights.
The training phase adjusts the weights matrix W'_W so that:

\[ error\big( TGT_{WD},\ f_{sig}(\tau,\, W'_{W} \cdot X) \big) \qquad (6) \]

is minimized.
The training dataset is distorted on the level of weights. The distortions are then propagated to the targets TGT_WD. The training procedure has to overcome the errors that propagate as a result of the prior distortion.
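Continuing the sketch above, the weight-level distortion can be illustrated as follows; additive Gaussian noise on the weights is our assumption of one possible distortion, not necessarily the mechanism used in the experiments.

# Distortion applied on the weights level; the distortion propagates to the
# targets TGT_WD through the map response.
W_distorted <- W + matrix(rnorm(length(W), sd = 0.1), nrow = nrow(W))
TGT_WD <- f_sig(tau, W_distorted %*% X)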
The training procedure uses the conjugate gradients method. In practical experiments we used a version of the conjugate gradient implemented in R. Gradient-based optimization minimizes the error as in Formula 6. We tested the procedure against several error measures. As a result of the optimization, a new weights matrix W_Wfin is obtained.
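As an illustration of this step, Formula 6 can be minimized with the conjugate gradients method available in R through the optim function; the mean squared error and the optimizer settings below are our assumptions, not necessarily those used in the experiments.

# Error of Formula 6 for a candidate weights matrix (mean squared error is an
# assumed choice of the error measure).
fcm_error <- function(w_vec) {
  W_cand <- matrix(w_vec, nrow = nrow(W))
  mean((TGT_WD - f_sig(tau, W_cand %*% X))^2)
}

# Conjugate gradient optimization starting from a random weights matrix;
# the result is the reconstructed weights matrix W_Wfin.
w_start <- runif(length(W), min = -1, max = 1)
fit <- optim(w_start, fcm_error, method = "CG", control = list(maxit = 500))
W_Wfin <- matrix(fit$par, nrow = nrow(W))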