through adjustment of several parameters of the Granular Cognitive Map:
• weights W_fin,
• ε: a single value for all weights or an ε matrix with an adjusted value for each weight,
• γ: a single value for all weights or a γ matrix with a fitted value for each weight.
We may adjust one or more of the above elements, either simultaneously or successively. In Figure 3 the optimization step is marked with the blue box. In this article we focus on the adjustment of the granularity parameters ε and γ. We do not interfere with the weights matrix. Instead, we try to exploit to the greatest extent the benefits of the chosen representation model for knowledge granules: intervals.
The coverage maximization task is computationally challenging. The optimization procedure has to independently adjust multiple parameters, and the maximization criteria (see Formulas 11 and 9) are discontinuous. Therefore, we have applied the particle swarm optimization (PSO) method. PSO, introduced in (Kennedy and Eberhart, 1995) and (Shi and Eberhart, 1998), does not require that the optimization problem be differentiable, and it can search within a very large space of candidate solutions. The drawback of choosing a metaheuristic is that there is no guarantee that the optimal solution will be found.
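For reference, the canonical particle update with the inertia weight ω introduced in (Shi and Eberhart, 1998) moves each candidate solution x with velocity v towards its personal best p and the swarm's global best g:

    v ← ω·v + c1·r1·(p − x) + c2·r2·(g − x),    x ← x + v,

where c1 and c2 are acceleration coefficients and r1, r2 are random numbers drawn uniformly from [0, 1]. No gradient of the objective appears anywhere in the update, which is why discontinuous criteria pose no difficulty.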
In the literature there is a discussion of practical aspects of optimization in Fuzzy Cognitive Map learning and exploration, for example in (Papakostas et al., 2012), (Stach et al., 2005) and (Stach et al., 2004). The topic of optimization for Granular Cognitive Maps has not yet been researched and documented.
The optimized Granular Cognitive Map gives new granular responses, denoted as Y.
The quality of the reconstructed Granular Cog-
nitive Map is assessed on the three aforementioned
datasets. We calculate coverage statistics with respect
to all three datasets before and after the optimization:
• before optimization: coverage of TGT_D by Y_ini(tial), coverage of TGT by Y_ini, and coverage of TGT_T(est) by Y_T(est),ini(tial),
• after optimization: coverage of TGT_D by Y, coverage of TGT by Y, and coverage of TGT_T by Y_T.
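As an illustration, a minimal sketch of such a coverage statistic in R, under our assumed reading that a target value counts as covered when it falls inside the corresponding interval response (the exact criteria are given by Formulas 9 and 11):

    # Assumed reading of the coverage statistic: the fraction of target
    # values that fall inside the corresponding interval responses.
    coverage <- function(tgt, y_lower, y_upper) {
      mean(tgt >= y_lower & tgt <= y_upper)
    }

    # Example: two of the three targets lie inside their intervals.
    coverage(c(0.2, 0.5, 0.9),
             y_lower = c(0.1, 0.6, 0.8),
             y_upper = c(0.3, 0.7, 1.0))  # 0.6667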
3 EXPERIMENTS
In this section we apply the proposed methodology in a series of experiments. Different approaches to Granular Cognitive Map reconstruction were tested and compared for the same map (n = 8, N = 24, the same X and TGT_D datasets). We reconstruct the GCM by adjusting the granularity parameters ε and γ of the interval-based representation of knowledge granules. The optimization procedure maximizes weak coverage as defined in Formula 9. In this paper we adjust the matrix of ε and/or the matrix of γ; the matrices contain a separate parameter for each weight.
Please note that ε_ij ∈ [0, 2], i, j = 1, . . . , n; 2 is the maximal length of the interval for granular weights, attained for example when the granule center is at 0. ε defines the knowledge granule size. The γ_ij are symmetry parameters with γ_ij ∈ [0, 1], i, j = 1, . . . , n. For γ = 0.5 the granule is symmetrical and the granule center lies in the middle of the interval.
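One interval construction consistent with these definitions is sketched below; it rests on our assumption that weights lie in [−1, 1] (which makes 2 the maximal interval length, at center 0), and the exact formula used in the paper may differ:

    # Sketch (assumed construction): build an interval-valued weight from
    # a numeric weight w, size parameter eps and symmetry parameter gam,
    # clipped to the assumed admissible weight range [-1, 1].
    granular_weight <- function(w, eps, gam) {
      c(lower = max(w - gam * eps, -1),       # gam = 0.5 puts w mid-interval
        upper = min(w + (1 - gam) * eps, 1))
    }

    granular_weight(0.3, eps = 0.4, gam = 0.5)  # symmetric granule [0.1, 0.5]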
The results presented in this section allow us to review the influence of the γ parameter, under restricted ε, on the coverage. We use a common plotting scheme in each subsection. The most important outcome of this section is that, as a result of optimization, we substantially increase coverage while maintaining the same generality of the model.
In the figures in the following subsections, at each data point the total specificity of the map before optimization is the same as after the optimization. Thanks to the readjustment methodology we increase coverage while retaining the same balance between specificity and generality. This improvement is achieved solely by manipulating the granularity parameters.
The particular GCM reconstruction methodolo-
gies applied and presented in this section are based
on adjustment of:
• ε,
• γ,
• ε and γ successively,
• ε and γ simultaneously.
Optimization was performed with PSO in R with default parameters; the number of iterations was set to 4000. The duration of the experiments presented in this section varied. A single experiment run with parallel optimization of 64 variables for 10 values of γ took approximately 15 hours on a standard PC.
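For illustration, a minimal sketch of such a call using the psoptim routine from the R pso package (an assumption on our part: the paper does not name the package, and weak_coverage below is a hypothetical helper standing in for Formula 9):

    library(pso)

    n <- 8  # map size; 64 variables when adjusting the full epsilon matrix
    # psoptim minimizes, so the weak-coverage criterion is negated.
    objective <- function(par) {
      eps <- matrix(par, nrow = n, ncol = n)
      -weak_coverage(eps)  # hypothetical helper implementing Formula 9
    }

    result <- psoptim(rep(NA, n * n), objective,
                      lower = 0, upper = 2,          # eps_ij in [0, 2]
                      control = list(maxit = 4000))
    eps_opt <- matrix(result$par, nrow = n, ncol = n)
    # Joint simultaneous adjustment would concatenate the gamma matrix
    # (bounds [0, 1]) into the same parameter vector.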
The character of the aforementioned datasets is varied. It was already highlighted that the distorted train dataset, the one used for GCM training, contains 0s and 1s. Due to the asymptotic properties of the sigmoid function, whose values lie in the open interval (0, 1), the model cannot cover these values exactly. It is therefore easy to spot that for the distorted train dataset the coverage statistics are generally low. The non-distorted train dataset is the "ideal" dataset, which describes perfect map responses. The test dataset contains separate values that are not related to the training data in any way. The test dataset and the "ideal" train dataset are