Granular Cognitive Map Reconstruction
Adjusting Granularity Parameters
Wladyslaw Homenda¹, Agnieszka Jastrzebska¹ and Witold Pedrycz²,³

¹ Faculty of Mathematics and Information Science,
Warsaw University of Technology, ul. Koszykowa 75, 00-662 Warsaw, Poland
² Systems Research Institute, Polish Academy of Sciences, ul. Newelska 6, 01-447 Warsaw, Poland
³ Department of Electrical & Computer Engineering,
University of Alberta, Edmonton T6R 2G7 AB, Canada
Keywords:
Fuzzy Cognitive Maps, Fuzzy Cognitive Map Reconstruction, Granular Cognitive Maps, Granular Cognitive
Map Reconstruction, Information Granules.
Abstract:
The objective of this paper is to present a methodology for Granular Cognitive Map reconstruction.
Granular Cognitive Maps model complex imprecise systems. With a proper adjustment of granularity pa-
rameters, a Granular Cognitive Map can represent a given system with a good balance between generality and
specificity of the description. The proposed approach takes advantage of the granular information repre-
sentation model. The objective of optimization is to readjust the granularity parameters in order to increase
the coverage of targets by map responses. In this way we take full advantage of the granular information
representation model and produce a better, more accurate map, which maintains exactly the same balance
between generality and specificity. The proposed methodology reconstructs a Granular Cognitive Map without
losing its specificity. The presented approach is applied in a series of experiments that allow evaluating the
quality of the reconstructed maps.
1 INTRODUCTION
Cognitive maps are abstract soft computing mod-
els, which allow describing complex systems flexi-
bly. Cognitive maps represent knowledge and rela-
tionships within knowledge in the form of a directed
graph. Nodes represent concepts (units or aggre-
gates of information). Edges between the nodes rep-
resent relations between the knowledge gathered in
such a map. A very important milestone in research
on cognitive maps is the definition of Fuzzy Cognitive
Maps (FCMs) by B. Kosko in 1986 (Kosko, 1986).
Fuzzy Cognitive Maps combine cognitive maps with
fuzzy sets. They became a powerful modeling frame-
work, and several practical applications of FCMs have
been proposed (Papageorgiou and Salmeron, 2013);
(Papakostas et al., 2008).
Later research on imprecise information represen-
tation models has brought further generalizations of
knowledge units (Zadeh, 1997). Granular Comput-
ing has emerged as an important branch of infor-
mation science (Bargiela and Pedrycz, 2003). In-
formation granules generalize units or aggregates of
knowledge. A model built on the grounds of granular
information has widened modeling capabilities.
Granularity elevates existing models by introducing
a controlled balance between the specificity of knowledge
granules and the generality of the described phenomena.
In this paper the authors discuss Granular Cogni-
tive Maps (GCMs), a generalization of cognitive maps
based on knowledge granules. The starting point of
our study is Fuzzy Cognitive Maps, which are
augmented to Granular Cognitive Maps. Most impor-
tantly, we propose a methodology for Granular Cog-
nitive Map reconstruction.
The paper is structured as follows. In Section 2
we present the methodology of Granular Cognitive Map
reconstruction. The proposed approach is applied in
Section 3.
2 METHODOLOGY
In this paper the authors introduce a methodology for
Granular Cognitive Map reconstruction. We would
like to stress at the outset that with our procedure
one is able to build a Granular Cognitive Map based
[Homenda, W., Jastrzebska, A. and Pedrycz, W.: Granular Cognitive Map Reconstruction - Adjusting Granularity Parameters. In Proceedings of the 16th International Conference on Enterprise Information Systems (ICEIS-2014), pages 175-184. DOI: 10.5220/0004869301750184. ISBN: 978-989-758-028-4. Copyright © 2014 SCITEPRESS (Science and Technology Publications, Lda.)]
only on targets and activations (the nomenclature is
discussed in the following paragraphs). From this perspec-
tive, we may also call our methodology a Granular
Cognitive Map construction procedure. We prefer
the term reconstruction because, in our opinion, it
better reflects the character of the methodology. Granular
Cognitive Maps describe certain systems (sets of re-
lated phenomena). Such a system is faithfully modeled
with an "ideal" Granular Cognitive Map. Our pro-
cedure aims at the reconstruction of this "ideal" GCM.
The second premise for the term reconstruction
derives from the validation scheme, which we use to
assess the quality of our methodology.
2.1 From a Fuzzy Cognitive Map to a
Granular Cognitive Map
Let us start the discussion with a drawing that visual-
izes a cognitive map.

Figure 1: A cognitive map.

Figure 1 shows a cognitive map
consisting of n = 3 nodes. Nodes are connected with
directed edges that represent relationships. In Fuzzy
Cognitive Maps such a relationship may take a value of
a real number from the [−1, 1] interval. Connections
are gathered in a weights matrix, which is n × n in size
and is denoted as W. Information relevant to FCM
states is available in activations. The activations, denoted
as X, form an n × N matrix, where N is the number of
observations. Activations are the forces relevant
for the n nodes in the N observations. Activations take values
of real numbers from the [0, 1] interval.
The activations together with the weights matrix allow
computing the map responses in the N observations, according
to a general formula:

Y = f(W ∘ X)   (1)

where ∘ is an operation performed on the matrices W and
X, which produces a matrix W ∘ X of size n × N, and
f is a mapping applied individually to the elements of
W ∘ X. The matrix product is an example of such an opera-
tion, and it is the one utilized in this study.
Let us denote the i-th row, the j-th column and the element
in the i-th row and j-th column of a matrix A as A_i·, A_·j
and A_ij, respectively. In order to compute the map's re-
sponse to the k-th activations (the response in the k-th iteration),
we apply the formula:

Y_·k = ftrans(W · X_·k)   (2)
and, more specifically, the i-th node response in the k-th iter-
ation is computed as:

Y_ik = ftrans(W_i· · X_·k)   (3)

where ftrans is a nonlinear, non-decreasing transfor-
mation function, ftrans : R → [0, 1]. In this paper we
use the sigmoid function:

fsig(z) = 1 / (1 + exp(−τz)),   τ > 0   (4)

We chose the τ parameter equal to 2.5 based on ex-
periments.
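To make the response computation concrete, the following sketch implements Formulas 1-4 with NumPy under the matrix-product choice stated above; the names fsig and fcm_response and the toy matrices are our assumptions, while τ = 2.5 follows the text.

```python
import numpy as np

def fsig(z, tau=2.5):
    """Sigmoid transformation ftrans: R -> [0, 1] with slope tau (Formula 4)."""
    return 1.0 / (1.0 + np.exp(-tau * z))

def fcm_response(W, X, tau=2.5):
    """FCM responses Y = fsig(W . X) for all N observations (Formulas 1-3).

    W : (n, n) weights matrix, entries in [-1, 1]
    X : (n, N) activations matrix, entries in [0, 1]
    """
    return fsig(W @ X, tau)

# toy example: n = 3 nodes, N = 2 observations
W = np.array([[ 0.0, 0.5, -0.3],
              [ 0.2, 0.0,  0.7],
              [-0.6, 0.4,  0.0]])
X = np.array([[0.1, 0.9],
              [0.5, 0.2],
              [0.8, 0.4]])
Y = fcm_response(W, X)   # each response lies strictly inside (0, 1)
```

Because the sigmoid is applied element-wise, each column of Y is the map's response to one column of activations, matching Formula 2.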
Map responses are collated with "ideal", real val-
ues, which are called targets (TGT). The closer the map
responses are to the targets, the better the map. The quality
of a Fuzzy Cognitive Map is assessed by calculating
error statistics on the discrepancies between Y and
TGT.

In our previous research we have proposed
a methodology for Fuzzy Cognitive Map reconstruc-
tion. The proposed procedure is based on gradient-
based error minimization. The outcome of FCM recon-
struction is a weights matrix that describes the connec-
tions within a map. The weights are adjusted so that the dif-
ferences between Y and TGT are the smallest. Let
us assume that we have such a reconstructed FCM.
Thereby, we have the optimized weights matrix, denoted
as W_fin, the activations X and the targets TGT.
Augmentation to a Granular Cognitive Map may
occur at two levels: weights and responses. In our ex-
periments we focus on the first approach, where aug-
mentation to knowledge granules concerns the weights.
Research on the methodological details of Granular Cog-
nitive Maps in general, and on the augmentation from
FCM to GCM, is reported in (Pedrycz and Homenda, 2012).

The augmented, granular weights, denoted in boldface as W_fin,
together with the activations X produce granular outputs:

Y = f(W_fin ⊛ X)   (5)

where ⊛ is a specific operator, applicable to the chosen
representation model of knowledge granules.
In Granular Cognitive Maps information (or ag-
gregates of information) is represented with knowl-
edge granules. There are several knowledge descrip-
tion models that may be used to define granules, in-
cluding:
- intervals,
- triangular fuzzy numbers,
- parabolic fuzzy numbers,
- others.
ICEIS2014-16thInternationalConferenceonEnterpriseInformationSystems
176
In this study we use intervals for granule representa-
tion.
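Under the interval representation, the granular response of Formula 5 can be sketched as follows. Because the activations are non-negative (X ∈ [0, 1]) and the sigmoid is increasing, the interval matrix product collapses to two ordinary products, one per interval bound; this simplification, and the function names, are our assumptions.

```python
import numpy as np

def fsig(z, tau=2.5):
    """Sigmoid transformation from Formula 4."""
    return 1.0 / (1.0 + np.exp(-tau * z))

def granular_response(W_lo, W_hi, X, tau=2.5):
    """Granular responses [y-, y+] for interval weights (a sketch of Formula 5).

    Since X >= 0 and fsig is increasing, the lower response bound uses the
    lower weight bounds and the upper response bound uses the upper ones.
    """
    Y_lo = fsig(W_lo @ X, tau)
    Y_hi = fsig(W_hi @ X, tau)
    return Y_lo, Y_hi

# toy interval weights for n = 2 nodes and a single observation
W_lo = np.array([[-0.2, 0.3], [0.1, -0.5]])
W_hi = np.array([[ 0.1, 0.6], [0.4, -0.1]])
X = np.array([[0.7], [0.2]])
Y_lo, Y_hi = granular_response(W_lo, W_hi, X)
```

Each response is now an interval [y⁻, y⁺] rather than a single number, which is what the coverage criteria below evaluate.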
In the process of Granular Cognitive Map con-
struction and exploration the following elements are rele-
vant: the activations (X), the granular weights (W_fin), the granu-
lar map responses (Y), additional parameters of gran-
ularity and the targets (TGT).

Elevation to the granular information representation
model requires the adjustment of additional parameters.
The specifics of the granularity parameters depend on the chosen
granule representation scheme. In the case of inter-
vals the additional parameters relevant for a GCM are:
- the size of the knowledge granule, ε,
- the symmetry parameter, γ.
Construction of a GCM requires a conceptual settle-
ment of the conflict between specificity and generality.
The generality of a knowledge granule is directly linked to
the ε parameter, the size of the granule. In the case of
the interval-based model it is the length of the interval.
The bigger a given granule, the more general a concept it
can describe. The specificity criterion is at variance with
generality. Specificity translates to precision. The
more specific the knowledge granules, the less fuzzy the
description of the corresponding phenomena or rela-
tions. In the process of Granular Cognitive Map re-
construction we manipulate the lengths of the inter-
vals with the aim of achieving a compromise between
generality and specificity.
The γ parameter determines the symmetry of knowl-
edge granules. The center of a knowledge granule is
where the feature evaluation is at its peak. γ = 0.5 means
that the granule is symmetrical. The following formulas
elevate the represented knowledge from the fuzzy to the
granular model based on intervals:

a⁻_i = a_i − ε · γ · |range_i|   (6)

a⁺_i = a_i + ε · (1 − γ) · |range_i|   (7)

where a⁻_i and a⁺_i are the lower and upper limits of the in-
terval that represents the knowledge granule a. Aug-
mentation from the fuzzy to the granular model can occur
at various levels. In this study we elevate the weights
towards granular weights. As a result, the maximal
length of the interval is 2.
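A minimal sketch of the elevation step in Formulas 6 and 7. The rng argument stands in for |range_i|, whose exact normalization is ambiguous in the source, and the use of (1 − γ) in the upper bound (so that γ = 0.5 yields a symmetric granule, as the text states) follows our reading; both are assumptions.

```python
import numpy as np

def granulate(a, eps, gamma, rng=1.0):
    """Elevate fuzzy values to interval granules [a-, a+] (Formulas 6-7).

    a     : scalar or array of optimized FCM weights
    eps   : granule size parameter(s) (generality); wider granule for larger eps
    gamma : symmetry parameter(s) in [0, 1]; gamma = 0.5 gives a symmetric granule
    rng   : stand-in for |range_i| (assumed normalization of the variable range)
    """
    a_lo = a - eps * gamma * rng
    a_hi = a + eps * (1.0 - gamma) * rng
    return a_lo, a_hi

# symmetric granules of total length eps * rng around each weight
W = np.array([[0.3, -0.4], [0.0, 0.8]])
W_lo, W_hi = granulate(W, eps=0.2, gamma=0.5)
```

Note that the total interval length eps * rng is independent of γ; γ only shifts the granule center relative to the original value.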
For Granular Cognitive Map quality
assessment we are interested in coverage. The con-
cept of coverage is informally illustrated in Figure 2.

Figure 2: Coverage of a target by a granular map response.

The granular map responses in Figure 2 are denoted as
[y⁻, y⁺]. These are the lower and upper limits of the in-
terval which represents the granule. In Figure 2 the
target is not covered by the map response. We aim
to construct a map that produces outputs which
cover as many targets as possible.
A Granular Cognitive Map gives granular map re-
sponses that should cover the real, observed values: the tar-
gets. There are many possible definitions of coverage.
We would like to discuss the two most intuitive kinds of
coverage for an interval-based GCM:
- weak coverage,
- strict coverage.
Weak coverage accounts for all targets that fall into the map
responses. The weak coverage of the i-th node in the k-th ob-
servation is calculated as:

covweak_ik = 1, if TGT_ik ∈ [y⁻_ik, y⁺_ik]; 0, otherwise   (8)
Weak coverage for all targets is calculated as follows:

covweak = ( Σ_k=1..N Σ_i=1..n covweak_ik ) / (N · n)   (9)

Weak coverage is averaged by dividing by the number
of observations and the number of nodes. It informs
to what extent the Granular Cognitive Map covers
single data points.
In contrast, strict coverage is increased only if all
nodes within the k-th observation were covered by the map
responses:

covstrict_·k = 1, if TGT_·k ∈ [y⁻_·k, y⁺_·k]; 0, otherwise   (10)

Strict coverage requires that the whole column of tar-
gets is covered by the map responses. The strict coverage of
all targets is calculated as follows:

covstrict = ( Σ_k=1..N covstrict_·k ) / N   (11)

Strict coverage punishes columns that were covered
incompletely. It is a coverage criterion that is much
harder to satisfy.
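The two coverage criteria of Formulas 8-11 reduce to simple boolean reductions over the target and response matrices; the function names in this sketch are ours.

```python
import numpy as np

def weak_coverage(TGT, Y_lo, Y_hi):
    """Fraction of individual targets inside their response intervals
    (Formulas 8-9): the covered count averaged over all n * N entries."""
    covered = (Y_lo <= TGT) & (TGT <= Y_hi)
    return covered.mean()

def strict_coverage(TGT, Y_lo, Y_hi):
    """Fraction of observations (columns) whose targets are ALL covered
    (Formulas 10-11)."""
    covered = (Y_lo <= TGT) & (TGT <= Y_hi)
    return covered.all(axis=0).mean()

# toy data: n = 2 nodes, N = 2 observations; only entry (0, 0) is covered
TGT  = np.array([[0.2, 0.9], [0.5, 0.4]])
Y_lo = np.array([[0.1, 0.5], [0.4, 0.5]])
Y_hi = np.array([[0.3, 0.7], [0.45, 0.8]])
```

On this toy data weak coverage is 1/4 (one covered entry out of four), while strict coverage is 0, since neither column is covered completely; this illustrates why the strict criterion is harder to satisfy.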
2.2 Methodology for Granular
Cognitive Map Reconstruction
The developed procedure for Granular Cognitive Map
reconstruction aims at finding a map that max-
imizes the coverage of targets by map responses under
given constraints on the granularity parameters. In
this section we discuss the methodology of the GCM
reconstruction. Moreover, we present a procedure for
validation.
GranularCognitiveMapReconstruction-AdjustingGranularityParameters
177
Figure 3: Granular Cognitive Map reconstruction procedure.
Let us recall that information granularity entails
certain restrictions. The conflict between granule gen-
erality and specificity on the one hand limits the model,
but on the other hand it balances precision with ver-
satility. In a Granular Cognitive Map based on in-
tervals as the granule representation scheme, general-
ity/specificity is controlled with the lengths of the intervals.

Figure 3 summarizes the methodology of GCM
reconstruction and quality assessment.
The algorithm in Figure 3, in its full form, recon-
structs a Granular Cognitive Map and assesses its
quality based on 3 datasets:
- not distorted train,
- distorted train,
- test.
The dataset denoted as not distorted train is an ideal, per-
fect dataset, which is never available in real-life in-
formation processing. It is only available for experi-
ments on artificially generated data. In this paper we
introduce the developed methodology of GCM recon-
struction; therefore, we use the perfect train dataset
for model quality assessment purposes.
Perfect data is never available in real life. Sys-
tem modeling methodologies take into account two
kinds of distortions: random and systematic. We as-
sume the existence of perfect data, but in fact we oper-
ate on a distorted dataset. Therefore, the map is trained
with respect to the randomly distorted train dataset.

Random distortions are introduced prior to the ex-
ecution of the GCM reconstruction algorithm. The distor-
tions are added to the targets as random values from the
normal distribution with standard deviation equal to
0.4. In consequence, TGT_D contains a significant num-
ber of 0s and 1s, which cannot be model outputs with
the sigmoid function of Formula 4. Due to the asymptotic
properties of the sigmoid function we cannot expect the
map to reach targets that are equal to 0 or 1.
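A sketch of the distortion step. The standard deviation 0.4 and the dataset shape (n = 8, N = 24, the sizes used later in the experiments) come from the text, while clipping to [0, 1] is our assumption about how the exact 0s and 1s in TGT_D arise.

```python
import numpy as np

rng = np.random.default_rng(0)

def distort_targets(TGT, sigma=0.4):
    """Randomly distorted targets TGT_D: Gaussian noise with standard
    deviation 0.4, then clipped back to the valid [0, 1] range (our
    assumption). Clipping yields exact 0s and 1s, which a sigmoid-based
    map can never reach."""
    noisy = TGT + rng.normal(0.0, sigma, TGT.shape)
    return np.clip(noisy, 0.0, 1.0)

TGT = rng.uniform(0.0, 1.0, (8, 24))   # n = 8 nodes, N = 24 observations
TGT_D = distort_targets(TGT)
```

With noise of this magnitude a substantial fraction of the entries saturates at the boundaries, which is exactly why coverage on the distorted train dataset stays low in the experiments.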
The test dataset is used to assess the quality of the map.
At the starting point of our algorithm we have a pre-
trained FCM with weights matrix W_fin. We also have
the activations X and the distorted targets TGT_D. We ele-
vate the weights matrix W_fin to a granular weights matrix,
denoted in boldface as W_fin, using initial val-
ues of the granularity parameters ε and γ. Granularity of
information is propagated: the activations together with the
granular weights produce the granular map responses Y_ini.
At this point we have a Granular Cognitive Map
that is not yet optimized. The optimization aims
at coverage maximization and it can be performed
ICEIS2014-16thInternationalConferenceonEnterpriseInformationSystems
178
through adjustment of several parameters of the Gran-
ular Cognitive Map:
- the weights W_fin,
- ε: a single value for all weights, or an ε matrix with
an adjusted value for each weight,
- γ: a single value for all weights, or a γ matrix with a
fitted value for each weight.
We may adjust one or more of the above elements,
simultaneously or successively. In Figure 3 the optimization
step is in the blue box. In this article we focus
on adjustment of the granularity parameters ε and γ.
We do not interfere with the weights matrix. Instead,
we try to explore to the greatest extent the benefits of
the chosen representation model of knowledge granules:
intervals.
The coverage maximization task is computationally
challenging. The optimization procedure has to in-
dependently adjust multiple parameters, and the max-
imization criteria (see Formulas 9 and 11) are discon-
tinuous. Therefore, we have applied the particle swarm
optimization method. PSO (introduced in (Kennedy
and Eberhart, 1995) and (Shi and Eberhart, 1998))
does not require that the optimization problem be dif-
ferentiable, and it can search a very large space
of candidate solutions. The drawback of choosing a
metaheuristic is that we have no guarantee
that the optimal solution will be found.
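For illustration, here is a compact particle swarm optimizer in the spirit of the cited papers. The inertia and acceleration coefficients, and the toy coverage-like objective, are our assumptions; the experiments themselves used a PSO implementation in R.

```python
import numpy as np

def pso(objective, dim, bounds, n_particles=30, iters=200, seed=0):
    """Minimal PSO (maximization) after (Kennedy and Eberhart, 1995),
    with the inertia weight of (Shi and Eberhart, 1998). The objective
    need not be differentiable, which suits discontinuous coverage."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros((n_particles, dim))                 # velocities
    pbest = x.copy()                                 # personal bests
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmax()].copy()         # global best
    w, c1, c2 = 0.72, 1.49, 1.49                     # common default coefficients
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                   # keep particles in bounds
        val = np.array([objective(p) for p in x])
        improved = val > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest, pbest_val.max()

# toy discontinuous objective standing in for weak coverage: the fraction of
# targets covered by intervals [p_i - 0.1, p_i + 0.1] around the candidate p
targets = np.array([0.2, 0.5, 0.8])
coverage = lambda p: np.mean((p - 0.1 <= targets) & (targets <= p + 0.1))
best, best_val = pso(coverage, dim=3, bounds=(0.0, 1.0))
```

The objective here is piecewise constant, exactly the kind of landscape on which gradient-based methods stall but swarm search still makes progress.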
In the literature there is a discussion on the practical as-
pects of optimization in Fuzzy Cognitive Map learn-
ing and exploration, for example (Papakostas et al.,
2012), (Stach et al., 2005) and (Stach et al., 2004).
The topic of Granular Cognitive Maps and optimiza-
tion has not yet been researched and documented.
The optimized Granular Cognitive Map gives new,
granular responses denoted as Y.

The quality of the reconstructed Granular Cog-
nitive Map is assessed on the three aforementioned
datasets. We calculate coverage statistics with respect
to all three datasets before and after the optimization:
- before optimization: coverage of TGT_D by
Y_ini(tial), coverage of TGT by Y_ini and coverage
of TGT_T(est) by Y_T,ini,
- after optimization: coverage of TGT_D by Y, coverage
of TGT by Y and coverage of TGT_T by Y_T.
3 EXPERIMENTS
In this section the authors apply the proposed methodol-
ogy in a series of experiments. Different approaches
to Granular Cognitive Map reconstruction were tested
and compared for the same map (n = 8, N = 24, the
same X and TGT_D datasets). We reconstruct the GCM
by adjusting the granularity parameters ε and γ for
the interval-based representation of knowledge gran-
ules. The optimization procedure maximizes the weak cov-
erage defined in Formula 9. In this paper we adjust
the matrix of ε and/or the matrix of γ. The matrices contain
separate parameters for each weight.
Please note that ε_ij ∈ [0, 2], i, j = 1, . . . , n; 2 is the
maximal length of the interval for granular weights,
for example in the case when the granule center is at 0.
ε defines the knowledge granule size. The γ_ij are symmetry
parameters, γ_ij ∈ [0, 1], i, j = 1, . . . , n. For γ = 0.5
the granule is symmetrical and the granule center is in the
middle of the interval.
The results presented in this section allow reviewing
the influence of the γ parameter, under restricted ε, on the
coverage. We use a common plotting scheme in each
subsection. The most important aspect of this section
is that, as a result of optimization, we substantially
increase coverage while maintaining the same generality
of the model.

In the figures in the following subsections, at each data
point the total specificity of the map before optimiza-
tion is the same as after the optimization. Thanks to
the readjustment methodology we increase coverage
and retain the same balance between specificity and
generality. This improvement is achieved only by
manipulating the granularity parameters.
The particular GCM reconstruction methodolo-
gies applied and presented in this section are based
on adjustment of:
- ε,
- γ,
- ε and γ successively,
- ε and γ simultaneously.
Optimization was performed with PSO in R with
default parameters. The number of iterations was set
to 4000. The duration of the experiments presented in this
section varied. A single experiment course for
parallel optimization of 64 variables for 10 values of
γ on a standard PC took approximately 15 hours.
The character of the aforementioned datasets is
varied. It was already highlighted that the distorted
train dataset (the one used for GCM train-
ing) contains 0s and 1s. The model, due to the asymptotic
properties of the sigmoid function, cannot cover these
values. It will be easy to spot that for the distorted
train dataset the coverage statistics are generally low. The not
distorted train dataset is the "ideal" dataset, which de-
scribes perfect map responses. The test dataset contains
separate values that are not related to the training data in
any way. The test dataset and the "ideal" train dataset are
used to assess map quality. The higher the coverage,
the better the GCM responses cover the targets.
3.1 GCM Reconstruction Through
Adjustment of ε
The first possible optimization scenario is adjustment
of the ε matrix. In this scenario the other parameters re-
main unmodified; we focus on the ba-
sic granularity parameter: the knowledge granule size.
The optimization concerns n² values of ε, separate for
each weight.
The optimization procedure was performed 11
times for varying values of the symmetry parameter
γ, from 0 to 1 in steps of 0.1. As a result we were
able to plot the results in a 3D perspective. The plots are
cut and rotated to illustrate the outputs in the most conve-
nient way. In each plot coverage is collated with γ and
ε. Values of γ can be read directly from the γ axis. Values
of ε are illustrated in a less straightforward way.
In the case when we adjust ε, the 3D plots show
coverage versus restricted ε. These restrictions con-
cern two aspects:
- the upper limit of individual ε values: ε_ij ≤ 2x,
- the sum of all ε: Σ_i=1..n Σ_j=1..n ε_ij ≤ n²x,
where x is the value on the ε axis. Introducing these re-
strictions allows manipulating the balance between
specificity and generality. With the proposed proce-
dure we benefit from the granular information repre-
sentation model to the greatest extent, while maintaining
flexibility of the phenomena description. The sum of
all ε describes the generality of the whole map, while the re-
strictions on individual ε control the specificity of a single
information granule.
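One way to enforce both restrictions inside an optimizer is to clip individual values and then rescale the total. This repair scheme is our assumption; the paper states the constraints but not how candidate solutions violating them are handled.

```python
import numpy as np

def restrict_eps(eps, x, n):
    """Enforce the two restrictions on the eps matrix for a given level x:
    individual values capped at 2x, and the total sum capped at n^2 * x
    (rescaling proportionally when the budget is exceeded). The repair
    strategy itself is an assumption for illustration."""
    eps = np.clip(eps, 0.0, 2.0 * x)      # per-weight cap: eps_ij <= 2x
    budget = n * n * x                    # total generality budget: n^2 * x
    total = eps.sum()
    if total > budget:
        eps *= budget / total             # proportional rescaling
    return eps

n, x = 3, 0.4
eps = np.full((n, n), 1.0)                # candidate violating both limits
eps_r = restrict_eps(eps, x, n)
```

Smaller x tightens both caps at once, which is exactly how the plots trade generality for specificity along the ε axis.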
Figure 4 illustrates weak and strict coverage be-
fore and after adjustment of the ε matrix. As one may
expect, the highest values of coverage occur for lenient
limits on ε. The stronger we limit the generality
criterion, the lower the coverage we get.

The poorest results are for the distorted train dataset.
This is because TGT_D has a lot of 1s and 0s, which
cannot be covered by the model.
A second important observation is that the highest cov-
erage for the training datasets occurs for symmetrical gran-
ules, around γ = 0.5. For the test dataset the optimal value
of γ is slightly smaller.

The methodology based on ε matrix adjustment
produces moderately good results. One can see that
the optimization produced generally better models,
but not in every case. For the "ideal" train dataset
(first column) we have increased the coverage in al-
most every case. Similarly, strict coverage was im-
proved. The worst results are for strict coverage on the
distorted dataset.
3.2 GCM Reconstruction Through
Adjustment of γ
In this approach we reconstruct the Granular Cogni-
tive Map by adjustment of the alternative granularity
parameter γ. In consequence, we allow asymmetrical
granular weights W_fin. The sizes of the intervals, which
represent knowledge granules, remain the same. We
look for the optimal center of the granule and move
the left and right limits of the interval proportionally.
The results (the improvement in coverage after γ ad-
justment) are illustrated in Figure 5. The plots in Figure 5
are not directly comparable with the plots in the previous
subsection, because here the optimization procedure was
allowed to modify the γ matrix.

This approach to GCM reconstruction gave satis-
fying results. Coverage improved in each case.
Noteworthy is the very good coverage on the not distorted
train dataset and the test dataset. Manipulation of the sym-
metry parameters resulted in high coverage even for a rigorous
specificity criterion. The initial values of γ (on the
horizontal axis) were reset. The chosen optimization al-
gorithm (PSO) is well suited to our approach. Weak
coverage for the same value of ε on the ideal train
dataset is similar for each initial value of γ. The pro-
cedure is stable and produces comparable results.
From the theoretical point of view, adjustment of
γ plays a secondary role in the procedure of Granular
Cognitive Map reconstruction. The symmetry parameter
does not enter the most crucial problem
of granularity: the balance between generality and speci-
ficity. Nevertheless, it was shown that adjustment of γ
improves coverage. Therefore, a methodology for Gran-
ular Cognitive Map reconstruction should take into
account both granularity parameters and take full ad-
vantage of the granular knowledge representation model.

Based on these premises, in the following subsec-
tions we present GCM reconstruction methodologies
that adjust both the ε and γ matrices.
3.3 GCM Reconstruction Through
Successive Adjustment of ε and γ
In order to fully benefit from the assumed granular
knowledge representation model, we propose an ap-
proach to Granular Cognitive Map reconstruction that
is based on adjustment of both granularity parameters:
the granule size ε and the symmetry γ. Figure 6 illustrates
the differences in coverage prior to and after the optimiza-
tion of weak coverage.
ICEIS2014-16thInternationalConferenceonEnterpriseInformationSystems
180
Figure 4: Weak (top row) and strict (bottom row) coverage on not distorted train dataset (first column), distorted train dataset
(second column) and test dataset (third column) before and after adjustment of the ε matrix.
In the GCM reconstruction scheme discussed in
this subsection we first adjust the size of the knowledge
granules with respect to the restriction on the sum of all ε
(Σ_i=1..n Σ_j=1..n ε_ij ≤ n²x) and the restriction on the indi-
vidual value of each ε_ij (ε_ij ≤ 2x). Subsequently,
the values of γ are adjusted. In other words, in the first
step we maximize coverage while maintaining the balance
between generality and specificity. Next, we tune the
map by adjusting the granule symmetry.
The optimization procedure produced more accurate
GCMs. Coverage on each dataset was better than when
only the ε matrix is adjusted. In contrast, adjustment of
only the γ parameters gave similar results. For the largest
values of the granule size, weak coverage on the ideal
dataset saturates to 1. Most importantly, cov-
erage on the test dataset has improved. The presented ap-
proach gives satisfying results.
3.4 GCM Reconstruction Through
Simultaneous Adjustment of ε and γ
The last approach to Granular Cognitive Map recon-
struction which we discuss in this article is simulta-
neous adjustment of both granularity parameters, ε
and γ. Please note that the ε values are limited by the re-
strictions on the sum and on the individual values (as
mentioned in the previous subsections). Figure 7 il-
lustrates the improvements in coverage provided by the
GCM before and after the reconstruction procedure.
The proposed procedure produces better fitted
Granular Cognitive Maps that do not lose the prior bal-
ance between generality and specificity.

The GCM reconstruction strategy based on si-
multaneous adjustment of ε and γ is very successful.
Weak and strict coverage improved in each case.
On the "ideal" training dataset weak coverage reaches
1 even for strong restrictions on the size of the knowledge
granules. Weak coverage on the test dataset has improved
as well. Strict coverage is much harder to obtain, since it
requires all nodes in a given observation to be covered by
the map response. Nevertheless, strict coverage is also
satisfactory. The only dataset that still has rather poor
coverage is the distorted training dataset. This is no
surprise though: the distorted dataset contains 0s and 1s,
which cannot be covered by the map response.

Figure 5: Weak (top row) and strict (bottom row) coverage on not distorted train dataset (first column), distorted train dataset (second column) and test dataset (third column) before and after adjustment of the γ matrix.
3.5 Comparison of Presented
Approaches to GCM Reconstruction
In this subsection we summarize and evaluate the applied
approaches to Granular Cognitive Map reconstruc-
tion. Table 1 compares the mean weak coverage on the
three discussed datasets.
Table 1: Comparison of proposed approaches to Granular
Cognitive Map reconstruction.

                       mean of weak coverage
approach               train ND   train D   test
adjustment of ε         0.775      0.413    0.658
adjustment of γ         0.892      0.466    0.776
adjust. of ε then γ     0.894      0.456    0.761
adjust. of ε and γ      0.917      0.461    0.790
Table 1 illustrates the differences in mean weak cover-
age for the 4 different approaches to Granular Cogni-
tive Map reconstruction discussed in this paper. The val-
ues in Table 1 are the means of the weak coverage plotted
in Figures 4, 5, 6 and 7. With these statistics we may
quantitatively compare the applied methodologies.
The most successful strategy of Granular Cogni-
tive Map reconstruction adjusts ε and γ simultane-
ously: in the last row of Table 1 the mean weak cover-
ages are the highest. The poorest results occur when we
adjust only the ε matrix. The quality of Granular Cog-
nitive Maps reconstructed by adjustment of the γ matrix
and by successive adjustment of ε and γ is similar.
Coverage is the worst for the distorted train
dataset. This is because of the specifics of this dataset
(it contains 0s and 1s). We have chosen such a train-
ing dataset on purpose. As a result, the reconstructed
GCMs are not overfitted. The developed methodol-
ogy for experiments with GCM reconstruction was
properly constructed, as it produced high coverage
for the "ideal" dataset and for the test dataset.
Figure 6: Weak (top row) and strict (bottom row) coverage on not distorted train dataset (first column), distorted train dataset (second column) and test dataset (third column) before and after successive adjustment of ε and then γ.

The proposed approach was focused on Granular
Cognitive Map reconstruction with given restrictions
on the generality/specificity balance criterion. The
aim of the discussed optimization strategies was to
produce better-fitted Granular Cognitive Maps that do
not lose the prior balance between generality and speci-
ficity. These requirements were implemented through
explicit limitations on the values of individual ε and on
the sum of all ε. As an outcome, the reconstructed maps cover
more targets but maintain the required balance that de-
termines their precision.
4 CONCLUSIONS
In this article the authors have proposed a general method-
ology for Granular Cognitive Map reconstruction.
The presented approach has been concretized with 4 dis-
tinct GCM optimization schemes. We have discussed
and compared Granular Cognitive Map reconstruc-
tion procedures that adjust granularity parameters.
The theoretical assumptions of the proposed methodol-
ogy focus on the most important indicator of granularity:
the balance between generality and specificity.
The paper is supported by a series of experiments,
which illustrate the results of the proposed Granular Cog-
nitive Map reconstruction procedures. We showed
that proper optimization schemes allow increasing
coverage while maintaining the same generality of the
whole map. The GCM reconstruction strategy that pro-
duces the most accurate model (a map that covers the
greatest number of targets) is simultaneous adjustment
of both granularity parameters, ε and γ. The discussed
approaches to ε and/or γ optimization take full benefit
of the granular knowledge representation model.
The objective of the research discussed in this
article is to propose appropriate methodologies for
Granular Cognitive Map training and exploration.
In future research the authors plan to investigate other
knowledge granule representation models, most im-
portantly fuzzy numbers and bipolar knowledge rep-
resentation schemes.
GranularCognitiveMapReconstruction-AdjustingGranularityParameters
183
Figure 7: Weak (top row) and strict (bottom row) coverage on not distorted train dataset (first column), distorted train dataset
(second column) and test dataset (third column) before and after simultaneous adjustment of ε and γ.
ACKNOWLEDGEMENTS
The research is partially supported by the National
Science Center, grant No 2011/01/B/ST6/06478, de-
cision no DEC-2011/01/B/ST6/06478.
REFERENCES

Bargiela, A. and Pedrycz, W. (2003). Granular Computing: An Introduction. Kluwer Academic Publishers.

Kennedy, J. and Eberhart, R. (1995). Particle swarm optimization. In Proceedings of IEEE International Conference on Neural Networks IV.

Kosko, B. (1986). Fuzzy cognitive maps. Int. J. Man-Machine Studies, 7.

Papageorgiou, E. I. and Salmeron, J. L. (2013). A review of fuzzy cognitive maps research during the last decade. IEEE Transactions on Fuzzy Systems, 21.

Papakostas, G., Koulouriotis, D., and Tourassis, A. P. V. (2012). Towards hebbian learning of fuzzy cognitive maps in pattern classification problems. Expert Systems with Applications, 39.

Papakostas, G. A., Boutalis, Y. S., Koulouriotis, D. E., and Mertzios, B. G. (2008). Fuzzy cognitive maps for pattern recognition applications. International Journal of Pattern Recognition and Artificial Intelligence, 22(8).

Pedrycz, W. and Homenda, W. (2012). From fuzzy cognitive maps to granular cognitive maps. In Proc. of ICCCI, LNCS 7653.

Shi, Y. and Eberhart, R. (1998). A modified particle swarm optimizer. In Proceedings of IEEE International Conference on Evolutionary Computation.

Stach, W., Kurgan, L., Pedrycz, W., and Reformat, M. (2004). Learning fuzzy cognitive maps with required precision using genetic algorithm approach. Electronics Letters, 40.

Stach, W., Kurgan, L., Pedrycz, W., and Reformat, M. (2005). Genetic learning of fuzzy cognitive maps. Fuzzy Sets and Systems, 153.

Zadeh, L. (1997). Towards a theory of fuzzy information granulation and its centrality in human reasoning and fuzzy logic. Fuzzy Sets and Systems, 90.
ICEIS2014-16thInternationalConferenceonEnterpriseInformationSystems
184