[Figure 1: Architecture of a typical neurofuzzy system, with an input layer, a fuzzification layer, a fuzzy-rule layer, the rule weights, and a defuzzification layer producing the output. The number of rules is given by P = ∏_{i=1}^{n} N_i, where n is the number of inputs and N_i is the number of antecedent fuzzy sets for input i, while M is the number of consequent fuzzy sets. Also, Q = Σ_{i=1}^{n} N_i and R = Σ_{k=0}^{i−1} N_k.]
where r_ij is the ijth rule, A_i represents the input fuzzy sets, B_j is the output fuzzy set, and the rule confidence is c_ij ∈ [0, 1]. The value of c_ij indicates the degree of confidence in the relationship between A_i and B_j. When c_ij is zero, the rule is inactive and does not affect the output. Otherwise, the rule is active and contributes to the output according to the degree of activation of the antecedent. Subsequently, the fuzzy rules can be learned by adapting the rule confidences, changing the strength with which a rule fires.
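As a minimal sketch of this mechanism, the contribution of a single rule can be computed as its antecedent activation scaled by its confidence c_ij. The function names, the product t-norm, and the numeric values below are illustrative assumptions, not part of the original formulation:

```python
def antecedent_activation(memberships):
    # Degree of activation of the rule antecedent
    # (product t-norm over the input membership grades).
    act = 1.0
    for m in memberships:
        act *= m
    return act

def rule_contribution(memberships, confidence, centre):
    # A rule with confidence 0 is inactive and contributes nothing;
    # otherwise it contributes in proportion to its activation.
    return confidence * antecedent_activation(memberships) * centre

# Example: two antecedent grades, confidence 0.8, output-set centre 2.0
print(rule_contribution([0.5, 0.6], 0.8, 2.0))  # 0.5*0.6*0.8*2.0 ≈ 0.48
```

Setting the confidence to zero switches the rule off entirely, which is exactly the inactive case described above.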
Once the neurofuzzy controller has been designed and constructed, the objective of the selected learning algorithm is to determine the appropriate values for the parameters of the membership functions and the linking weights (Chen and Peng, 1999). The antecedent and consequent weights require as many parameters as there are modifiable parameters in the membership functions, so it is common to have a weight matrix, W, instead of a weight vector, w. For instance, triangular membership functions have three parameters that can be updated. This leads to several free parameters to update, slowing the learning process. In addition, the resulting membership distribution may not be as transparent as the designer's distribution. For example, in (Berenji and Khedkar, 1992), before learning, the membership function 'positive small' is in the positive region of the universe of discourse but, after learning, it is in the negative region, losing its meaning. This can be corrected if the system is able to correct inappropriate definitions of the labels. When the neurofuzzy system has only one modifiable weight vector (i.e., the rule confidence vector), leaving the other vectors and the fuzzy memberships fixed, the system can still describe the input-output mapping completely. The use of rule confidences rather than a weight vector allows the model to be represented as a set of transparent fuzzy rules (Brown and Harris, 1994). However, using a rule weight vector considerably reduces the storage requirements and the computational cost (Harris et al., 2002, p. 92). Nevertheless, it is possible to alternate between the rule weight vector and the rule confidence vector without losing any information.
The transformation from the weight vector, w_i, to the vector of rule confidences, c_i, is a one-to-many mapping. The weight vector can be converted into confidences by measuring its grade of membership to the various fuzzy output sets, µ_{B_j}(·):

    c_ij = µ_{B_j}(w_i)

The inverse transformation, from c_i to w_i, is given by:

    w_i = Σ_j c_ij y_j^c

where y_j^c is the centre of the jth output set µ_{B_j}(u), which has bounded and symmetric membership functions.
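A small sketch of the two transformations can make this concrete. The triangular output sets, their centres, and the half-width below are illustrative assumptions; with symmetric triangles that form a partition of unity, converting a weight to confidences and back recovers the weight exactly, consistent with the claim that no information is lost:

```python
def tri(x, left, centre, right):
    # Triangular membership function mu_Bj with given support and centre.
    if x <= left or x >= right:
        return 0.0
    if x <= centre:
        return (x - left) / (centre - left)
    return (right - x) / (right - centre)

# Assumed output sets B_j: symmetric triangles with these centres y_j^c.
centres = [-1.0, 0.0, 1.0]
half_width = 1.0

def weight_to_confidences(w):
    # c_ij = mu_Bj(w_i): membership of the weight in each output set.
    return [tri(w, c - half_width, c, c + half_width) for c in centres]

def confidences_to_weight(conf):
    # w_i = sum_j c_ij * y_j^c: confidence-weighted sum of the centres.
    return sum(c * y for c, y in zip(conf, centres))

conf = weight_to_confidences(0.25)    # [0.0, 0.75, 0.25]
w_back = confidences_to_weight(conf)  # recovers 0.25
```

The exact round trip holds for this evenly spaced partition; for other membership shapes the mapping remains one-to-many, as stated above.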
The maximum number of rule confidences, p_c, depends on the number of inputs, n, the number of fuzzy sets in each input, p_i, and the number in the output, q:

    p_c = q ∏_{i=1}^{n} p_i        (1)
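Equation (1) is a straightforward product; a short sketch (with illustrative set counts) shows how quickly p_c grows with the input dimension:

```python
from math import prod

def max_rule_confidences(input_set_counts, q):
    # p_c = q * prod_i p_i  (Eq. 1): q output sets times the product
    # of the number of fuzzy sets p_i on each input.
    return q * prod(input_set_counts)

# Example: 3 inputs with 5 fuzzy sets each and 5 output sets
print(max_rule_confidences([5, 5, 5], 5))  # 625

# Adding a fourth input with 5 sets multiplies the count by 5
print(max_rule_confidences([5, 5, 5, 5], 5))  # 3125
```

This exponential growth in the number of confidences is the curse of dimensionality discussed next.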
Accordingly, if the number of rules is large because there are many inputs and/or many fuzzy sets per input, transparency can be lost. Consequently, it is important to keep the number of rules relatively low, avoiding redundant ones. In addition, as the input dimension increases, the required resources (data, memory, processing time, etc.) increase exponentially (Bossley, 1997). Therefore, fuzzy systems suffer from the curse of dimensionality (Bellman, 1961). Consequently, practical fuzzy and neurofuzzy systems are restricted to problems with input dimension typically less than four. However, if some form of model complexity reduction is applied, fuzzy and neurofuzzy systems can be used to solve high-dimensional problems (Harris et al., 2002) and still be transparent.
3 FUZZY INFERENCE ENGINE
The fuzzy inference engine evaluates the control rules
stored in the rule-base. It performs four main tasks:
rule firing, strength calculation, fuzzy implication
and rule aggregation. The current fuzzy input set is