Step 2: Rule Learning
After constructing the initial fuzzy sets and the antecedents of the fuzzy rules, the fuzzy rule learning procedure is applied to the training data. In this phase, the activation of each rule unit and the activation of each output unit are computed for every pattern.
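The sketch below illustrates these two computations, assuming the usual NEFCLASS choices of the min t-norm for rule activations and max aggregation at the output (class) units; the function names and data layout are ours.

def rule_activation(pattern, antecedents, fuzzy_sets):
    """Activation of one rule unit: the min t-norm over the pattern's
    memberships in the rule's antecedent fuzzy sets."""
    return min(fuzzy_sets[var][term](pattern[var])
               for var, term in antecedents.items())

def output_activations(pattern, rules, fuzzy_sets):
    """Activation of each output (class) unit: the maximum activation
    over all rules whose consequent points to that class."""
    out = {}
    for rule in rules:
        act = rule_activation(pattern, rule["antecedents"], fuzzy_sets)
        out[rule["class"]] = max(out.get(rule["class"], 0.0), act)
    return out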
Step 3: Fuzzy Sets Learning
After the construction of the fuzzy rules, a fuzzy set learning procedure is applied to the training data such that the membership functions are tuned to better match the extent of the coverage of each individual class in the training data space, as shown in Figure 2.1.
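As a rough illustration of such tuning for a triangular membership function, the heuristic below (our own simplification for exposition, not the exact NEFCLASS learning rule) pulls the set's parameters toward a pattern whose membership should grow and pushes them away otherwise.

def triangular(a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

def tune_fuzzy_set(a, b, c, x, error, lr=0.1):
    """Shift (a, b, c) toward x when error > 0 (membership of x should
    increase) and away from x when error < 0; keeping lr * error < 1
    preserves the ordering a <= b <= c."""
    delta = lr * error * (1.0 - triangular(a, b, c)(x))
    return (a + delta * (x - a),
            b + delta * (x - b),
            c + delta * (x - c))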
Step 4: Fuzzy Sets Tuning by Asymmetric Linguistic Hedges (modified NEFCLASS)
In this phase, a linguistic hedge can be selected to adjust the membership functions. The objective of this step is to increase or decrease the membership values using linguistic hedges. The linguistic hedge parameter specifies which hedge is applied: it can be set to NONE or to one of the 11 asymmetric linguistic hedges provided. The effect of the various settings of this parameter is the main focus of this work.
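A hypothetical sketch of how this parameter could be encoded is given below; the dictionary name and the exponent pairs are our own illustration, anticipating the power-based hedge notation of Section 3.2.

# Hypothetical encoding of the linguistic hedge parameter: NONE disables
# hedging; each named hedge maps to a pair (exponent applied to the left
# side, exponent applied to the right side) of the membership function.
HEDGE_PARAMETER = {
    "NONE": None,
    "BIG-CONCAVERIGHT": (1, 4),               # right side: ABSOLUTELY
    "BIG-CONVEXLEFT-CONCAVERIGHT": (1/4, 4),  # left: SLIGHTLY, right: ABSOLUTELY
    # ... the remaining nine hedges from Table 1
}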
3.2 Improving Accuracy by Asymmetric Linguistic Hedges
NEURO-FUZZY systems store their knowledge as linguistic values on the connections between neurons of the input and rule layers (Bargiela and Pedrycz, 2001, p. 276). Adding weights to the connections between features and their associated rules increases the influence of those features on the decision-making process. This motivates us to use asymmetric linguistic hedges to increase the connection weights between neurons and, hence, the membership values of the skewed features. We argue that asymmetric hedges can be used to express the information distribution and to bias the membership functions toward the skew present in the data. We hypothesize that if the shape of a membership function resembles the skewness of the data, the information distribution will be similar to the data distribution; this minimizes the effect of bias within the data and thus improves the accuracy of the classifier. In particular, we examine the treatment of positively skewed data; however, the approach can be extended and modified to treat negative skewness. Our design modification aims to improve the accuracy of NEFCLASS by using asymmetric linguistic hedges to tune and optimize the membership functions. Hence, the objective of this paper is to answer the following research question:
Does devising asymmetric linguistic hedges improve the accuracy of the NEFCLASS classifier for skewed datasets?
• Null Hypothesis: There will be no significant decrease in the misclassification percentage of the NEFCLASS classifier after applying the asymmetric hedges.
• Alternative Hypothesis: Applying asymmetric linguistic hedges to the membership functions significantly reduces the misclassification percentage of NEFCLASS for skewed data.
Our asymmetric hedges apply different hedges to each side of a membership function. The effect of an asymmetric hedge is to skew a membership function in a positive or negative direction (Bargiela and Pedrycz, 2001). Table 1 displays the name, the mathematical operation, and the type of each of the 11 asymmetric hedges defined for our experiments. The name assigned to each asymmetric hedge has been chosen to reflect the type of operation and the amount of change applied to each side of the membership function.
Five asymmetric hedges are defined to change the right side of a membership function while the left side remains unchanged. For example, the BIG-CONCAVERIGHT hedge applies ABSOLUTELY to the right side of the membership function, which results in a big decrease (concavity) on the right side while the left side remains unchanged. The other six hedges apply a concentration operation on one side and a dilation operation on the other side. For example, the BIG-CONVEXLEFT-CONCAVERIGHT hedge applies SLIGHTLY to the left side and ABSOLUTELY to the right side of the triangular fuzzy set, which results in a big increase of the membership function on the left side (convexity) and a big decrease on the right side (concavity).
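To make the two-sided operation concrete, here is a minimal sketch, assuming the power-based reading of the hedges (ABSOLUTELY as the fourth power, SLIGHTLY as the fourth root, matching the $MF^{(p)}$ notation introduced below); asymmetric_hedge is our own helper name.

def asymmetric_hedge(mu_func, peak, p_left, p_right):
    """Apply exponent p_left to memberships left of the peak and
    p_right to memberships right of it. Exponents < 1 dilate the set
    (convexity); exponents > 1 concentrate it (concavity)."""
    def hedged(x):
        mu = mu_func(x)
        return mu ** (p_left if x <= peak else p_right)
    return hedged

# Triangular set over [0, 10] peaking at 5, then BIG-CONVEXLEFT-
# CONCAVERIGHT: SLIGHTLY (exponent 1/4) left, ABSOLUTELY (4) right.
tri = lambda x: max(0.0, 1.0 - abs(x - 5.0) / 5.0)
hedged = asymmetric_hedge(tri, peak=5.0, p_left=0.25, p_right=4.0)
print(hedged(2.5), hedged(7.5))   # left side rises, right side falls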
In this work we will use the terms $MF^{(2)}$, $MF^{(3)}$, $MF^{(4)}$, $MF^{(1/2)}$, $MF^{(1/3)}$, and $MF^{(1/4)}$ to denote VERY, EXTREMELY, ABSOLUTELY, FAIRLY, SOMEWHAT, and SLIGHTLY, respectively. Note that in tables and figures, linguistic hedges have been replaced with these terms for the sake of clarity of the operations and to save space.
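For instance, interpreting $MF^{(p)}$ as raising a membership value to the power $p$, a membership of 0.8 becomes $0.8^{2} = 0.64$ under VERY (concentration) and $0.8^{1/2} \approx 0.89$ under FAIRLY (dilation).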
3.3 Synthesized Datasets
Three synthesized datasets were used for the experiments. The synthesized datasets were produced by randomly generating numbers following the F-DISTRIBUTION with different degrees of freedom
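As an illustration of this generation step, the snippet below draws positively skewed values from an F-distribution with NumPy; the degrees of freedom and sample size shown are placeholders, not the values used in our experiments.

import numpy as np

rng = np.random.default_rng(seed=42)

# F-distributed values are positively skewed; varying the degrees of
# freedom (dfnum, dfden) varies the amount of skew. The values below
# are illustrative placeholders.
feature = rng.f(dfnum=5, dfden=20, size=1000)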