$$\mathrm{Dev(Res)} = \sum_i \left[(y_i^L - y_i^{*L})^2 + (y_i - y_i^*)^2 + (y_i^R - y_i^{*R})^2\right]$$
represents the residual deviance,
$$\mathrm{Dev(Regr)} = \sum_i \left[(y_i^{*L} - \bar{y}^{*L})^2 + (y_i^* - \bar{y}^*)^2 + (y_i^{*R} - \bar{y}^{*R})^2\right]$$
represents the regression deviance and
$$d^2(\bar{Y}^*, \bar{Y}) = (\bar{y}^{*L} - \bar{y}^L)^2 + (\bar{y}^* - \bar{y})^2 + (\bar{y}^{*R} - \bar{y}^R)^2$$
represents the distance between the theoretical and empirical average values of Y.
Synthetically, expression (6) can be written as
$$\mathrm{Dev(Tot)} = \mathrm{Dev(Res)} + \mathrm{Dev(Regr)} + n\, d^2(\bar{Y}^*, \bar{Y}) + \eta$$
where:
$$\begin{aligned}
\eta = {}& 2\sum_i \left[(y_i^L - y_i^{*L})(y_i^{*L} - \bar{y}^L) + (y_i - y_i^*)(y_i^* - \bar{y}) + (y_i^R - y_i^{*R})(y_i^{*R} - \bar{y}^R)\right] \\
& + 2\sum_i \left[(y_i^{*L} - \bar{y}^{*L})(\bar{y}^{*L} - \bar{y}^L) + (y_i^* - \bar{y}^*)(\bar{y}^* - \bar{y}) + (y_i^{*R} - \bar{y}^{*R})(\bar{y}^{*R} - \bar{y}^R)\right].
\end{aligned}$$
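Before any property of the estimates is invoked, the decomposition above is a purely algebraic identity: it holds for arbitrary fuzzy observations and arbitrary theoretical values. A quick numerical sketch (illustrative numbers only, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Empirical triangular fuzzy observations (left, center, right components)
# and arbitrary "theoretical" values: the identity needs no fitting at all.
yL, y, yR = np.sort(rng.normal(size=(3, n)), axis=0)
ysL, ys, ysR = np.sort(rng.normal(size=(3, n)), axis=0)

# The four components of the decomposition under Diamond's metric.
dev_tot = ((yL - yL.mean())**2 + (y - y.mean())**2 + (yR - yR.mean())**2).sum()
dev_res = ((yL - ysL)**2 + (y - ys)**2 + (yR - ysR)**2).sum()
dev_regr = ((ysL - ysL.mean())**2 + (ys - ys.mean())**2
            + (ysR - ysR.mean())**2).sum()
d2 = ((ysL.mean() - yL.mean())**2 + (ys.mean() - y.mean())**2
      + (ysR.mean() - yR.mean())**2)

# eta: residuals crossed with (theoretical value - empirical mean), plus
# regression deviations crossed with the differences of the two means.
eta = (2 * ((yL - ysL) * (ysL - yL.mean()) + (y - ys) * (ys - y.mean())
            + (yR - ysR) * (ysR - yR.mean())).sum()
       + 2 * ((ysL - ysL.mean()) * (ysL.mean() - yL.mean())
              + (ys - ys.mean()) * (ys.mean() - y.mean())
              + (ysR - ysR.mean()) * (ysR.mean() - yR.mean())).sum())

assert np.isclose(dev_tot, dev_res + dev_regr + n * d2 + eta)
```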
As the sum of the deviations of each component from its average equals zero, it is
$$2\sum_i \left[(y_i^{*L} - \bar{y}^{*L})(\bar{y}^{*L} - \bar{y}^L) + (y_i^* - \bar{y}^*)(\bar{y}^* - \bar{y}) + (y_i^{*R} - \bar{y}^{*R})(\bar{y}^{*R} - \bar{y}^R)\right] =$$
$$= 2\left[(\bar{y}^{*L} - \bar{y}^L)\sum_i (y_i^{*L} - \bar{y}^{*L}) + (\bar{y}^* - \bar{y})\sum_i (y_i^* - \bar{y}^*) + (\bar{y}^{*R} - \bar{y}^R)\sum_i (y_i^{*R} - \bar{y}^{*R})\right] = 0$$
and the amount η is reduced to
=−−+
+−−+−−=η
∑
∑∑
)yy)(yy(2
)yy)(yy(2)yy)(yy(2
RR*
i
R*
i
R
i
*
i
*
ii
LL*
i
L*
i
L
i
.y)yy(2y)yy(2y)yy(2
y)yy(2y)yy(2y)yy(2
RR*
i
R
i
R*
i
R*
i
R
i
*
ii
*
i
*
ii
LL*
i
L
i
L*
i
L*
i
L
i
∑∑∑
∑∑∑
−−−+−−
+−+−−−=
(7)
Moreover, as it is
$$y_i^{*L} = a + b\,x_i^L + c\,z_i^R, \qquad y_i^* = a + b\,x_i + c\,z_i, \qquad y_i^{*R} = a + b\,x_i^R + c\,z_i^L,$$
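Since each component of the theoretical values is linear in a, b and c, minimizing the residual deviance under Diamond's metric amounts to a crisp least squares problem on the three component equations stacked together, and the normal equations (3)–(5) become the usual orthogonality conditions. A minimal sketch with illustrative data (the variable names and random inputs are ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8

# Illustrative triangular fuzzy regressors and response (L <= center <= R).
xL, x, xR = np.sort(rng.normal(size=(3, n)), axis=0)
zL, z, zR = np.sort(rng.normal(size=(3, n)), axis=0)
yL, y, yR = np.sort(rng.normal(size=(3, n)), axis=0)

# Stack the three component equations:
#   y_i^L ~ a + b x_i^L + c z_i^R,   y_i ~ a + b x_i + c z_i,
#   y_i^R ~ a + b x_i^R + c z_i^L
X = np.block([[np.ones((n, 1)), xL[:, None], zR[:, None]],
              [np.ones((n, 1)), x[:, None], z[:, None]],
              [np.ones((n, 1)), xR[:, None], zL[:, None]]])
(a, b, c), *_ = np.linalg.lstsq(X, np.concatenate([yL, y, yR]), rcond=None)

# Theoretical (fitted) components and residual components.
ysL, ys, ysR = a + b * xL + c * zR, a + b * x + c * z, a + b * xR + c * zL
eL, e, eR = yL - ysL, y - ys, yR - ysR

# Normal equations (3), (4), (5) as orthogonality conditions.
assert abs(eL.sum() + e.sum() + eR.sum()) < 1e-9                       # (3)
assert abs((xL * eL).sum() + (x * e).sum() + (xR * eR).sum()) < 1e-9   # (4)
assert abs((zR * eL).sum() + (z * e).sum() + (zL * eR).sum()) < 1e-9   # (5)
```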
it is also
$$2\sum_i (y_i^L - y_i^{*L})\,y_i^{*L} + 2\sum_i (y_i - y_i^*)\,y_i^* + 2\sum_i (y_i^R - y_i^{*R})\,y_i^{*R} = 0.$$
By replacing the expressions of the theoretical values in the latter equation, we obtain
$$2\left[\sum_i (y_i^L - y_i^{*L})(a + b\,x_i^L + c\,z_i^R) + \sum_i (y_i - y_i^*)(a + b\,x_i + c\,z_i) + \sum_i (y_i^R - y_i^{*R})(a + b\,x_i^R + c\,z_i^L)\right] =$$
$$\begin{aligned}
= 2\Big\{ & a\left[\left(\textstyle\sum y_i^L + \sum y_i + \sum y_i^R\right) - \left(\textstyle\sum y_i^{*L} + \sum y_i^* + \sum y_i^{*R}\right)\right] \\
+ {} & b\left[\left(\textstyle\sum y_i^L x_i^L + \sum y_i x_i + \sum y_i^R x_i^R\right) - \left(\textstyle\sum y_i^{*L} x_i^L + \sum y_i^* x_i + \sum y_i^{*R} x_i^R\right)\right] \\
+ {} & c\left[\left(\textstyle\sum y_i^L z_i^R + \sum y_i z_i + \sum y_i^R z_i^L\right) - \left(\textstyle\sum y_i^{*L} z_i^R + \sum y_i^* z_i + \sum y_i^{*R} z_i^L\right)\right]\Big\}
\end{aligned}$$
where
$$\left(\textstyle\sum y_i^L + \sum y_i + \sum y_i^R\right) - \left(\textstyle\sum y_i^{*L} + \sum y_i^* + \sum y_i^{*R}\right) = 0 \quad \text{for (3),}$$
$$\left(\textstyle\sum y_i^L x_i^L + \sum y_i x_i + \sum y_i^R x_i^R\right) - \left(\textstyle\sum y_i^{*L} x_i^L + \sum y_i^* x_i + \sum y_i^{*R} x_i^R\right) = 0 \quad \text{for (4),}$$
$$\left(\textstyle\sum y_i^L z_i^R + \sum y_i z_i + \sum y_i^R z_i^L\right) - \left(\textstyle\sum y_i^{*L} z_i^R + \sum y_i^* z_i + \sum y_i^{*R} z_i^L\right) = 0 \quad \text{for (5).}$$
Finally, expression (7) reduces to
$$\eta = -2\,\bar{y}^L \sum_i (y_i^L - y_i^{*L}) - 2\,\bar{y}\sum_i (y_i - y_i^*) - 2\,\bar{y}^R \sum_i (y_i^R - y_i^{*R}) = -2\left(\bar{y}^L \textstyle\sum e_i^L + \bar{y}\sum e_i + \bar{y}^R \sum e_i^R\right),$$
where $e_i^L = y_i^L - y_i^{*L}$, $e_i = y_i - y_i^*$ and $e_i^R = y_i^R - y_i^{*R}$ denote the residual components.
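This reduction of η can be checked numerically once the estimates satisfy the normal equations. A self-contained sketch (illustrative data; the fit uses the equivalent crisp least squares on the stacked component equations):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8

xL, x, xR = np.sort(rng.normal(size=(3, n)), axis=0)
zL, z, zR = np.sort(rng.normal(size=(3, n)), axis=0)
yL, y, yR = np.sort(rng.normal(size=(3, n)), axis=0)

# Fit a, b, c by least squares on the stacked component equations,
# so that the normal equations (3)-(5) hold.
X = np.block([[np.ones((n, 1)), xL[:, None], zR[:, None]],
              [np.ones((n, 1)), x[:, None], z[:, None]],
              [np.ones((n, 1)), xR[:, None], zL[:, None]]])
(a, b, c), *_ = np.linalg.lstsq(X, np.concatenate([yL, y, yR]), rcond=None)
ysL, ys, ysR = a + b * xL + c * zR, a + b * x + c * z, a + b * xR + c * zL
eL, e, eR = yL - ysL, y - ys, yR - ysR

# eta before the reduction: 2 * sum of residual x (fitted - empirical mean).
eta_cross = 2 * (eL * (ysL - yL.mean()) + e * (ys - y.mean())
                 + eR * (ysR - yR.mean())).sum()
# eta after the reduction: -2 * (empirical means times residual sums).
eta_reduced = -2 * (yL.mean() * eL.sum() + y.mean() * e.sum()
                    + yR.mean() * eR.sum())

assert np.isclose(eta_cross, eta_reduced)
```

The two forms agree because the residuals are orthogonal to the fitted values, so the terms $2\sum (y_i - y_i^*)\,y_i^*$ (and their L and R analogues) vanish.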
Note that, if the residual deviance equals zero, then η and $d^2(\bar{Y}^*, \bar{Y})$ also equal zero, because the theoretical and empirical values of Y coincide for each observation (and hence so do their averages).
Therefore:
- if the regression deviance equals zero, the model has no forecasting ability, because the sum of the components of the i-th estimated fuzzy value equals the sum of the sample average components (i = 1, ..., n). Indeed, if Dev(Regr) = 0, each theoretical component is constant across observations, so from the normal equation (3) we have, for each i,
$$\sum_i y_i^{*L} + \sum_i y_i^* + \sum_i y_i^{*R} = \sum_i y_i^L + \sum_i y_i + \sum_i y_i^R \;\Rightarrow\; n\,y_i^{*L} + n\,y_i^* + n\,y_i^{*R} = n\,\bar{y}^L + n\,\bar{y} + n\,\bar{y}^R$$
$$\Rightarrow\; y_i^{*L} + y_i^* + y_i^{*R} = \bar{y}^L + \bar{y} + \bar{y}^R;$$
- if the residual deviance equals zero, the relationship between the dependent variable and the independent ones is well represented by the estimated model. In this case, the total deviance is entirely explained by the regression deviance.
As usual, the larger the regression deviance (and, correspondingly, the smaller the residual deviance), the better the model fits the data.
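The zero-residual case above can be exercised numerically: if Y is generated exactly by the model, the fitted residual deviance vanishes, and with it η and the distance between the average values. A self-contained sketch with illustrative data (the coefficients are hypothetical; a negative c is chosen so the L ≤ center ≤ R ordering is preserved under the spread pairing shown earlier):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
a0, b0, c0 = 0.5, 1.2, -0.7  # hypothetical "true" coefficients

xL, x, xR = np.sort(rng.normal(size=(3, n)), axis=0)
zL, z, zR = np.sort(rng.normal(size=(3, n)), axis=0)
# Generate Y exactly from the model, so the residual deviance must vanish.
yL, y, yR = (a0 + b0 * xL + c0 * zR, a0 + b0 * x + c0 * z,
             a0 + b0 * xR + c0 * zL)

# Fit via the stacked least squares formulation.
X = np.block([[np.ones((n, 1)), xL[:, None], zR[:, None]],
              [np.ones((n, 1)), x[:, None], z[:, None]],
              [np.ones((n, 1)), xR[:, None], zL[:, None]]])
(a, b, c), *_ = np.linalg.lstsq(X, np.concatenate([yL, y, yR]), rcond=None)
ysL, ys, ysR = a + b * xL + c * zR, a + b * x + c * z, a + b * xR + c * zL

dev_res = ((yL - ysL)**2 + (y - ys)**2 + (yR - ysR)**2).sum()
d2 = ((ysL.mean() - yL.mean())**2 + (ys.mean() - y.mean())**2
      + (ysR.mean() - yR.mean())**2)
eta = -2 * (yL.mean() * (yL - ysL).sum() + y.mean() * (y - ys).sum()
            + yR.mean() * (yR - ysR).sum())

# With zero residual deviance, eta and the mean distance vanish too:
# the total deviance is entirely the regression deviance.
assert dev_res < 1e-12 and d2 < 1e-12 and abs(eta) < 1e-6
```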
5 CONCLUSIONS
In this work, starting from a multivariate
generalization of the Fuzzy Least Square
Regression, we have decomposed the total deviance
of the dependent variable according to the metric
proposed by Diamond (1988). In particular, we have obtained the expressions of two additional components of variability, besides the regression deviance and the residual one, which arise from the inequality between the theoretical and empirical values of the average fuzzy dependent variable (unlike in the OLS estimation procedure for crisp variables).
REFERENCES
Campobasso, F., Fanizzi, A., Tarantini, M., 2008. Fuzzy Least Square Regression. Annals of the Department of Statistical Sciences, University of Bari, Italy, 229-243.
Diamond, P. M., 1988. Fuzzy Least Square. Information Sciences, 46:141-157.
Kao, C., Chyu, C.L., 2003. Least-squares estimates in fuzzy regression analysis. European Journal of Operational Research, 148:426-435.
Takemura, K., 2005. Fuzzy least squares regression analysis for social judgment study. Journal of Advanced Intelligent Computing and Intelligent Informatics, 9(5):461-466.
IJCCI 2009 - International Joint Conference on Computational Intelligence