size and learning quality. It is traditionally thought that the more finely the input space is divided, the more accurate the output of the CMAC becomes. However, unless a more complex CMAC structure is developed, this inevitably causes the memory size to grow rapidly, and structural simplicity plays an important role in the on-line application of neural networks. In fact, through an experimental study of approximation examples, in which several high-dimensional functions were selected and several combinations of structural parameters were tested, we found that the learning accuracy and the required memory size are determined by both the quantization step and the number of layers. Thus, an adaptive choice of these structural parameters may overcome the above primary limitation. Our goal is a CMAC structure that can be optimized automatically for a given problem. In this way, it is possible to decrease the memory size according to the desired performance of the CMAC neural network.
The paper is organized as follows. In Section 2, the CMAC model and its structural parameters are concisely reviewed. Section 3 presents the experimental study of the influence of the structural parameters on the memory size and approximation quality. In Section 4, a Q-learning based structure optimization approach is developed. The proposed approach is applied to desired joint angle tracking for a biped robot in Section 5. Conclusions and future work are finally set out.
2 CMAC NN STRUCTURE AND
STRUCTURAL PARAMETERS
The output $Y$ of the CMAC NN is computed using two mappings. The first mapping $A(\cdot)$ projects the input space point $X = [x_1, x_2, \ldots, x_n]$ into a binary associative vector $A = [a_1, a_2, \ldots, a_{N_C}]$. Each element of $A$ is associated with one detector: when a detector is activated, the corresponding element of $A$ is equal to 1, otherwise it is equal to 0. The second mapping $Y(\cdot)$ computes the output $Y$ as the scalar product of the association vector $A$ and the weight vector $W = [w_1, w_2, \ldots, w_{N_C}]$ according to relation (1), where $(\cdot)^T$ denotes the transpose.

$$Y = A(X)^T W \qquad (1)$$
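As a concrete illustration (our own example, not code from the original work), the following minimal Python sketch builds such a binary association vector and computes the output according to relation (1); the vector sizes and activated indices are arbitrary placeholders.

```python
import numpy as np

# Minimal sketch of relation (1): Y is the scalar product of the binary
# association vector A(X) (length N_C, with N_l elements set to 1 by the
# first mapping) and the weight vector W. Sizes and indices are placeholders.
N_C, N_l = 12, 3
a = np.zeros(N_C)
a[[2, 6, 10]] = 1.0                    # cells activated by the first mapping A(X)
w = np.random.uniform(-1.0, 1.0, N_C)  # weight vector W
y = a @ w                              # Y = A(X)^T W
print(y)
```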
The weights of the CMAC neural network are updated using equation (2), where $W(t_i)$ and $W(t_{i+1})$ are respectively the weights before and after training at each sample time $t_i$, $N_l$ is the generalization number of each CMAC, $\beta$ is a learning-rate parameter included in $[0, 1]$, and $e$ is the error between the desired output $Y_d$ of the CMAC and the computed output $Y$ of the corresponding CMAC.

$$W(t_{i+1}) = W(t_i) + \beta \, \frac{e}{N_l} \qquad (2)$$
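The update rule (2) can be sketched as follows, assuming the weight vector is stored as a NumPy array and only the indices of the $N_l$ activated cells are passed in; the learning-rate symbol $\beta$ is our reading of the parameter in $[0, 1]$.

```python
import numpy as np

def cmac_update(w, active_idx, y_d, beta, n_l):
    """One training step of Eq. (2): the error e between the desired output
    Y_d and the computed output Y is shared equally among the N_l activated
    cells, scaled by the learning rate beta in [0, 1]."""
    y = w[active_idx].sum()              # computed output Y, as in Eq. (1)
    e = y_d - y                          # error e = Y_d - Y
    w[active_idx] += beta * e / n_l      # W(t_{i+1}) = W(t_i) + beta * e / N_l
    return e
```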
Owing to its structure, the CMAC is well suited to approximating both linear and non-linear functions. If no additional structural complexity is introduced, there are essentially two structural factors governing the function approximation quality. The first one, called the "quantization step" $q$, maps a continuous signal into a discrete one. The second, called the "generalization parameter" $N_l$, corresponds to the number of layers. Together, these two parameters define the total number of cells $N_C$.
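To make the joint role of $q$ and $N_l$ concrete, the sketch below gives a rough, illustrative estimate of $N_C$ for a non-hashed multi-dimensional CMAC; the counting formula is our own assumption, not one stated in the paper.

```python
import math

def estimate_n_cells(ranges, q, n_l):
    """Rough, non-hashed memory-size estimate N_C: in every input dimension the
    receptive fields of one layer span n_l quantization steps of width q, so a
    layer holds the product of ceil(range_d / (n_l * q)) + 1 cells, and the n_l
    offset layers together give N_C = n_l * cells_per_layer."""
    cells_per_layer = 1
    for lo, hi in ranges:
        cells_per_layer *= math.ceil((hi - lo) / (n_l * q)) + 1
    return n_l * cells_per_layer

# Two-dimensional input on [0, 1] x [0, 1]: a finer q inflates the memory,
# while more layers shrink it (for more than one input dimension).
print(estimate_n_cells([(0, 1), (0, 1)], q=0.01, n_l=5))    # 5 * 21 * 21 = 2205
print(estimate_n_cells([(0, 1), (0, 1)], q=0.001, n_l=5))   # 5 * 201 * 201 = 202005
print(estimate_n_cells([(0, 1), (0, 1)], q=0.01, n_l=20))   # 20 * 6 * 6 = 720
```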
3 IMPACT OF STRUCTURAL
PARAMETERS ON CMAC NN
We aim to show the relation between the structural parameters of the CMAC neural network, the quality of the approximation and the memory size required for a given function. Our study is based on an experimental investigation in which several high-dimensional functions are used to test the neural network's approximation abilities. In this section, taking the FSIN and two-dimensional GAUSS functions as examples, simulations are carried out for several quantization steps $q$, while the number of layers increases from 5 to 50 for the FSIN function and from 5 to 450 for the two-dimensional GAUSS function. For each of these functions, a training set of $100 \times 100$ random values, selected in the corresponding two-dimensional space, has been constructed. The weights of the CMAC are updated using equation (2). Once the CMAC is fully trained, three modeling errors are computed: the mean absolute error $E_{mean}$, the mean squared error $E_{square}$ and the maximum absolute error $E_{max}$. An overview of the results obtained for only three quantization steps is shown in Figures 1 and 2, respectively.
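A possible reconstruction of this evaluation protocol is sketched below, assuming a CMAC object exposing train(x, y_d) and predict(x) methods (for instance, built around the update sketch of Section 2); the GAUSS-type test function used here is only a plausible placeholder.

```python
import numpy as np

def gauss_2d(x1, x2):
    # placeholder two-dimensional GAUSS-type test function
    return np.exp(-(x1 ** 2 + x2 ** 2))

def evaluate(cmac, n_samples=100 * 100, n_epochs=20, seed=0):
    """Train the given CMAC on random samples of the test function, then
    report the three modeling errors."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, size=(n_samples, 2))
    y_d = gauss_2d(x[:, 0], x[:, 1])
    for _ in range(n_epochs):                 # repeated presentation of the set
        for xi, yi in zip(x, y_d):
            cmac.train(xi, yi)                # update rule of Eq. (2)
    err = np.abs(y_d - np.array([cmac.predict(xi) for xi in x]))
    return err.mean(), (err ** 2).mean(), err.max()   # E_mean, E_square, E_max
```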