equalizer adaptation. Summing up, digital communication systems operate over time-varying dispersive channels and often employ a signaling format in which customer data are organized in blocks preceded by a known training sequence. The training sequence at the beginning of each block is used to estimate the channel or to train an adaptive equalizer. Depending on the rate at which the channel changes with time, there may be no need to track the channel variations further during the customer data sequence.
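As a minimal illustration of this idea (not taken from the paper), the sketch below estimates an FIR channel by least squares from a known training block; the BPSK symbols, the 3-tap channel, and the noise level are assumed placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: BPSK training symbols and a 3-tap FIR channel.
L = 3                                        # assumed channel memory (taps)
train = rng.choice([-1.0, 1.0], size=64)     # known training sequence
h_true = np.array([1.0, 0.5, 0.2])           # channel, used only to simulate

# Received training samples: channel convolution plus additive noise.
r = np.convolve(train, h_true)[:len(train)]
r += 0.05 * rng.standard_normal(len(train))

# Regression matrix whose k-th column is the training sequence delayed by k.
X = np.column_stack([np.concatenate([np.zeros(k), train[:len(train) - k]])
                     for k in range(L)])

# Least-squares channel estimate from the training block alone.
h_hat, *_ = np.linalg.lstsq(X, r, rcond=None)
print(h_hat)                                 # close to h_true for low noise
```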
This paper proposes a channel equalizer for wireless channels that uses Radial Basis Function (RBF) neural networks as the equalizer structure, operating on a symbol-by-symbol decision basis. RBFs (Mulgrew, 1996) have been used in the area of neural networks, where they serve as a replacement for the sigmoidal transfer function.
Such networks have three layers: the input layer, the hidden layer with the RBF nonlinearity, and a linear output layer, as shown in Fig. 1 (Burse et al., 2010). The most popular choice for the nonlinearity is the Gaussian function. The RBF equalizer classifies the received signal according to the class of the center closest to the received vector (Assaf et al., 2005). The RBF equalizer offers an attractive alternative to the Multi-Layer Perceptron (MLP) type of neural network for channel equalization problems because the structure of the RBF network is closely related to Bayesian schemes for channel equalization and interference rejection.
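To make this relationship concrete, the following sketch implements the Bayesian symbol-by-symbol decision as a sum of Gaussian kernels centered on the noiseless channel states, which is exactly the form an RBF network can realize. The BPSK alphabet, the 2-tap channel, the equalizer order, and the noise variance are illustrative assumptions, not values from the paper.

```python
import numpy as np
from itertools import product

# Assumed setup: BPSK, a known 2-tap channel, equalizer order m = 2,
# decision delay d = 0, and noise variance sigma2.
h = np.array([1.0, 0.5])
m, d = 2, 0
sigma2 = 0.1

# Enumerate every noiseless received vector ("channel state"), one per
# combination of the m + len(h) - 1 symbols seen through the window.
states, labels = [], []
for s in product([-1.0, 1.0], repeat=m + len(h) - 1):
    r = np.array([sum(h[k] * s[j + k] for k in range(len(h)))
                  for j in range(m)])
    states.append(r)
    labels.append(s[d])              # the symbol to be recovered
states, labels = np.array(states), np.array(labels)

def bayesian_decision(x):
    # Gaussian kernel at every channel state, signed by the symbol class:
    # the sign of this mixture is the Bayesian symbol-by-symbol decision,
    # which an RBF network with centers at the channel states reproduces.
    kern = np.exp(-np.sum((states - x) ** 2, axis=1) / (2 * sigma2))
    return np.sign(np.dot(labels, kern))

# A noisy observation of the state produced by symbols (+1, +1, +1).
x = np.array([1.5, 1.5]) + 0.1 * np.random.default_rng(1).standard_normal(2)
print(bayesian_decision(x))          # expected +1
```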
This paper is divided into four sections. Section 2 briefly reviews RBF artificial neural networks. Section 3 presents the application of RBF neural networks to the equalization problem, and Section 4 concludes the paper.
2 RBF NEURAL NETWORKS
RBF neural networks are the second most used architecture, after feedforward neural networks. Denoting the input vector as x and the scalar output as y(x), the mapping of an RBF neural network is given by
y(\mathbf{x}) = \sum_{i=1}^{M} w_i \exp\left( -\frac{\|\mathbf{x} - \mathbf{c}_i\|^2}{2\sigma^2} \right)    (1)
using Gaussian functions as basis functions. Note that the c_i are called centers and σ is called the width. There are M basis functions, centered at the c_i, and the w_i are named weights.
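The following is a direct, minimal transcription of Eq. (1); the centers, weights, width, and input dimension are placeholder values.

```python
import numpy as np

def rbf_forward(x, centers, weights, width):
    # Eq. (1): y(x) = sum_i w_i * exp(-||x - c_i||^2 / (2 * width^2))
    sq_dist = np.sum((centers - x) ** 2, axis=1)   # ||x - c_i||^2 for all i
    return np.dot(weights, np.exp(-sq_dist / (2.0 * width ** 2)))

# Placeholder network: M = 4 centers in a 2-dimensional input space.
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
weights = np.array([0.5, -0.3, 0.8, 0.1])
print(rbf_forward(np.array([0.2, 0.9]), centers, weights, width=0.7))
```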
RBF neural networks are very popular for function approximation, curve fitting, time series prediction, control, and classification problems. The radial basis function network differs from other neural networks in several distinctive features. Owing to their universal approximation capability, more compact topology, and faster learning speed, RBF networks have attracted considerable attention and have been widely used in many science and engineering fields (Oyang et al., 2005), (Fu et al., 2005), (Devaraj et al., 2002), (Du et al., 2008), (Han et al., 2004). Determining the number of neurons in the hidden layer of an RBF network is important because it affects both the network complexity and the generalization capability of the network. If the number of neurons in the hidden layer is insufficient, the RBF network cannot learn the data adequately; if it is too high, poor generalization or overfitting may take place (Liu et al., 2004). The position of the centers in the hidden layer also influences the network performance significantly (Simon, 2002), so determining the optimal locations of the centers is an important task.
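The paper does not prescribe how the centers are found; one common choice, sketched below purely as an illustration, is k-means clustering of the training vectors.

```python
import numpy as np

def kmeans_centers(data, M, iters=50, seed=0):
    """Pick M RBF centers by plain k-means over the training vectors."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), size=M, replace=False)]
    for _ in range(iters):
        # Assign each vector to its nearest center ...
        idx = np.argmin(((data[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
        # ... then move each center to the mean of its cluster.
        for i in range(M):
            if np.any(idx == i):
                centers[i] = data[idx == i].mean(axis=0)
    return centers

data = np.random.default_rng(1).standard_normal((200, 2))
print(kmeans_centers(data, M=4))
```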
Each neuron in the hidden layer has an activation function. The Gaussian function, whose spread parameter controls the behavior of the function, is the most commonly preferred choice. Training an RBF network therefore also involves optimizing the spread parameter of each neuron. In addition, the weights between the hidden layer and the output layer must be selected suitably. Finally, the bias values added to each output are determined in the RBF network training procedure.
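As an illustration of the last two steps: once the centers and the width are fixed, the network output is linear in the weights and bias, so one common approach, sketched here under placeholder data, is to solve for them by least squares.

```python
import numpy as np

def fit_output_layer(X, y, centers, width):
    """Solve for output weights and bias by least squares, with the
    centers and the Gaussian width already fixed."""
    # Hidden-layer design matrix: one Gaussian response per (sample, center).
    sq = ((X[:, None, :] - centers) ** 2).sum(axis=2)
    Phi = np.exp(-sq / (2.0 * width ** 2))
    Phi = np.column_stack([Phi, np.ones(len(X))])   # extra column = bias term
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return theta[:-1], theta[-1]                    # weights, bias

# Toy usage with placeholder data; centers taken from the samples.
rng = np.random.default_rng(2)
X = rng.standard_normal((100, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(100)
centers = X[rng.choice(100, size=8, replace=False)]
w, b = fit_output_layer(X, y, centers, width=1.0)
```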
In the literature, several algorithms have been proposed for training RBF networks, such as the gradient descent (GD) algorithm (Karayiannis, 1999) and Extended Kalman filtering (EKF) (Simon, 2002).
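Since EKF training is the method adopted later in this paper, the sketch below shows one Kalman-style update of the output weights for a single training pair. With the centers and width held fixed the model is linear in the weights, so the EKF Jacobian reduces to the hidden-layer response vector; the full EKF of Simon (2002) also adapts centers and widths. All parameter values are illustrative.

```python
import numpy as np

def ekf_weight_update(w, P, x, d, centers, width, R=0.1):
    """One Kalman-style update of the output weights w (covariance P)
    for a single training pair (x, d); R is the measurement noise."""
    hvec = np.exp(-((centers - x) ** 2).sum(axis=1) / (2.0 * width ** 2))
    err = d - np.dot(hvec, w)            # innovation
    S = hvec @ P @ hvec + R              # innovation variance
    K = P @ hvec / S                     # Kalman gain
    w = w + K * err
    P = P - np.outer(K, hvec @ P)
    return w, P

# Placeholder setup: 4 fixed centers in 2-D, weights start at zero.
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
w, P = np.zeros(4), np.eye(4)            # initial weights and covariance
w, P = ekf_weight_update(w, P, np.array([0.1, 0.9]), d=1.0,
                         centers=centers, width=0.7)
```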
Several global optimization methods have also been used to train RBF networks for various science and engineering problems, such as genetic algorithms (GA) (Barreto et al., 2002), the particle swarm optimization (PSO) algorithm (Liu et al., 2004), the artificial immune system (AIS) algorithm (De Castro et al., 2001), and the differential evolution (DE) algorithm (Yu et al., 2006). The Artificial Bee Colony (ABC) algorithm is a population-based evolutionary optimization algorithm that can be applied to various types of problems; it has been used to train feedforward multi-layer perceptron neural networks on test problems such as XOR, 3-bit parity, and 4-bit encoder/decoder (Karaboga et al., 2007). Because fast convergence is required, EKF training was chosen for the RBF equalizer reported in this paper, details on the