1998). The paper is organized as follows: first, it provides an overview of RBF neural networks, one of the most popular and widely used paradigms in many applications, including energy forecasting. Second, a particular RBF architecture is proposed to forecast the DPcg. The forecasting accuracy of this architecture and its precision in capturing the nonlinear interdependencies between the load and the solar radiation are illustrated and discussed.
2 RBF NEURAL NETWORKS
In practice, power systems are very complex and the number of conditioning parameters that influence system operation is large. In such cases it is very difficult to determine an analytical model for forecasting purposes. The advantages and drawbacks of ANNs led us to RBF neural networks as the reference tool for our approach to short-term energy balance forecasting.
The RBF network is commonly used for modeling uncertain and nonlinear functions. Using RBF networks for modeling purposes can be seen as an approximation problem in a high-dimensional space (Zemouri, 2002). A key feature of the RBF network is that it has only one hidden layer and that its output layer is merely a linear combination of the hidden-layer signals. Therefore, RBF networks allow for a much simpler weight-updating procedure and subsequently open up greater possibilities for stability proofs and network robustness, since the network can be described readily by a set of nonlinear equations.
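As a minimal sketch (not the authors' implementation), the forward pass of such a network can be written as a Gaussian hidden layer followed by a linear output layer; all names below are illustrative:

```python
import numpy as np

def rbf_forward(x, centers, spread, weights, bias):
    """Forward pass of a single-hidden-layer Gaussian RBF network.

    x       : (n_samples, n_inputs) input matrix
    centers : (n_hidden, n_inputs) RBF centers
    spread  : scalar width of the Gaussian units
    weights : (n_hidden,) output-layer weights
    bias    : scalar output bias
    """
    # Euclidean distance of every sample to every center
    dist = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
    # Gaussian activations of the hidden layer
    h = np.exp(-(dist / spread) ** 2)
    # The output is merely a linear combination of hidden-layer signals
    return h @ weights + bias
```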
In RBF networks, determining the number of neurons in the hidden layer is very important because it affects both the network complexity and the generalization capability of the network. If the number of neurons in the hidden layer is insufficient, the RBF network cannot learn the data adequately; on the other hand, if the number is too high, poor generalization or an over-learning situation may occur (Liu, 2004). The position of the centers in the hidden layer also affects the network performance considerably (Simon, 2002), so determining the optimal locations of the centers is an important task. In the hidden layer, each neuron has an activation function. The Gaussian function, which has a spread parameter that controls the behavior of the function, is the most widely preferred activation function. The training procedure of RBF networks therefore also includes the optimization of the spread parameter of each neuron. Martinez (2008) studied the best approximation of Gaussian RBF neural networks with uniformly spaced nodes. Afterwards, the weights between the hidden layer and the output layer must be selected appropriately. Finally, the bias values added to each output are determined during the RBF network training procedure. In the literature, various algorithms have been proposed for training RBF networks, such as the gradient descent (GD) algorithm (Karayiannis, 1999) and Kalman filtering (KF) (Simon, 2002). Ferrari (2009) studied the multiscale approximation problem with hierarchical RBF neural networks. However, these methods share some of the defects of the backpropagation algorithm: they are either unstable or complicated and slow. Since the connection weights of an RBF network can be obtained through various learning algorithms, the resulting weights exhibit a certain instability.
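Because the output layer is linear in the weights, training with fixed centers and spread reduces to a linear least-squares problem, which is one reason RBF training can be simpler and more stable than full backpropagation. A minimal sketch of this step (an assumed approach, not the paper's exact algorithm) is:

```python
import numpy as np

def fit_rbf_weights(x, y, centers, spread):
    """Solve for the output weights and bias of a Gaussian RBF network
    by linear least squares, keeping centers and spread fixed."""
    dist = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
    H = np.exp(-(dist / spread) ** 2)            # hidden-layer outputs
    H = np.hstack([H, np.ones((len(x), 1))])     # append a bias column
    sol, *_ = np.linalg.lstsq(H, y, rcond=None)  # least-squares solution
    return sol[:-1], sol[-1]                     # weights, bias
```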
3 PERFORMING STLF WITH RBF
The forecasting performance of RBF neural networks in load forecasting is illustrated using a dataset with 240 data points {y(t), u(t)}, representing the solar radiation [W/m2] (mean value = 0.9255, standard deviation = 97.6705) and the DPcg [kW] (mean value = 0.8156, standard deviation = 130.9313), obtained from a Solar Amphitheatre (ICOP-DEMO, 1998; F. Dragomir et al., 2010). The data are normalized before starting the training session and de-normalized at the end of training.
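The paper does not state the exact scaling used; a common choice, assumed here purely for illustration, is min-max normalization, with the scaling parameters kept so that predictions can be de-normalized afterwards:

```python
import numpy as np

def normalize(v):
    """Scale a series to [0, 1]; return the scaled series plus the
    (min, range) pair needed to de-normalize predictions later."""
    vmin, vrng = v.min(), v.max() - v.min()
    return (v - vmin) / vrng, (vmin, vrng)

def denormalize(v_scaled, params):
    """Invert the min-max scaling applied by normalize()."""
    vmin, vrng = params
    return v_scaled * vrng + vmin
```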
The RBF neural network used for performing STLF has an input layer, one hidden layer and an output layer. The neurons in the hidden layer contain Gaussian transfer functions, whose outputs are inversely proportional to the distance from the center of the neuron (see Table 1).
Table 1: RBF parameters.

Architecture           RBF
Number of inputs       1
Number of layers       1 hidden layer with 5 radbas neurons;
                       1 output layer with purelin neurons
Transfer functions     Gaussian (hidden layer); purelin (output layer)
Performance functions  MSE (Mean Squared Error); MAE (Mean Absolute Error)
Initial MSE goal       0.0098
Initial spread         0.02719
For the dataset, simulations are repeated 8 times.
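The radbas/purelin names in Table 1 suggest a MATLAB Neural Network Toolbox implementation; the following is a hypothetical Python reconstruction of that configuration, reusing the rbf_forward and fit_rbf_weights helpers sketched above (the center placement is an assumption, as the paper does not specify it):

```python
import numpy as np

N_HIDDEN = 5        # radbas neurons in the hidden layer (Table 1)
SPREAD   = 0.02719  # initial spread of the Gaussian units (Table 1)
MSE_GOAL = 0.0098   # initial MSE training goal (Table 1)

def train_stlf_rbf(u, y):
    """u: normalized radiation series, y: normalized DPcg series."""
    x = u.reshape(-1, 1)  # single network input (Table 1)
    # Assumed center placement: evenly spaced quantiles of the input
    centers = np.quantile(x, np.linspace(0, 1, N_HIDDEN)).reshape(-1, 1)
    w, b = fit_rbf_weights(x, y, centers, SPREAD)
    y_hat = rbf_forward(x, centers, SPREAD, w, b)
    err = np.mean((y - y_hat) ** 2)  # MSE performance function
    return (centers, w, b), err, err <= MSE_GOAL
```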