[Tables 5 and 6. Test for the algorithms, in Series1 and Series2: pairwise t-test results between the BFGS (Constrained), BFGS (Unconstrained) and LM algorithms; the recoverable entries are 0.3118, 0.4271 and 0.3251.]
As shown in Tables 5-6, the statistical t-test concludes that there is no statistically significant difference among the non-linear programming algorithms. However, Tables 3-4 show that, on a particular problem, one algorithm may still provide better results than the others. Our recommendation for solving a problem is therefore to run a set of experiments with each non-linear programming algorithm and then choose the one that provides the best results on average.
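As an illustration of this procedure, the sketch below (assuming Python with NumPy and SciPy, which are not used in this work) applies a paired t-test to the errors obtained by two training algorithms over the same set of runs and, when no significant difference is found, selects the algorithm with the better average error. The error values are hypothetical.

import numpy as np
from scipy import stats

# Hypothetical prediction errors (e.g. test MSE) of two training
# algorithms over the same 10 independent runs on one series.
errors_bfgs = np.array([0.031, 0.028, 0.035, 0.030, 0.029,
                        0.033, 0.027, 0.032, 0.030, 0.031])
errors_lm   = np.array([0.030, 0.029, 0.034, 0.031, 0.028,
                        0.032, 0.028, 0.031, 0.029, 0.030])

# Paired t-test: do the two algorithms differ in mean error?
t_stat, p_value = stats.ttest_rel(errors_bfgs, errors_lm)
print("t = %.4f, p = %.4f" % (t_stat, p_value))

# If no significant difference is detected (large p-value), keep the
# algorithm with the better average error over the experiments.
best = "BFGS" if errors_bfgs.mean() < errors_lm.mean() else "LM"
print("Selected algorithm:", best)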
5 CONCLUSIONS
In this work, we have introduced non-linear programming algorithms, namely the BFGS and LM algorithms, to train Recurrent Neural Networks. After formulating the training of an Elman Recurrent Neural Network as a non-linear programming problem, the models have been applied to several Time Series prediction problems in the experimental section, obtaining suitable results. The non-linear programming algorithms have improved the solutions provided by the traditional training algorithm for ERNNs. They have also obtained better results than other recent techniques, such as Genetic Algorithms, and those solutions have been reached in less time than with the GA and the traditional algorithms. In addition, they may also be used when bound constraints are required on the network weights, a situation that cannot be handled by the traditional training algorithms. In conclusion, non-linear programming techniques may be a valuable tool to consider when training Recurrent Neural Networks.
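To make the formulation concrete, the following minimal sketch (assuming Python with NumPy and SciPy; SciPy's L-BFGS-B option is based on the FORTRAN routines of Byrd et al. and Zhu et al. cited below) trains a small Elman network for one-step-ahead prediction by flattening all weights into a single vector, using the mean squared prediction error as the objective, and passing bound constraints on the weights directly to the optimizer. The network size, data and bounds are illustrative, not those used in the experiments.

import numpy as np
from scipy.optimize import minimize

# Toy time series, scaled to [0, 1], for one-step-ahead prediction.
series = (np.sin(np.linspace(0, 6 * np.pi, 100)) + 1.0) / 2.0

n_in, n_hid = 1, 4   # illustrative network size
n_params = n_hid * n_in + n_hid * n_hid + n_hid + n_hid + 1

def unpack(w):
    """Split the flat parameter vector into the Elman network weights."""
    i = 0
    W_in  = w[i:i + n_hid * n_in].reshape(n_hid, n_in); i += n_hid * n_in
    W_rec = w[i:i + n_hid * n_hid].reshape(n_hid, n_hid); i += n_hid * n_hid
    b_hid = w[i:i + n_hid]; i += n_hid
    W_out = w[i:i + n_hid]; i += n_hid
    b_out = w[i]
    return W_in, W_rec, b_hid, W_out, b_out

def mse(w):
    """Objective: mean squared one-step-ahead prediction error."""
    W_in, W_rec, b_hid, W_out, b_out = unpack(w)
    h = np.zeros(n_hid)                      # context (previous hidden state)
    err = 0.0
    for t in range(len(series) - 1):
        x = np.array([series[t]])
        h = np.tanh(W_in @ x + W_rec @ h + b_hid)
        y = W_out @ h + b_out
        err += (y - series[t + 1]) ** 2
    return err / (len(series) - 1)

# Bound constraints on every weight, e.g. w_i in [-5, 5] (illustrative).
bounds = [(-5.0, 5.0)] * n_params
w0 = 0.1 * np.random.randn(n_params)

res = minimize(mse, w0, method="L-BFGS-B", bounds=bounds)
print("final training MSE:", res.fun)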
REFERENCES
Blanco, A., Delgado, M., Pegalajar, M. C. 2001. A real-coded genetic algorithm for training recurrent neural networks. Neural Networks, vol. 14, pp. 93-105.
Zhu, C., Byrd, R. H., Nocedal, J. 1997. Algorithm 778: L-BFGS-B, FORTRAN routines for large scale bound constrained optimization. ACM Transactions on Mathematical Software, vol. 23, no. 4, pp. 550-560.
Cuéllar, M. P., Delgado, M., Pegalajar, M. C. 2004. A comparative study of Evolutionary Algorithms for training Elman Recurrent Neural Networks to predict the Autonomous Indebtedness. In Proc. ICEIS, Porto, Portugal, pp. 457-461.
Mandic, D. P., Chambers, J. A. 2001. Recurrent Neural Networks for Prediction. John Wiley & Sons.
Marquardt, D. W. 1963. An algorithm for least-squares estimation of nonlinear parameters. Journal of the Society for Industrial and Applied Mathematics, vol. 11, pp. 431-441.
Hagan, M. T., Menhaj, M. B. 1994. Training feedforward networks with the Marquardt algorithm. IEEE Transactions on Neural Networks, vol. 5, no. 6, pp. 989-993.
Hüsken, M., Stagge, P. 2003. Recurrent Neural Networks for Time Series classification. Neurocomputing, vol. 50, pp. 223-235.
Moré, J. J. 1977. The Levenberg-Marquardt algorithm: implementation and theory. Lecture Notes in Mathematics, edited by G. A. Watson, Springer-Verlag.
Byrd, R. H., Lu, P., Nocedal, J. 1995. A Limited Memory Algorithm for Bound Constrained Optimization. SIAM Journal on Scientific and Statistical Computing, vol. 16, no. 5, pp. 1190-1208.
Martí, R., El-Fallahi, A. 2002. Multilayer Neural Networks: an experimental evaluation of on-line training methods. Computers and Operations Research, vol. 31, pp. 1491-1513.
Zemouri, R., Racoceanu, D., Zerhouni, N. 2003. Recurrent Radial Basis Function network for Time Series prediction. Engineering Applications of Artificial Intelligence, vol. 16, no. 5-6, pp. 453-463.
Haykin, S. 1999. Neural Networks: A Comprehensive Foundation. Second Edition. Prentice Hall.
Williams, R. J., Peng, J. 1990. An efficient Gradient-Based Algorithm for On-Line Training of Recurrent Network Trajectories. Neural Computation, vol. 2, pp. 491-501.
Williams, R. J., Zipser, D. 1989. A learning algorithm for continually running fully recurrent neural networks. Neural Computation, vol. 1, pp. 270-280.