Derivative Free Training of Recurrent Neural Networks - A Comparison of Algorithms and Architectures
Branimir Todorović, Miomir Stanković, Claudio Moraga
2014
Abstract
The problem of recurrent neural network training is considered here as approximate joint Bayesian estimation of the neuron outputs and the unknown synaptic weights. We have implemented recursive estimators that use derivative-free approximations of the nonlinear neural network dynamics. The computational efficiency and performance of the proposed algorithms as training algorithms for different recurrent neural network architectures are compared on the problem of long-term prediction of chaotic time series.
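The derivative-free recursive estimators the abstract refers to are, per the references, sigma-point filters such as the unscented Kalman filter (Julier and Uhlmann, 1997) and the divided-difference filters of Nørgaard et al. (2000). As a minimal sketch of the joint estimation idea, and not the authors' implementation, the Python snippet below runs an unscented Kalman filter over an augmented state that stacks the neuron outputs and synaptic weights of a tiny fully recurrent network, trained as a one-step-ahead predictor of a logistic-map series. The network size, noise covariances, the chaotic target, and all names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Tiny fully recurrent network: n_h neurons, scalar input u, scalar output.
n_h = 3
n_w = n_h * n_h + n_h + n_h      # recurrent weights + input weights + biases
n = n_h + n_w                    # augmented state: neuron outputs + weights

def unpack(w):
    W = w[:n_h * n_h].reshape(n_h, n_h)      # recurrent weight matrix
    w_in = w[n_h * n_h:n_h * n_h + n_h]      # input weights
    b = w[n_h * n_h + n_h:]                  # biases
    return W, w_in, b

def f(z, u):
    # Joint transition: neuron outputs evolve through the network dynamics,
    # weights follow a random walk (their drift is modelled by process noise).
    x, w = z[:n_h], z[n_h:]
    W, w_in, b = unpack(w)
    return np.concatenate([np.tanh(W @ x + w_in * u + b), w])

def sigma_points(m, P, kappa=1.0):
    # Symmetric sigma-point set of Julier and Uhlmann (1997).
    d = len(m)
    S = np.linalg.cholesky((d + kappa) * P)
    pts = np.vstack([m, m + S.T, m - S.T])
    wts = np.concatenate([[kappa / (d + kappa)],
                          np.full(2 * d, 0.5 / (d + kappa))])
    return pts, wts

# Priors and noise levels (illustrative values, not tuned).
m = np.concatenate([np.zeros(n_h), 0.1 * rng.standard_normal(n_w)])
P = 0.1 * np.eye(n)
Q = 1e-5 * np.eye(n)     # small process noise keeps the weights adaptive
R = 1e-3                 # measurement noise variance

# Chaotic target: logistic map, to be predicted one step ahead.
y = np.empty(500); y[0] = 0.3
for k in range(499):
    y[k + 1] = 3.9 * y[k] * (1.0 - y[k])

errs = []
for k in range(499):
    # Time update: propagate sigma points through the network dynamics.
    X, wts = sigma_points(m, P)
    Xp = np.array([f(x, y[k]) for x in X])
    m_pred = wts @ Xp
    D = Xp - m_pred
    P_pred = Q + (wts[:, None] * D).T @ D

    # Measurement update: the first neuron output predicts the next sample.
    e = Xp[:, 0] - wts @ Xp[:, 0]
    S = R + wts @ (e * e)                # innovation variance
    K = ((wts * e) @ D) / S              # Kalman gain from cross-covariance
    innov = y[k + 1] - wts @ Xp[:, 0]
    errs.append(innov)
    m = m_pred + K * innov
    P = P_pred - np.outer(K, K) * S
    P = 0.5 * (P + P.T) + 1e-9 * np.eye(n)   # keep covariance symmetric, PD

print("RMS one-step prediction error over the last 100 steps:",
      float(np.sqrt(np.mean(np.square(errs[-100:])))))

Treating the weights as part of the estimated state, rather than as parameters updated by backpropagated gradients (cf. Williams and Zipser, 1989), is what makes the training derivative free: only function evaluations of the network dynamics at the sigma points are required.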
References
- Anderson, B. D. O. and Moore, J. B., 1979. Optimal Filtering. Englewood Cliffs, NJ: Prentice-Hall.
- Julier, S. J. and Uhlmann, J. K., 1997. A New Extension of the Kalman Filter to Nonlinear Systems. In Proceedings of AeroSense: The 11th International Symposium on Aerospace/Defence Sensing, Simulation and Controls, Orlando, FL.
- Nørgaard, M., Poulsen, N. K. and Ravn, O., 2000. Advances in Derivative Free State Estimation for Nonlinear Systems. Technical Report IMM-REP-1998-15, Department of Mathematical Modelling, DTU.
- Todorovic, B., Stankovic, M. and Moraga, C., 2003. Online Learning in Recurrent Neural Networks using Nonlinear Kalman Filters. In Proc. of ISSPIT 2003, Darmstadt, Germany.
- Todorovic, B., Stankovic, M. and Moraga, C., 2004. Nonlinear Bayesian Estimation of Recurrent Neural Networks. In Proc. of the IEEE 4th International Conference on Intelligent Systems Design and Applications ISDA 2004, Budapest, Hungary, August 26-28, pp. 855-860.
- Van der Merwe, R. and Wan, E. A., 2001. Efficient Derivative-Free Kalman Filters for Online Learning. In Proc. of ESANN 2001, Bruges, Belgium.
- Williams, R. J., & Zipser, D. 1989, A learning algorithm for continually running fully recurrent neural networks. Neural Computation, 1, 270-280.
- Williams, R. J. and Zipser, D. 1990, Gradient-based learning algorithms for recurrent connectionist networks. TR NU_CCS_90-9. Boston, Northeastern University.
Paper Citation
in Harvard Style
Todorović B., Stanković M. and Moraga C. (2014). Derivative Free Training of Recurrent Neural Networks - A Comparison of Algorithms and Architectures. In Proceedings of the International Conference on Neural Computation Theory and Applications - Volume 1: NCTA, (IJCCI 2014) ISBN 978-989-758-054-3, pages 76-84. DOI: 10.5220/0005081900760084
in Bibtex Style
@conference{ncta14,
author={Branimir Todorović and Miomir Stanković and Claudio Moraga},
title={Derivative Free Training of Recurrent Neural Networks - A Comparison of Algorithms and Architectures},
booktitle={Proceedings of the International Conference on Neural Computation Theory and Applications - Volume 1: NCTA, (IJCCI 2014)},
year={2014},
pages={76-84},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005081900760084},
isbn={978-989-758-054-3},
}
in EndNote Style
TY - CONF
JO - Proceedings of the International Conference on Neural Computation Theory and Applications - Volume 1: NCTA, (IJCCI 2014)
TI - Derivative Free Training of Recurrent Neural Networks - A Comparison of Algorithms and Architectures
SN - 978-989-758-054-3
AU - Todorović B.
AU - Stanković M.
AU - Moraga C.
PY - 2014
SP - 76
EP - 84
DO - 10.5220/0005081900760084