Evolving Gradient a New Approach to Perform Neural Network Training

César Daltoé Berci, Celso Pascoli Bottura

2009

Abstract

The use of genetic algorithms in ANN training is not a new subject; several works have already achieved good results, though not competitive with procedural methods on problems where the error gradient is well defined. This paper proposes an alternative for ANN training that uses genetic algorithms (GAs) to evolve the training process itself, rather than evolving the network parameters directly. In this way we obtain considerably superior results and a method competitive with those usually used to train ANNs.
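The idea of evolving the training process rather than the weights can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's actual formulation: a GA whose individuals are parameters of a gradient-descent update rule (learning rate and momentum), evaluated by how well the rule trains a one-parameter model; the dataset, parameter ranges, and GA operators are all illustrative choices.

```python
import random

# Toy dataset for the target function y = 2x; the "network" is y = w*x.
DATA = [(x, 2.0 * x) for x in (-2.0, -1.0, 0.5, 1.0, 3.0)]

def train_loss(lr, momentum, steps=30):
    """Fitness of a training rule: run gradient descent with momentum
    using the given rule parameters and return the final MSE."""
    w, v = 0.0, 0.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in DATA) / len(DATA)
        v = momentum * v - lr * grad
        w += v
    return sum((w * x - y) ** 2 for x, y in DATA) / len(DATA)

def evolve(pop_size=20, generations=15, seed=0):
    """GA over training-rule genes (lr, momentum), NOT over weights."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0.001, 0.3), rng.uniform(0.0, 0.9))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: train_loss(*g))   # rank by training quality
        survivors = pop[: pop_size // 2]         # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            # Averaging crossover plus Gaussian mutation, clipped to sane ranges.
            lr = 0.5 * (a[0] + b[0]) + rng.gauss(0, 0.01)
            mom = 0.5 * (a[1] + b[1]) + rng.gauss(0, 0.05)
            children.append((max(lr, 1e-4), min(max(mom, 0.0), 0.99)))
        pop = survivors + children
    best = min(pop, key=lambda g: train_loss(*g))
    return best, train_loss(*best)
```

The key design point the sketch tries to convey is the level of indirection: each GA individual is judged by running an entire gradient-based training session, so the GA searches the space of training procedures while the gradient still does the fine-grained parameter fitting.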

References

  1. R. Battiti and F. Masulli. Bfgs optimization for faster and automated supervised learning. INNC 90 Paris, International Neural Network Conference, pages 757-760, 1990.
  2. César Daltoé Berci. Observadores Inteligentes de Estado: Propostas. Master's thesis, LCSI/FEEC/UNICAMP, Campinas, Brasil, 2008.
  3. César Daltoé Berci and Celso Pascoli Bottura. Observador inteligente adaptativo neural não baseado em modelo para sistemas não lineares. Proceedings of 7th Brazilian Conference on Dynamics, Control and Applications, Presidente Prudente, Brasil, 7:209-215, 2008.
  4. Jürgen Branke. Evolutionary algorithms for neural network design and training. In 1st Nordic Workshop on Genetic Algorithms and its Applications, Vaasa, Finland, January 1995.
  5. D.J. Chalmers. The evolution of learning: An experiment in genetic connectionism. Proceedings of the 1990 Connectionist Summer School, pages 81-90, 1990.
  6. Charles Darwin. On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life. John Murray, London, 1859.
  7. D.C. Dennett. Darwin's Dangerous Idea: Evolution and the Meanings of Life. Penguin Books, 1995.
  8. A. Fiszelew, P. Britos, A. Ochoa, H. Merlino, E. Fernández, and R. García-Martínez. Finding optimal neural network architecture using genetic algorithms. Software & Knowledge Engineering Center, Buenos Aires Institute of Technology; Intelligent Systems Laboratory, School of Engineering, University of Buenos Aires, 2004.
  9. C. Fyfe. Artificial Neural Networks. Department of Computing and Information Systems, The University of Paisley, Edition 1.1, 1996.
  10. D.G. Luenberger. Linear and Nonlinear Programming. Addison-Wesley, 2nd edition, 1984.
  11. M.F. Møller. Learning by conjugate gradients. The 6th International Meeting of Young Computer Scientists, 1990.
  12. M.F. Møller. A scaled conjugate gradient algorithm for fast supervised learning. Computer Science Department, University of Aarhus, Denmark, 6:525-533, 1990.
  13. D. Montana and L. Davis. Training feedforward neural networks using genetic algorithms. Proceedings of the International Joint Conference on Artificial Intelligence, pages 762-767, 1989.
  14. D. E. Rumelhart, R. Durbin, R. Golden, and Y. Chauvin. Backpropagation: The basic theory. Lawrence Erlbaum Associates, Inc., 1995.
  15. D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. In: Parallel Distributed Processing: Exploration in the Microstructure of Cognition, Eds. D.E. Rumelhart, J.L. McClelland, MIT Press, Cambridge, MA, pages 318-362, 1986.
  16. Udo Seiffert. Multiple layer perceptron training using genetic algorithms. ESANN'2001 proceedings - European Symposium on Artificial Neural Networks, pages 159-164, 2001.
  17. Zhi-Hua Zhou, Jian-Xin Wu, Yuan Jiang, and Shi-Fu Chen. Genetic algorithm based selective neural network ensemble. Proceedings of the 17th International Joint Conference on Artificial Intelligence., 2:797-802, 2001.


Paper Citation


in Harvard Style

Berci C. and Bottura C. (2009). Evolving Gradient a New Approach to Perform Neural Network Training. In Proceedings of the 5th International Workshop on Artificial Neural Networks and Intelligent Information Processing - Volume 1: Workshop ANNIIP, (ICINCO 2009) ISBN 978-989-674-002-3, pages 3-12. DOI: 10.5220/0002214000030012


in Bibtex Style

@conference{workshop anniip09,
author={César Daltoé Berci and Celso Pascoli Bottura},
title={Evolving Gradient a New Approach to Perform Neural Network Training},
booktitle={Proceedings of the 5th International Workshop on Artificial Neural Networks and Intelligent Information Processing - Volume 1: Workshop ANNIIP, (ICINCO 2009)},
year={2009},
pages={3-12},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0002214000030012},
isbn={978-989-674-002-3},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 5th International Workshop on Artificial Neural Networks and Intelligent Information Processing - Volume 1: Workshop ANNIIP, (ICINCO 2009)
TI - Evolving Gradient a New Approach to Perform Neural Network Training
SN - 978-989-674-002-3
AU - Berci C.
AU - Bottura C.
PY - 2009
SP - 3
EP - 12
DO - 10.5220/0002214000030012