Self-adaptive Topology Neural Network for Online Incremental Learning

Beatriz Pérez-Sánchez, Oscar Fontenla-Romero, Bertha Guijarro-Berdiñas

Abstract

Many real problems in machine learning are of a dynamic nature. In those cases, the model used for the learning process should work in real time and have the ability to act and react by itself, adjusting its controlling parameters, even its structure, depending on the requirements of the process. In a previous work, the authors proposed an online learning method for two-layer feedforward neural networks that presents two main characteristics. Firstly, it is effective in dynamic environments as well as in stationary contexts. Secondly, it allows incorporating new hidden neurons during learning without losing the knowledge already acquired. In this paper, we extend this previous algorithm by including a mechanism to automatically adapt the network topology in accordance with the needs of the learning process. This automatic estimation technique is based on the Vapnik-Chervonenkis dimension. The theoretical basis for the method is given and its performance is illustrated by means of its application to distinct system identification problems. The results confirm that the proposed method is able to check whether new hidden units should be added depending on the requirements of the online learning process.
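The abstract does not spell out the estimation rule, but the general idea behind VC-dimension-based capacity control can be illustrated. The sketch below is a hypothetical simplification, not the authors' method: following the classic rule of thumb of Baum and Haussler (1989) that roughly n >= W / epsilon training examples are needed to generalize with error epsilon for a network with W weights, it picks the largest hidden-layer size whose weight count is still supported by the samples seen so far. The function name, the bias-inclusive weight count, and the epsilon default are all illustrative assumptions.

```python
def estimate_hidden_units(n_samples, n_inputs, n_outputs, epsilon=0.1):
    """Hypothetical sketch: largest hidden-layer size h for a two-layer
    feedforward network whose weight count W(h) satisfies the rule of
    thumb n >= W / epsilon, i.e. W(h) <= epsilon * n."""
    max_weights = epsilon * n_samples

    def weights(h):
        # input-to-hidden weights (plus hidden biases) and
        # hidden-to-output weights (plus output biases)
        return h * (n_inputs + 1) + n_outputs * (h + 1)

    h = 1  # keep at least one hidden unit
    while weights(h + 1) <= max_weights:
        h += 1
    return h
```

In an online setting, this estimate grows monotonically with the number of observed samples, so the learner could compare it against the current topology after each batch and add hidden units only when the data justify the extra capacity.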

References

  1. Ash, T. (1989). Dynamic node creation in backpropagation networks. Connection Science, 1(4):365-375.
  2. Aylward, S. and Anderson, R. (1991). An algorithm for neural network architecture generation. In AIAA Computing in Aerospace Conference VIII.
  3. Baum, E. B. and Haussler, D. (1989). What size net gives valid generalization? Neural Computation, 1(1):151-160.
  4. Bishop, C. M. (1995). Neural Networks for Pattern Recognition. Oxford University Press, New York.
  5. Fiesler, E. (1994). Comparative Bibliography of Ontogenic Neural Networks. In Proceedings of the International Conference on Artificial Neural Networks (ICANN 1994), pages 793-796.
  6. Fontenla-Romero, O., Guijarro-Berdiñas, B., Pérez-Sánchez, B., and Alonso-Betanzos, A. (2010). A new convex objective function for the supervised learning of single-layer neural networks. Pattern Recognition, 43(5):1984-1992.
  7. Gama, J., Medas, P., Castillo, G., and Rodrigues, P. (2004). Learning with drift detection. Intelligent Data Analysis, 8:213-237.
  8. Hénon, M. (1976). A two-dimensional mapping with a strange attractor. Communications in Mathematical Physics, 50(1):69-77.
  9. Islam, M., Sattar, A., Amin, F., Yao, X., and Murase, K. (2009). A new adaptive merging and growing algorithm for designing artificial neural networks. IEEE Transactions on Neural Networks, 20:1352-1357.
  10. Kwok, T.-Y. and Yeung, D.-Y. (1997). Constructive Algorithms for Structure Learning in FeedForward Neural Networks for Regression Problems. IEEE Transactions on Neural Networks, 8(3):630-645.
  11. Ma, L. and Khorasani, K. (2003). A new strategy for adaptively constructing multilayer feedforward neural networks. Neurocomputing, 51:361-385.
  12. Mackey, M. and Glass, L. (1977). Oscillation and chaos in physiological control systems. Science, 197(4300):287-289.
  13. Martínez-Rego, D., Pérez-Sánchez, B., Fontenla-Romero, O., and Alonso-Betanzos, A. (2011). A robust incremental learning method for non-stationary environments. NeuroComputing, 74(11):1800-1808.
  14. Murata, N. (1994). Network Information Criterion: Determining the number of hidden units for an Artificial Neural Network Model. IEEE Transactions on Neural Networks, 5(6):865-872.
  15. Nguyen, D. and Widrow, B. (1990). Improving the learning speed of 2-layer neural networks by choosing initial values of the adaptive weights. In Proceedings of the International Joint Conference on Neural Networks (IJCNN 1990), volume 3, pages 21-26.
  16. Parekh, R., Yang, J., and Honavar, V. (2000). Constructive Neural-Network Learning Algorithms for Pattern Classification.
  17. Pérez-Sánchez, B., Fontenla-Romero, O., Guijarro-Berdiñas, B., and Martínez-Rego, D. (2013). An online learning algorithm for adaptable topologies of neural networks. Expert Systems with Applications, 40(18):7294-7304.
  18. Reed, R. (1993). Pruning Algorithms: A Survey. IEEE Transactions on Neural Networks, 4:740-747.
  19. Sharma, S. K. and Chandra, P. (2010). Constructive neural networks: A review. International Journal of Engineering Science and Technology, 2(12):7847-7855.
  20. Vapnik, V. (1998). Statistical Learning Theory. John Wiley & Sons, Inc. New York.
  21. Yao, X. (1999). Evolving Artificial Neural Networks. In Proceedings of the IEEE, volume 87, pages 1423-1447.


Paper Citation


in Harvard Style

Pérez-Sánchez B., Fontenla-Romero O. and Guijarro-Berdiñas B. (2014). Self-adaptive Topology Neural Network for Online Incremental Learning. In Proceedings of the 6th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART, ISBN 978-989-758-015-4, pages 94-101. DOI: 10.5220/0004811500940101


in Bibtex Style

@conference{icaart14,
author={Beatriz Pérez-Sánchez and Oscar Fontenla-Romero and Bertha Guijarro-Berdiñas},
title={Self-adaptive Topology Neural Network for Online Incremental Learning},
booktitle={Proceedings of the 6th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART},
year={2014},
pages={94-101},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0004811500940101},
isbn={978-989-758-015-4},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 6th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART
TI - Self-adaptive Topology Neural Network for Online Incremental Learning
SN - 978-989-758-015-4
AU - Pérez-Sánchez B.
AU - Fontenla-Romero O.
AU - Guijarro-Berdiñas B.
PY - 2014
SP - 94
EP - 101
DO - 10.5220/0004811500940101