Self-adaptive Topology Neural Network for Online Incremental Learning
Beatriz Pérez-Sánchez, Oscar Fontenla-Romero, Bertha Guijarro-Berdiñas
2014
Abstract
Many real problems in machine learning are of a dynamic nature. In those cases, the model used for the learning process should work in real time and have the ability to act and react by itself, adjusting its controlling parameters, and even its structure, depending on the requirements of the process. In a previous work, the authors proposed an online learning method for two-layer feedforward neural networks that presents two main characteristics. Firstly, it is effective in dynamic environments as well as in stationary contexts. Secondly, it allows new hidden neurons to be incorporated during learning without losing the knowledge already acquired. In this paper, we extend this previous algorithm by including a mechanism that automatically adapts the network topology in accordance with the needs of the learning process. This automatic estimation technique is based on the Vapnik-Chervonenkis dimension. The theoretical basis for the method is given and its performance is illustrated by means of its application to distinct system identification problems. The results confirm that the proposed method is able to determine whether new hidden units should be added depending on the requirements of the online learning process.
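The paper's exact growth criterion is not reproduced on this page, but as a rough illustration of how a Vapnik-Chervonenkis-style capacity argument can drive topology growth, the sketch below applies a Baum and Haussler (1989) style relation between the number of weights, the target error, and the number of samples observed so far. It is a minimal sketch under those assumptions: the function names, the decision rule, and the default error level are illustrative choices for this example, not the authors' method.

```python
import math

def supported_weights(num_samples, target_error, num_nodes):
    """Rough Baum-Haussler-style capacity estimate (illustrative only):
    a network with W weights and N computational nodes needs on the order
    of (W / eps) * log2(N / eps) examples to generalize with error eps,
    so m samples support roughly W ~= eps * m / log2(N / eps) weights."""
    return target_error * num_samples / math.log2(num_nodes / target_error)

def weights_in_two_layer_net(n_inputs, n_hidden, n_outputs):
    """Trainable weights (including biases) of a two-layer feedforward net."""
    return n_hidden * (n_inputs + 1) + n_outputs * (n_hidden + 1)

def should_add_hidden_unit(n_inputs, n_hidden, n_outputs,
                           num_samples_seen, target_error=0.1):
    """Hypothetical online check: signal that one more hidden unit could be
    added once the data seen so far is enough, by the capacity estimate
    above, to support the enlarged network without overfitting."""
    enlarged_weights = weights_in_two_layer_net(n_inputs, n_hidden + 1, n_outputs)
    enlarged_nodes = (n_hidden + 1) + n_outputs
    capacity = supported_weights(num_samples_seen, target_error, enlarged_nodes)
    return enlarged_weights <= capacity

# Example: after 5000 online samples, ask whether a 10-input, 6-hidden-unit,
# 1-output network could grow by one hidden neuron at a 10% target error.
if __name__ == "__main__":
    print(should_add_hidden_unit(10, 6, 1, num_samples_seen=5000))
```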
References
- Ash, T. (1989). Dynamic node creation in backpropagation networks. Connection Science, 1(4):365-375.
- Aylward, S. and Anderson, R. (1991). An algorithm for neural network architecture generation. In AIAA Computing in Aerospace Conference VIII.
- Baum, E. B. and Haussler, D. (1989). What size net gives valid generalization? Neural Computation, 1(1):151-160.
- Bishop, C. M. (1995). Neural Networks for Pattern Recognition. Oxford University Press, New York.
- Fiesler, E. (1994). Comparative Bibliography of Ontogenic Neural Networks. In Proceedings of the International Conference on Artificial Neural Networks (ICANN 1994), pages 793-796.
- Fontenla-Romero, O., Guijarro-Berdiñas, B., Pérez-Sánchez, B., and Alonso-Betanzos, A. (2010). A new convex objective function for the supervised learning of single-layer neural networks. Pattern Recognition, 43(5):1984-1992.
- Gama, J., Medas, P., Castillo, G., and Rodrigues, P. (2004). Learning with drift detection. Intelligent Data Analysis, 8:213-237.
- Hénon, M. (1976). A two-dimensional mapping with a strange attractor. Communications in Mathematical Physics, 50(1):69-77.
- Islam, M., Sattar, A., Amin, F., Yao, X., and Murase, K. (2009). A new adaptive merging and growing algorithm for designing artificial neural networks. IEEE Transactions on Neural Networks, 20:1352-1357.
- Kwok, T.-Y. and Yeung, D.-Y. (1997). Constructive Algorithms for Structure Learning in FeedForward Neural Networks for Regression Problems. IEEE Transactions on Neural Networks, 8(3):630-645.
- Ma, L. and Khorasani, K. (2003). A new strategy for adaptively constructing multilayer feedforward neural networks. Neurocomputing, 51:361-385.
- Mackey, M. and Glass, L. (1977). Oscillation and chaos in physiological control systems. Science, 197(4300):287-289.
- Martínez-Rego, D., Pérez-Sánchez, B., Fontenla-Romero, O., and Alonso-Betanzos, A. (2011). A robust incremental learning method for non-stationary environments. Neurocomputing, 74(11):1800-1808.
- Murata, N. (1994). Network Information Criterion - Determining the number of hidden units for an Artificial Neural Network Model. IEEE Transactions on Neural Networks, 5(6):865-872.
- Nguyen, D. and Widrow, B. (1990). Improving the learning speed of 2-layer neural networks by choosing initial values of the adaptive weights. In Proceedings of the International Joint Conference on Neural Networks (IJCNN 1990), volume 3, pages 21-26.
- Parekh, R., Yang, J., and Honavar, V. (2000). Constructive Neural-Network Learning Algorithms for Pattern Classification. IEEE Transactions on Neural Networks.
- Pérez-Sánchez, B., Fontenla-Romero, O., Guijarro-Berdiñas, B., and Martínez-Rego, D. (2013). An online learning algorithm for adaptable topologies of neural networks. Expert Systems with Applications, 40(18):7294-7304.
- Reed, R. (1993). Pruning Algorithms: A Survey. IEEE Transactions on Neural Networks, 4:740-747.
- Sharma, S. K. and Chandra, P. (2010). Constructive neural networks: A review. International Journal of Engineering Science and Technology, 2(12):7847-7855.
- Vapnik, V. (1998). Statistical Learning Theory. John Wiley & Sons, Inc. New York.
- Yao, X. (1999). Evolving Artificial Neural Networks. Proceedings of the IEEE, 87(9):1423-1447.
Paper Citation
in Harvard Style
Pérez-Sánchez B., Fontenla-Romero O. and Guijarro-Berdiñas B. (2014). Self-adaptive Topology Neural Network for Online Incremental Learning. In Proceedings of the 6th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART, ISBN 978-989-758-015-4, pages 94-101. DOI: 10.5220/0004811500940101
in Bibtex Style
@conference{icaart14,
author={Beatriz Pérez-Sánchez and Oscar Fontenla-Romero and Bertha Guijarro-Berdiñas},
title={Self-adaptive Topology Neural Network for Online Incremental Learning},
booktitle={Proceedings of the 6th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART},
year={2014},
pages={94-101},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0004811500940101},
isbn={978-989-758-015-4},
}
in EndNote Style
TY - CONF
JO - Proceedings of the 6th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART
TI - Self-adaptive Topology Neural Network for Online Incremental Learning
SN - 978-989-758-015-4
AU - Pérez-Sánchez B.
AU - Fontenla-Romero O.
AU - Guijarro-Berdiñas B.
PY - 2014
SP - 94
EP - 101
DO - 10.5220/0004811500940101