Enlarging Training Sets for Neural Networks

R. Gil-Pita, P. Jarabo-Amores, M. Rosa-Zurera, F. López-Ferreras

2004

Abstract

A study is presented comparing the performance of multilayer perceptrons, radial basis function networks, and probabilistic neural networks for classification. In many classification problems, probabilistic neural networks have outperformed other neural classifiers. Unfortunately, with this kind of network, the number of operations required to classify one pattern depends directly on the number of training patterns. This high computational cost makes the method difficult to implement in many real-time applications. Multilayer perceptrons, on the contrary, have a low computational cost after training, but the training set size required to achieve low error rates is generally large. In this paper we propose an alternative method for training multilayer perceptrons, using data knowledge derived from probabilistic neural network theory. Once the probability density functions have been estimated by the probabilistic neural network, a new training set can be generated by sampling from these estimated probability density functions. Results demonstrate that a multilayer perceptron trained with this enlarged training set achieves results as good as those obtained with a probabilistic neural network, but at a lower computational cost.
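A rough sketch of the sampling idea described above (not the authors' implementation): a PNN with Gaussian kernels estimates each class-conditional density as a mixture with one kernel per training pattern, so drawing from that estimate amounts to picking a training pattern of the class at random and adding Gaussian noise with the kernel width. The function name, the single shared width `sigma`, and the toy data are all illustrative assumptions.

```python
import numpy as np

def enlarge_training_set(X, y, sigma, n_new, rng=None):
    """Sample n_new extra patterns per class from the class-conditional
    PDFs estimated by a Gaussian-kernel PNN (one kernel per pattern,
    shared width sigma -- an assumption for this sketch).

    Sampling the mixture = choose a kernel centre uniformly among the
    class's training patterns, then add N(0, sigma^2 I) noise."""
    rng = np.random.default_rng(rng)
    X_new, y_new = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        # pick kernel centres uniformly within the class
        idx = rng.integers(0, len(Xc), size=n_new)
        noise = rng.normal(0.0, sigma, size=(n_new, X.shape[1]))
        X_new.append(Xc[idx] + noise)
        y_new.append(np.full(n_new, c))
    return np.vstack([X] + X_new), np.concatenate([y] + y_new)

# toy usage: two 2-D classes, 2 patterns each, enlarged by 100 per class
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
X_big, y_big = enlarge_training_set(X, y, sigma=0.05, n_new=100, rng=0)
```

The enlarged set `(X_big, y_big)` would then be used to train the multilayer perceptron in place of the original small set.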

References

  1. Rosenblatt, F. : Principles of Neurodynamics. New York: Spartan books (1962).
  2. Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, vol. 2, pp. 303-314, 1989.
  3. Hagan, M.T., Menhaj, M.B.: Training Feedforward Networks with the Marquardt Algorithm. IEEE Transactions on Neural Networks, vol. 5, no. 6, pp. 989-993, November 1994.
  4. Haykin, S.: Neural networks. A comprehensive foundation (second edition). Upper Saddle River, New Jersey: Prentice-Hall Inc. (1999)
  5. Bishop, C.M.: Neural networks for pattern recognition. New York: Oxford University Press Inc. (1995).
  6. Schwenker, F., Kestler, H.A., Palm, G.: Three learning phases for radial-basis-function networks. Neural Networks, Vol. 14, Issue 4-5, pp. 439-458, May 2001.
  7. Specht, D.F.: Probabilistic Neural Networks. Neural Networks, vol. 3, pp. 110-118, 1990.
  8. Abu-Mostafa, Y.S.: Hints. Neural Computation, vol. 7, pp. 639-671, July 1995.
  9. Niyogi, P., Girosi, F., Poggio, T.: Incorporating Prior Information in Machine Learning by Creating Virtual Examples. Proceedings of the IEEE, vol. 86, no. 11, pp. 2196-2209, November 1998.
  10. Gil-Pita, R., Jarabo-Amores, P., Vicen-Bueno, R., Rosa-Zurera, M.: Neural Solution for High Range Resolution Radar Classification. Lecture Notes in Computer Science, vol. 2687, June 2003.


Paper Citation


in Harvard Style

Gil-Pita R., Jarabo-Amores P., Rosa-Zurera M. and López-Ferreras F. (2004). Enlarging Training Sets for Neural Networks. In Proceedings of the First International Workshop on Artificial Neural Networks: Data Preparation Techniques and Application Development - Volume 1: ANNs, (ICINCO 2004) ISBN 972-8865-14-7, pages 23-31. DOI: 10.5220/0001149800230031


in Bibtex Style

@conference{anns04,
author={R. Gil-Pita and P. Jarabo-Amores and M. Rosa-Zurera and F. López-Ferreras},
title={Enlarging Training Sets for Neural Networks},
booktitle={Proceedings of the First International Workshop on Artificial Neural Networks: Data Preparation Techniques and Application Development - Volume 1: ANNs, (ICINCO 2004)},
year={2004},
pages={23-31},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0001149800230031},
isbn={972-8865-14-7},
}


in EndNote Style

TY - CONF
JO - Proceedings of the First International Workshop on Artificial Neural Networks: Data Preparation Techniques and Application Development - Volume 1: ANNs, (ICINCO 2004)
TI - Enlarging Training Sets for Neural Networks
SN - 972-8865-14-7
AU - Gil-Pita R.
AU - Jarabo-Amores P.
AU - Rosa-Zurera M.
AU - López-Ferreras F.
PY - 2004
SP - 23
EP - 31
DO - 10.5220/0001149800230031