
6 Conclusions
In this paper, a study is presented to compare the performance of three kinds of neural
networks: MLPs, RBFNs and PNNs. We also present a new strategy that combines the
characteristics of MLPs and PNNs. This strategy uses the estimates of the PDFs of the
classes, obtained from the actually available training data, to generate synthetic patterns
and, therefore, an enlarged training set. This new set is then used to train an MLP-based
classifier.
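As a minimal sketch of this idea (an illustration under our own assumptions, not code from the paper), suppose the class-conditional PDFs are modelled with Gaussian Parzen kernels, as in a PNN. Sampling from such an estimate amounts to choosing a stored training pattern at random and perturbing it with Gaussian noise of width sigma; the function names, the single smoothing parameter sigma and the enlargement factor below are illustrative assumptions.

    import numpy as np

    def sample_from_parzen_estimate(X_class, n_new, sigma, seed=None):
        # Draw n_new synthetic patterns from the Gaussian-kernel (Parzen) PDF
        # estimate of one class: pick a stored training pattern uniformly at
        # random and add isotropic Gaussian noise of width sigma.
        rng = np.random.default_rng(seed)
        centres = X_class[rng.integers(0, len(X_class), size=n_new)]
        return centres + rng.normal(scale=sigma, size=centres.shape)

    def enlarge_training_set(X, y, factor, sigma, seed=None):
        # Build the enlarged set class by class; the result can then be fed
        # to any MLP training routine.
        X_parts, y_parts = [X], [y]
        for c in np.unique(y):
            Xc = X[y == c]
            Xs = sample_from_parzen_estimate(Xc, factor * len(Xc), sigma, seed)
            X_parts.append(Xs)
            y_parts.append(np.full(len(Xs), c))
        return np.concatenate(X_parts), np.concatenate(y_parts)

The smoothing parameter plays the same role as the spread parameter of the PNN kernels; in practice it would be chosen by cross-validation on the actually available data.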
Both the MLP-based classifier and the RBFN-based classifier represent a compromise between
computational complexity and error rate: as the error rate decreases, the computational
complexity increases, and vice versa. The curves shown in figures 1 and 2 for the MLP-based
classifier and the RBFN-based classifier can be seen as segments of an overall curve relating
error rate to computational complexity. Along this curve, lower error rates are obtained only
at the price of higher computational cost, until the point of minimum error rate is reached;
from that point on, additional computational effort yields no further reduction in error rate.
The low-complexity part of this curve corresponds to the MLP-based classifier, while the
low-error part corresponds to the RBFN-based classifier.
The performance of the MLP trained with synthetic samples generated from the
estimated PDFs of the respective classes (PNN+MLP) significantly surpasses that of both
the RBFN-based and the MLP-based classifiers. Moreover, this improvement is achieved
on both fronts: error rate and computational complexity.
Compared with the PNN, the proposed method matches its classification performance
with a dramatically reduced computational complexity, which represents a substantial
gain in efficiency.
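The source of this gain can be made concrete with a rough operation count (a back-of-the-envelope sketch with illustrative numbers, not figures from the paper): a PNN must evaluate one kernel per stored training pattern for every pattern it classifies, whereas an MLP's forward pass is fixed by its architecture, regardless of how many real or synthetic patterns it was trained on.

    def pnn_classification_ops(n_train, d):
        # A PNN evaluates one Gaussian kernel per stored training pattern:
        # roughly a d-dimensional squared distance plus one exponential each,
        # so the cost grows linearly with the size of the training set.
        return n_train * (2 * d + 1)

    def mlp_classification_ops(layer_sizes):
        # An MLP forward pass costs about one multiply-accumulate per weight,
        # fixed by the architecture and independent of the training-set size.
        return sum(2 * a * b for a, b in zip(layer_sizes[:-1], layer_sizes[1:]))

    # Example: 10 000 stored 20-dimensional patterns vs a 20-40-3 MLP gives
    # pnn_classification_ops(10_000, 20) = 410 000 operations per pattern,
    # mlp_classification_ops([20, 40, 3]) = 1 840 operations per pattern.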
In summary, we can conclude that the proposed method of enlarging the training set in
order to improve the training of neural networks is very beneficial. The results confirm
that an MLP trained with a synthetically enlarged training set can generalize well on
actual data, making this strategy useful when only very small data sets are available.