INTERPRETING EXTREME LEARNING MACHINE AS AN APPROXIMATION TO AN INFINITE NEURAL NETWORK

Elina Parviainen, Jaakko Riihimäki, Yoan Miche, Amaury Lendasse

Abstract

Extreme Learning Machine (ELM) is a neural network architecture in which the hidden layer weights are chosen randomly and the output layer weights are determined analytically. We interpret ELM as an approximation to a network with an infinite number of hidden units. The operation of the infinite network is captured by the neural network kernel (NNK). We compare ELM and NNK both as part of a kernel method and in a neural network context. Insights gained from this analysis lead us to strongly recommend performing model selection also on the variance of the ELM hidden layer weights, and not only on the number of hidden units, as is usually done with ELM. We also discuss some properties of ELM which may have been interpreted too strongly in previous works.
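The ELM training procedure summarized above, random hidden-layer weights followed by an analytic least-squares solve for the output weights, can be sketched as follows. This is a minimal illustration under stated assumptions (Gaussian hidden weights with a tunable variance, tanh hidden units, pseudoinverse solve); the function names are hypothetical and not taken from the paper.

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, weight_var=1.0, rng=None):
    """Fit a basic ELM regressor.

    Hidden-layer weights and biases are drawn at random with variance
    `weight_var` (the hyperparameter the paper recommends including in
    model selection); the output weights are then solved analytically
    with the Moore-Penrose pseudoinverse.
    """
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    W = rng.normal(0.0, np.sqrt(weight_var), size=(d, n_hidden))
    b = rng.normal(0.0, np.sqrt(weight_var), size=n_hidden)
    H = np.tanh(X @ W + b)          # random hidden-layer feature matrix
    beta = np.linalg.pinv(H) @ y    # analytic output-layer weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Apply the fixed random hidden layer, then the learned output layer."""
    return np.tanh(X @ W + b) @ beta

# Usage: fit a noisy 1-D function.
X = np.linspace(-3, 3, 200)[:, None]
y = np.sin(X[:, 0]) + 0.05 * np.random.default_rng(0).normal(size=200)
W, b, beta = elm_fit(X, y, n_hidden=100, weight_var=1.0, rng=0)
pred = elm_predict(X, W, b, beta)
```

In this sketch, sweeping `weight_var` alongside `n_hidden` during validation corresponds to the model-selection recommendation made in the abstract.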



Paper Citation


in Harvard Style

Parviainen E., Riihimäki J., Miche Y. and Lendasse A. (2010). INTERPRETING EXTREME LEARNING MACHINE AS AN APPROXIMATION TO AN INFINITE NEURAL NETWORK. In Proceedings of the International Conference on Knowledge Discovery and Information Retrieval - Volume 1: KDIR, (IC3K 2010) ISBN 978-989-8425-28-7, pages 65-73. DOI: 10.5220/0003071100650073


in Bibtex Style

@conference{kdir10,
author={Elina Parviainen and Jaakko Riihimäki and Yoan Miche and Amaury Lendasse},
title={INTERPRETING EXTREME LEARNING MACHINE AS AN APPROXIMATION TO AN INFINITE NEURAL NETWORK},
booktitle={Proceedings of the International Conference on Knowledge Discovery and Information Retrieval - Volume 1: KDIR, (IC3K 2010)},
year={2010},
pages={65-73},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0003071100650073},
isbn={978-989-8425-28-7},
}


in EndNote Style

TY - CONF
JO - Proceedings of the International Conference on Knowledge Discovery and Information Retrieval - Volume 1: KDIR, (IC3K 2010)
TI - INTERPRETING EXTREME LEARNING MACHINE AS AN APPROXIMATION TO AN INFINITE NEURAL NETWORK
SN - 978-989-8425-28-7
AU - Parviainen E.
AU - Riihimäki J.
AU - Miche Y.
AU - Lendasse A.
PY - 2010
SP - 65
EP - 73
DO - 10.5220/0003071100650073