Authors:
Marie-Christine Suhner
and
Philippe Thomas
Affiliation:
Université de Lorraine and CNRS, France
Keyword(s):
Neural Network, Extreme Learning Machine, Multilayer Perceptron, Parameters Initialization, Randomly Fixed Hidden Neurons.
Related Ontology Subjects/Areas/Topics:
Artificial Intelligence; Biomedical Engineering; Biomedical Signal Processing; Computational Intelligence; Health Engineering and Technology Applications; Human-Computer Interaction; Learning Paradigms and Algorithms; Methodologies and Methods; Neural Networks; Neurocomputing; Neurotechnology, Electronics and Informatics; Pattern Recognition; Physiological Computing Systems; Sensor Networks; Signal Processing; Soft Computing; Theory and Methods
Abstract:
Neural networks are well-known tools able to learn a model from data with good accuracy. However, they suffer from long training times, which may be prohibitively expensive. One alternative is to fix the weights and biases connecting the input layer to the hidden layer. This approach has recently been named the extreme learning machine (ELM), which is able to learn a model quickly. The multilayer perceptron and the ELM have identical structures; the main difference is that in the ELM only the parameters linking the hidden layer to the output layer are learned. The weights and biases connecting the input layer to the hidden layer are chosen randomly and do not evolve during learning. The impact of the choice of these random parameters on model accuracy has not been studied in the literature. This paper draws on the extensive literature concerning the initialization of feedforward neural networks. Different feedforward neural network initialization algorithms are recalled and used to determine the ELM parameters connecting the input layer to the hidden layer. These algorithms are tested and compared on several regression benchmark problems.
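To make the ELM mechanism described above concrete, here is a minimal sketch of ELM regression. It is not the paper's method: the hidden-layer size, the sigmoid activation, and the plain Gaussian draw for the input-to-hidden parameters are all illustrative assumptions (the paper's point is precisely that this random draw can be replaced by better initialization schemes). Only the hidden-to-output weights are fitted, by linear least squares.

```python
# Minimal ELM sketch for regression (illustrative assumptions: sigmoid
# activation, Gaussian random input weights, 20 hidden neurons).
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=20):
    """Fit an ELM: random input-to-hidden parameters, learned output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random hidden biases (never trained)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # only these weights are learned
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy regression problem: noisy sine wave.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.05 * rng.normal(size=200)
W, b, beta = elm_fit(X, y)
print("train RMSE:", np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2)))
```

Because the least-squares step is a single closed-form solve rather than iterative gradient descent, training is fast; the quality of the fit then depends on how W and b were drawn, which is the question the paper investigates.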