Authors:
Héctor F. Satizábal M. and Andres Perez-Uribe
Affiliation:
University of Applied Sciences of Western Switzerland (HEIG-VD), Switzerland
Keyword(s):
Catastrophic interference, Incremental task, Incremental learning, Sequential learning, Growing neural gas.
Related Ontology Subjects/Areas/Topics:
Adaptive Architectures and Mechanisms; Artificial Intelligence; Biomedical Engineering; Biomedical Signal Processing; Computational Intelligence; Health Engineering and Technology Applications; Human-Computer Interaction; Learning Paradigms and Algorithms; Methodologies and Methods; Neural Based Data Mining and Complex Information Processing; Neural Networks; Neurocomputing; Neurotechnology, Electronics and Informatics; Pattern Recognition; Physiological Computing Systems; Sensor Networks; Signal Processing; Soft Computing; Stability and Instability in Artificial Neural Networks; Theory and Methods
Abstract:
Creating computational models from large and growing datasets is an important issue in current machine learning research, since most modelling approaches require prohibitive computational resources when applied to such data. This work presents the use of incremental learning algorithms within the framework of an incremental modelling approach. In particular, it presents GNG-m, an adaptation of the Growing Neural Gas (GNG) algorithm capable of circumventing the problem of catastrophic forgetting when modelling large datasets sequentially. We illustrate this by comparing the performance of GNG-m with that of the original GNG algorithm on a vector quantization task. Last but not least, we present the use of GNG-m in an incremental modelling task on a real-world database of temperature observations coming from a geographic information system (GIS). The dataset of more than one million multidimensional observations is split into seven parts and then reduced by vector quantization to a codebook of only thousands of prototypes.
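The abstract does not spell out GNG-m's modification, so the sketch below implements only the original GNG algorithm (Fritzke, 1995) and trains it chunk by chunk, mimicking the sequential setting described above. It is not the authors' GNG-m; the function name, all parameter values, and the toy three-chunk dataset (standing in for the seven parts of the GIS data) are illustrative assumptions.

```python
import numpy as np

def gng_fit(data, nodes=None, edges=None, errors=None, n_iters=5000,
            max_nodes=100, eps_b=0.05, eps_n=0.006, age_max=50,
            lam=100, alpha=0.5, d=0.995, seed=0):
    """Plain Growing Neural Gas (Fritzke, 1995). Passing the returned
    (nodes, edges, errors) back in resumes training on a new data chunk."""
    rng = np.random.default_rng(seed)
    if nodes is None:                           # fresh start: two random units
        i, j = rng.choice(len(data), 2, replace=False)
        nodes = [data[i].astype(float), data[j].astype(float)]
        edges = {(0, 1): 0}                     # (i, j) with i < j -> edge age
        errors = [0.0, 0.0]
    for t in range(1, n_iters + 1):
        x = data[rng.integers(len(data))]
        dists = [float(np.sum((w - x) ** 2)) for w in nodes]
        s1, s2 = (int(k) for k in np.argsort(dists)[:2])
        errors[s1] += dists[s1]                 # accumulate local error
        nodes[s1] += eps_b * (x - nodes[s1])    # adapt winner ...
        for (i, j) in list(edges):              # ... age its edges,
            if s1 in (i, j):                    # adapt its neighbours
                edges[(i, j)] += 1
                nbr = j if i == s1 else i
                nodes[nbr] += eps_n * (x - nodes[nbr])
        edges[(min(s1, s2), max(s1, s2))] = 0   # (re)connect winner pair
        edges = {e: a for e, a in edges.items() if a <= age_max}
        keep = sorted({k for e in edges for k in e})  # drop isolated units
        if len(keep) < len(nodes):
            remap = {old: new for new, old in enumerate(keep)}
            nodes = [nodes[k] for k in keep]
            errors = [errors[k] for k in keep]
            edges = {(remap[i], remap[j]): a for (i, j), a in edges.items()}
        if t % lam == 0 and len(nodes) < max_nodes:
            q = int(np.argmax(errors))          # worst unit q ...
            nbrs = [j if i == q else i for (i, j) in edges if q in (i, j)]
            f = max(nbrs, key=lambda n: errors[n])  # ... worst neighbour f
            r = len(nodes)                      # insert new unit r between them
            nodes.append(0.5 * (nodes[q] + nodes[f]))
            errors[q] *= alpha
            errors[f] *= alpha
            errors.append(errors[q])
            del edges[(min(q, f), max(q, f))]
            edges[(min(q, r), max(q, r))] = 0
            edges[(min(f, r), max(f, r))] = 0
        errors = [e * d for e in errors]        # global error decay
    return nodes, edges, errors

# Sequential setting from the abstract: the data arrive in chunks and the
# codebook is refined chunk by chunk (three toy chunks instead of seven parts).
rng = np.random.default_rng(42)
chunks = [rng.normal(loc=c, scale=0.1, size=(500, 2)) for c in (0.0, 1.0, 2.0)]
nodes, edges, errors = None, None, None
for chunk in chunks:
    nodes, edges, errors = gng_fit(chunk, nodes, edges, errors)
print(f"codebook size after 3 chunks: {len(nodes)} prototypes")
```

Trained this way, plain GNG keeps adapting its prototypes toward whichever chunk is currently presented, so the codebook drifts away from earlier chunks; this is the catastrophic forgetting that, per the abstract, GNG-m is designed to avoid.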