Authors:
Nicola Greggio¹, Alexandre Bernardino² and José Santos-Victor²
Affiliations:
¹ ARTS Lab - Scuola Superiore S. Anna; Instituto Superior Técnico, Portugal
² Instituto Superior Técnico, Portugal
Keyword(s):
Unsupervised learning, Self-adapting Gaussian mixture, Expectation maximization, Machine learning, Clustering.
Related Ontology Subjects/Areas/Topics:
Informatics in Control, Automation and Robotics; Intelligent Control Systems and Optimization; Machine Learning in Control Applications; Optimization Algorithms
Abstract:
Split-and-merge techniques have proven effective in overcoming the convergence problems of classical EM. In this paper we follow a split-and-merge approach and propose a new EM algorithm that uses an on-line variable number of Gaussian mixture components. We introduce a similarity measure between components to decide when to merge them. A set of adaptive thresholds keeps the number of mixture components close to the optimum. To limit the computational burden, our algorithm starts with a low initial number of Gaussians and adjusts it at runtime if necessary. We show the effectiveness of the method in a series of simulated experiments. Additionally, we compare the convergence rates of the proposed algorithm with those of classical EM.
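To illustrate the general idea described in the abstract, the sketch below implements a standard EM iteration for a Gaussian mixture plus a merge step that fuses the two most similar components. This is not the paper's algorithm: the similarity measure here (cosine similarity between the components' responsibility vectors), the merge threshold, and the moment-matched merge of parameters are all illustrative assumptions standing in for the measure and adaptive thresholds the paper actually proposes.

```python
import numpy as np

def gaussian_pdf(X, mean, cov):
    """Multivariate normal density evaluated at each row of X."""
    d = X.shape[1]
    diff = X - mean
    inv = np.linalg.inv(cov)
    norm = np.sqrt((2.0 * np.pi) ** d * np.linalg.det(cov))
    return np.exp(-0.5 * np.sum(diff @ inv * diff, axis=1)) / norm

def em_step(X, weights, means, covs):
    """One EM iteration; returns updated parameters and responsibilities."""
    N, d = X.shape
    K = len(weights)
    # E-step: posterior probability of each component for each sample
    resp = np.column_stack([weights[k] * gaussian_pdf(X, means[k], covs[k])
                            for k in range(K)])
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, covariances
    Nk = resp.sum(axis=0)
    weights = Nk / N
    means = (resp.T @ X) / Nk[:, None]
    covs = np.array([((resp[:, k, None] * (X - means[k])).T @ (X - means[k]))
                     / Nk[k] + 1e-6 * np.eye(d) for k in range(K)])
    return weights, means, covs, resp

def merge_most_similar(weights, means, covs, resp, threshold):
    """Merge the pair of components whose responsibility vectors are most
    similar (cosine similarity, an assumed stand-in for the paper's measure),
    if that similarity exceeds `threshold`."""
    K = len(weights)
    best, pair = -1.0, None
    for i in range(K):
        for j in range(i + 1, K):
            s = resp[:, i] @ resp[:, j] / (
                np.linalg.norm(resp[:, i]) * np.linalg.norm(resp[:, j]))
            if s > best:
                best, pair = s, (i, j)
    if pair is None or best < threshold:
        return weights, means, covs
    i, j = pair
    w = weights[i] + weights[j]
    m = (weights[i] * means[i] + weights[j] * means[j]) / w
    # Moment-matched covariance of the merged pair
    c = sum(weights[k] * (covs[k] + np.outer(means[k] - m, means[k] - m))
            for k in (i, j)) / w
    keep = [k for k in range(K) if k not in (i, j)]
    return (np.append(weights[keep], w),
            np.vstack([means[keep], m]),
            np.concatenate([covs[keep], c[None]], axis=0))

# Usage: fit 3 components to data drawn from 2 clusters, then merge once.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (150, 2)), rng.normal(4, 1, (150, 2))])
K = 3
weights = np.full(K, 1.0 / K)
means = X[rng.choice(len(X), K, replace=False)]
covs = np.array([np.eye(2)] * K)
for _ in range(20):
    weights, means, covs, resp = em_step(X, weights, means, covs)
weights, means, covs = merge_most_similar(weights, means, covs, resp, 0.0)
```

With a threshold of 0.0 the best pair is always merged, so the mixture shrinks from three components to two; in the paper this decision is instead governed by adaptive thresholds, and a corresponding split operation lets the component count also grow at runtime.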