Authors: Roman Sergienko 1 and Elena Loseva 2
Affiliations: 1 Ulm University, Germany; 2 Siberian State Aerospace University, Russian Federation
Keyword(s):
Emotion Recognition, Gender Identification, Neural Network, Multi-criteria Genetic Programming, Feature Selection, Speech Analysis.
Related Ontology Subjects/Areas/Topics: Artificial Intelligence; Computational Intelligence; Evolutionary Computing; Genetic Algorithms; Human Factors & Human-System Interface; Human-Machine Interfaces; Hybrid Learning Systems; Industrial Engineering; Informatics in Control, Automation and Robotics; Intelligent Control Systems and Optimization; Optimization Algorithms; Optimization Problems in Signal Processing; Robotics and Automation; Signal Processing, Sensors, Systems Modeling and Control; Soft Computing
Abstract:
In supervised learning scenarios there are various existing methods for solving the task of feature selection for automatic speaker state analysis, and many of them achieve reasonable results. Feature selection in unsupervised learning scenarios is a more complicated problem because there are no class labels to guide the search for relevant information. Supervised feature selection methods are "wrapper" techniques that require a learning algorithm to evaluate candidate feature subsets, whereas unsupervised feature selection methods are "filters" that are independent of any learning algorithm; the two are usually applied separately. In this paper, we propose a method that performs supervised and unsupervised feature selection simultaneously, based on a multi-criteria evolutionary procedure consisting of two stages: a self-adjusting multi-criteria genetic algorithm and self-adjusting multi-criteria genetic programming. The proposed approach was compared with several other feature selection methods on four audio corpora for speaker emotion recognition and speaker gender identification. The results show that the developed technique increases emotion recognition accuracy by up to 46.5% and gender identification accuracy by up to 20.5%.
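To illustrate the general idea of combining a supervised "wrapper" criterion and an unsupervised "filter" criterion in one multi-criteria genetic search, the sketch below shows a minimal two-objective genetic algorithm for feature selection. It is a hypothetical illustration, not the authors' implementation: the wrapper objective (cross-validated k-NN accuracy), the filter objective (mean absolute correlation among selected features), the Pareto-survival scheme, and all parameter values are assumptions; the paper's second stage (self-adjusting genetic programming) and the self-adjustment of operator probabilities are omitted.

# Hypothetical sketch: two-objective GA for feature selection, combining a
# supervised wrapper criterion with an unsupervised filter criterion.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def evaluate(mask, X, y):
    """Return (wrapper accuracy, filter redundancy) for a binary feature mask."""
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return 0.0, 1.0
    Xs = X[:, idx]
    # Supervised wrapper criterion: cross-validated k-NN accuracy (maximize).
    acc = cross_val_score(KNeighborsClassifier(3), Xs, y, cv=3).mean()
    # Unsupervised filter criterion: mean |correlation| of selected features (minimize).
    if idx.size == 1:
        red = 0.0
    else:
        c = np.corrcoef(Xs, rowvar=False)
        red = np.abs(c[np.triu_indices_from(c, k=1)]).mean()
    return acc, red

def dominates(a, b):
    # a, b are (accuracy, redundancy); maximize accuracy, minimize redundancy.
    return a[0] >= b[0] and a[1] <= b[1] and (a[0] > b[0] or a[1] < b[1])

def pareto_ga(X, y, pop_size=20, generations=30, p_mut=0.05):
    n_feat = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n_feat))
    survivors = pop
    for _ in range(generations):
        scores = [evaluate(m, X, y) for m in pop]
        # Keep the non-dominated masks, then refill the population by mutation.
        front = [i for i, s in enumerate(scores)
                 if not any(dominates(scores[j], s)
                            for j in range(len(pop)) if j != i)]
        survivors = pop[front]
        offspring = []
        while len(survivors) + len(offspring) < pop_size:
            child = survivors[rng.integers(len(survivors))].copy()
            flip = rng.random(n_feat) < p_mut
            child[flip] ^= 1
            offspring.append(child)
        pop = np.vstack([survivors] + offspring) if offspring else survivors
    return survivors  # Pareto-approximate set of feature masks

Each returned mask is a trade-off between predictive accuracy and feature redundancy; a final subset can then be chosen from this front, for example by preferring the smallest mask whose accuracy is within a tolerance of the best one.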