Authors: Saeideh Mirzaei ¹; Pierrick Milhorat ²; Jérôme Boudy ³; Gérard Chollet ⁴ and Mikko Kurimo ¹
Affiliations: ¹ Aalto University, Finland; ² Kyoto University, Japan; ³ Telecom SudParis, France; ⁴ Telecom ParisTech, France
Keyword(s):
Speech Recognition, Speaker Adaptation, Linear Regression, Vocal Tract.
Related Ontology Subjects/Areas/Topics: Applications; Audio and Speech Processing; Digital Signal Processing; Incremental Learning; Multimedia; Multimedia Signal Processing; Pattern Recognition; Software Engineering; Telecommunications; Theory and Methods
Abstract:
To improve the performance of Automatic Speech Recognition (ASR) systems, the models must be retrained to better adjust to the speaker's voice characteristics, the environmental and channel conditions, or the context of the task. In this project we focus on the mismatch between the acoustic features used to train the model and the vocal characteristics of the front-end user of the system. To overcome this mismatch, speaker adaptation techniques have been used. A significant performance improvement is shown using constrained Maximum Likelihood Linear Regression (cMLLR) model adaptation, while fast adaptation is guaranteed by linear Vocal Tract Length Normalization (lVTLN). We achieved a relative gain of approximately 9.44% in the word error rate with unsupervised cMLLR adaptation. We also compare our ASR system with the Google ASR and show that, using these adaptation methods, we exceed its performance.
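As background for the abstract's central technique: cMLLR (also known as fMLLR) is equivalent to a feature-space adaptation, where a single speaker-specific affine transform is estimated and applied to every acoustic feature frame before scoring against the unadapted acoustic model. A minimal sketch of the application step, with placeholder transform values (not estimated from real adaptation data):

```python
import numpy as np

def apply_cmllr(features, A, b):
    """Map each frame x to A @ x + b (feature-space cMLLR adaptation)."""
    return features @ A.T + b

rng = np.random.default_rng(0)
dim = 13                          # e.g. 13 MFCC coefficients per frame
frames = rng.standard_normal((100, dim))

# Placeholder transform: in practice [A, b] is estimated per speaker by
# maximizing the likelihood of the adaptation data under the acoustic model.
A = np.eye(dim) * 0.9
b = np.full(dim, 0.1)

adapted = apply_cmllr(frames, A, b)
print(adapted.shape)              # (100, 13)
```

Because the transform acts on the features rather than the model parameters, the same adapted frames can be scored against all Gaussians without rewriting the model, which is what makes cMLLR attractive for fast, unsupervised adaptation.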