Authors: João Freitas (1); António Teixeira (2) and Miguel Sales Dias (3)
Affiliations: (1) Microsoft Language Development Center, ISCTE-Lisbon University Institute and Universidade de Aveiro, Portugal; (2) Universidade de Aveiro, Portugal; (3) Microsoft Language Development Center and ISCTE-Lisbon University Institute, Portugal
Keyword(s): Silent speech, Human-computer interface, European Portuguese, Surface electromyography, Nasality.
Related Ontology Subjects/Areas/Topics:
Applications and Services; Artificial Intelligence; Biomedical Engineering; Biomedical Signal Processing; Computer Vision, Visualization and Computer Graphics; Data Manipulation; Health Engineering and Technology Applications; Human-Computer Interaction; Medical Image Detection, Acquisition, Analysis and Processing; Methodologies and Methods; Neurocomputing; Neurotechnology, Electronics and Informatics; Pattern Recognition; Physiological Computing Systems; Physiological Processes and Bio-Signal Modeling, Non-Linear Dynamics; Sensor Networks; Soft Computing; Speech Recognition
Abstract:
A Silent Speech Interface (SSI) aims at performing Automatic Speech Recognition (ASR) in the absence of an intelligible acoustic signal. It can be used as a human-computer interaction modality in high-background-noise environments, such as living rooms, or to aid speech-impaired individuals, whose numbers increase with population ageing. If this interaction modality is made available for users' own native language with adequate performance, then, since it does not rely on acoustic information, it will be less susceptible to problems related to environmental noise, privacy, information disclosure and the exclusion of speech-impaired persons. To contribute to the existence of this promising modality for Portuguese, for which no SSI implementation is known, we are exploring and evaluating the potential of state-of-the-art approaches. One of the major challenges we face in SSI for European Portuguese is the recognition of nasality, a core characteristic of the language's Phonetics and Phonology. In this paper, a silent speech recognition experiment based on Surface Electromyography is presented. The results confirmed recognition problems between minimal pairs of words that differ only in the nasality of one phone, causing 50% of the total error and evidencing accuracy degradation, which correlates well with existing knowledge.
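The abstract describes isolated-word recognition from Surface Electromyography signals. The paper's actual feature set and classifier are not given here, so the following is only a minimal illustrative sketch of the general pipeline such experiments use: time-domain features often applied to sEMG (mean absolute value, root mean square, zero-crossing rate) feeding a simple nearest-centroid classifier, run on synthetic signals. All names (`synth_emg`, `word_a`, `word_b`) and parameter values are hypothetical, not taken from the paper.

```python
import math
import random

def features(signal):
    # Time-domain sEMG features: mean absolute value, RMS, zero-crossing rate.
    n = len(signal)
    mav = sum(abs(x) for x in signal) / n
    rms = math.sqrt(sum(x * x for x in signal) / n)
    zc = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)
    return (mav, rms, zc / n)

def synth_emg(amplitude, freq, rng, n=200, noise=0.1):
    # Stand-in for a recorded sEMG frame: a noisy sinusoid (purely synthetic).
    return [amplitude * math.sin(2 * math.pi * freq * i / n) + rng.gauss(0, noise)
            for i in range(n)]

def centroid(vectors):
    # Per-dimension mean of a list of feature vectors.
    return tuple(sum(v[i] for v in vectors) / len(vectors)
                 for i in range(len(vectors[0])))

def classify(x, centroids):
    # Assign x to the label whose centroid is nearest in Euclidean distance.
    return min(centroids, key=lambda label:
               sum((a - b) ** 2 for a, b in zip(x, centroids[label])))

rng = random.Random(0)
train = {"word_a": [features(synth_emg(1.0, 5, rng)) for _ in range(10)],
         "word_b": [features(synth_emg(3.0, 12, rng)) for _ in range(10)]}
centroids = {label: centroid(vs) for label, vs in train.items()}

held_out = ([("word_a", features(synth_emg(1.0, 5, rng))) for _ in range(5)]
            + [("word_b", features(synth_emg(3.0, 12, rng))) for _ in range(5)])
accuracy = sum(classify(f, centroids) == label
               for label, f in held_out) / len(held_out)
```

The two synthetic "words" here are easily separable by amplitude; the point of the paper is precisely that real minimal pairs differing only in nasality produce much more similar muscle-activity patterns, which is why they dominated the reported error.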