Authors:
Alejandro Antonio Torres García; Carlos Alberto Reyes García and Luis Villaseñor Pineda
Affiliation:
National Institute of Astrophysics, Optics and Electronics, Mexico
Keyword(s):
Silent Speech Interfaces (SSI), Electroencephalograms (EEG), Unspoken Speech, Discrete Wavelet Transform (DWT), Classification.
Related Ontology Subjects/Areas/Topics:
Artificial Intelligence; Biomedical Engineering; Biomedical Signal Processing; Data Manipulation; Health Engineering and Technology Applications; Human-Computer Interaction; Methodologies and Methods; Neurocomputing; Neurotechnology, Electronics and Informatics; Pattern Recognition; Physiological Computing Systems; Sensor Networks; Soft Computing; Speech Recognition; Wavelet Transform
Abstract:
This work aims to interpret the EEG signals associated with the imagined pronunciation of words
from a reduced vocabulary, without moving the articulatory muscles and without uttering any audible
sound (unspoken speech). Specifically, the vocabulary reflects movements to control the cursor on the
computer. We recorded EEG signals from 21 subjects using a basic marker-based protocol. The discrete
wavelet transform (DWT) is used to extract features from the delimited windows, and a subset of them
with frequency ranges below 32 Hz is then selected. These subsets are used to train four classifiers: Naive
Bayes (NB), Random Forests (RF), support vector machine (SVM), and Bagging-RF. The results are still preliminary
but encouraging, because the accuracy rates are above 20%, i.e., above the chance level for five classes. The
implementation process, as well as some experiments with their corresponding results, is described.
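To make the described pipeline concrete, the following is a minimal sketch of DWT-based feature extraction followed by the four classifiers named in the abstract. The sampling rate (256 Hz), the Daubechies-2 wavelet, the 5-level decomposition, the per-channel energy features, and the randomly generated data are assumptions for illustration only; the paper does not specify these details here.

```python
# Hedged sketch: DWT sub-band features (< 32 Hz) + NB / RF / SVM / Bagging-RF.
# Assumptions (not from the abstract): 256 Hz sampling, 'db2' wavelet,
# 5 decomposition levels, energy per sub-band, synthetic data.
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier, BaggingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def dwt_features(window, wavelet="db2", level=5):
    """Energy of the DWT sub-bands below 32 Hz for each EEG channel.

    `window` has shape (n_channels, n_samples). With 256 Hz sampling and
    5 levels, cA5 and cD5..cD3 cover roughly 0-32 Hz, so only those are kept.
    """
    feats = []
    for channel in window:
        coeffs = pywt.wavedec(channel, wavelet, level=level)  # [cA5, cD5, ..., cD1]
        low_freq = coeffs[:4]                                  # cA5, cD5, cD4, cD3 (< 32 Hz)
        feats.extend(np.sum(c ** 2) for c in low_freq)         # sub-band energies
    return np.array(feats)

# Hypothetical data: EEG windows of imagined words and five class labels
# (one per cursor-control word).
rng = np.random.default_rng(0)
X = np.array([dwt_features(rng.standard_normal((4, 512))) for _ in range(100)])
y = rng.integers(0, 5, size=100)

classifiers = {
    "NB": GaussianNB(),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVM": SVC(kernel="rbf"),
    "Bagging-RF": BaggingClassifier(RandomForestClassifier(n_estimators=10),
                                    n_estimators=10, random_state=0),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```

On random data, the printed accuracies hover around the 20% chance level for five classes; the point of the sketch is only the shape of the pipeline, not the reported results.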