Braz, D. S. A., Ribas, M. M., Dedivitis, R. A., Nishimoto,
I. N., and Barros, A. P. B. (2005). Quality of life and
depression in patients undergoing total and partial
laryngectomy. Clinics, 60(2):135-142.
Bright, A. K., and Coventry, L. (2013). Assistive technology for older adults: psychological and socio-emotional design requirements. In Proceedings of the 6th International Conference on PErvasive Technologies Related to Assistive Environments, pages 1-4, Rhodes, Greece.
Cheah, L. A., Bai, J., Gonzalez, J. A., Ell, S. R., Gilbert, J.
M., Moore, R. K., and Green, P. D. (2015). A user-
centric design of permanent magnetic articulography
based assistive speech technology. In Proceedings of the 8th BIOSIGNALS, pages 109-116, Lisbon, Portugal.
Denby, B., Schultz, T., Honda, K., Hueber, T., Gilbert, J.
M., and Brumberg, J. S. (2010). Silent speech
interfaces. Speech Communication, 52(4):270-287.
Doi, H., Nakamura, K., Toda, T., Saruwatari, H., and
Shikano, K. (2010). Esophageal speech enhancement
based on statistical voice conversion with Gaussian
mixture model. IEICE Transactions on Information
and Systems, 93(9):2472-2482.
Fagan, M. J., Ell, S. R., Gilbert, J. M., Sarrazin, E., and
Chapman, P. M. (2008). Development of a (silent)
speech recognition system for patients following
laryngectomy. Medical Engineering & Physics,
30(4):419-425.
Gilbert, J. M., Rybchenko, S. I., Hofe, R., Ell, S. R.,
Fagan, M. J., Moore, R. K., and Green, P. D. (2010).
Isolated word recognition of silent speech using
magnetic implants and sensors. Medical Engineering
& Physics, 32(10):1189-1197.
Gonzalez, J. A., Cheah, L. A., Bai, J., Ell, S. R., Gilbert, J.
M., Moore, R. K., and Green, P. D. (2014). Analysis
of phonetic similarity in a silent speech interface based
on permanent magnetic articulography. In Proceedings of the 15th INTERSPEECH, pages 1018-1022, Singapore.
Hirsch, T., Forlizzi, J., Goetz, J., Stoback, J., and Kurtz, C. (2000). The ELDer project: Social and emotional factors in the design of eldercare technologies. In Proceedings of the 2000 Conference on Universal Usability, pages 72-79, Arlington, USA.
Hofe, R., Bai, J., Cheah, L. A., Ell, S. R., Gilbert, J. M.,
Moore, R. K., and Green, P. D. (2013a). Performance
of the MVOCA silent speech interface across multiple
speakers. In Proceedings of the 14th INTERSPEECH, pages 1140-1143, Lyon, France.
Hofe, R., Ell, S. R., Fagan, M. J., Gilbert, J. M., Green, P.
D., Moore, R. K., and Rybchenko, S. I. (2013b).
Small-vocabulary speech recognition using a silent speech interface based on magnetic sensing. Speech
Communication, 55(1):22-32.
Leonard, R. G. (1984). A database for speaker-
independent digit recognition. In Proceedings of the 9th ICASSP, pages 328-331, San Diego, USA.
Lontis, E. R., Lund, M. E., Christensen, H. V., Gaihede, M., Caltenco, H. A., and Andreasen Struijk, L. N. S. (2010). Clinical evaluation of wireless inductive tongue computer interface for control of computers and assistive devices. In Proceedings of the International Conference of the IEEE Engineering in Medicine and Biology Society, pages 3365-3368, Buenos Aires, Argentina.
Maier-Hein, L., Metze, F., Schultz, T., and Waibel, A.
(2005). Session independent non-audible speech
recognition using surface electromyography. In
Automatic Speech Recognition and Understanding
Workshop, pages 331-336, Cancun, Mexico.
Martin, J. L., Murphy, E., Crowe, J. A., and Norris, B. J.
(2006). Capturing user requirements in medical device development: the role of ergonomics.
Physiological Measurement, 27(8):49-62.
Park, H., Kiani, M., Lee, H. M., Kim, J., Block, J.,
Gosselin, B., and Ghovanloo, M. (2012). A wireless
magnetoresistive sensing system for an intraoral
tongue-computer interface. IEEE Transactions on
Biomedical Circuits and Systems, 6(6):571-585.
Rabiner, L. R. (1989). A tutorial on Hidden Markov
Models and selected applications in speech
recognition. Proceedings of the IEEE, 77(2):257-286.
Tang, H., and Beebe, D. J. (2006). An oral interface for
blind navigation. IEEE Transactions on Neural
Systems and Rehabilitation Engineering, 14(1):116-
123.
Toda, T., Black, A. W., and Tokuda, K. (2008). Statistical
mapping between articulatory movements and acoustic
spectrum using a Gaussian mixture model. Speech
Communication, 50(3):215-227.
Toda, T., Nakagiri, M., and Shikano, K. (2012). Statistical
voice conversion techniques for body-conducted
unvoiced speech enhancement. IEEE Transactions on
Audio, Speech and Language Processing, 20(9):2505-
2517.
Toutios, A., and Margaritis, K. G. (2005). A support
vector approach to the acoustic-to-articulatory
mapping. In Proceedings of the 6th INTERSPEECH, pages 3221-3224, Lisbon, Portugal.
Wand, M., and Schultz, T. (2011). Session-independent
EMG-based speech recognition. In Proceedings of the 4th BIOSIGNALS, pages 295-300, Rome, Italy.
Wang, J., Samal, A., Green, J. R., and Rudzicz, F. (2012).
Sentence recognition from articulatory movements for
silent speech interfaces. In Proceedings of the 37th ICASSP, pages 4985-4988, Kyoto, Japan.
Young, S., Evermann, G., Gales, M., Hain, T., Kershaw, D., Liu, X., Moore, G., Odell, J., Ollason, D., Povey, D.,
Valtchev, V., and Woodland, P. (2009). The HTK
Book (for HTK Version 3.4.1). Cambridge University Engineering Department, Cambridge, UK.