Fritzell, B., 1969. The velopharyngeal muscles in speech:
an electromyographic and cineradiographic study.
Acta Otolaryngologica, Suppl. 50.
Galatas, G., Potamianos, G., Makedon, F., 2012. Audio-
visual speech recognition incorporating facial depth
information captured by the Kinect. Proceedings of the
20th European Signal Processing Conference
(EUSIPCO), pp. 2714-2717.
Hardcastle, W. J., 1976. Physiology of Speech Production
- An Introduction for Speech Scientists. Academic
Press, London.
Herff, C., Janke, M., Wand, M. and Schultz, T., 2011.
Impact of Different Feedback Mechanisms in EMG-
based Speech Recognition. In Proceedings of
Interspeech 2011, Florence, Italy.
Kalgaonkar, K., Raj, B. and Hu, R., 2007. Ultrasonic
Doppler for voice activity detection. IEEE Signal
Processing Letters, Vol. 14, Issue 10, pp. 754–757.
Kalgaonkar, K. and Raj, B., 2008. Ultrasonic Doppler
sensor for speaker recognition. Internat. Conf. on
Acoustics, Speech, and Signal Processing.
Kuehn, D. P., Folkins, J. W. and Cutting, C. B., 1982.
Relationships between muscle activity and velar
position. Cleft Palate Journal, Vol. 19, Issue 1, pp.
25–35.
Levelt, W., 1989. Speaking: From Intention to
Articulation. MIT Press, Cambridge, MA.
Lubker, J. F., 1968. An electromyographic-
cinefluorographic investigation of velar function
during normal speech production. Cleft Palate
Journal, Vol. 5, Issue 1, p. 17.
Martins, P., Carbone, I., Pinto, A., Silva, A. and Teixeira,
A., 2008. European Portuguese MRI based speech
production studies. Speech Communication, Vol. 50,
Issue 11/12, pp. 925–952.
McGill, S., Juker, D. and Kropf, P., 1996. Appropriately
placed surface EMG electrodes reflect deep muscle
activity (psoas, quadratus lumborum, abdominal wall)
in the lumbar spine. In Journal of Biomechanics, Vol.
29, Issue 11, pp. 1503–1507.
Microsoft Kinect, Online: http://www.xbox.com/en-
US/kinect, accessed on 9 December 2013.
Patil, S. A. and Hansen, J. H. L., 2010. The physiological
microphone (PMIC): A competitive alternative for
speaker assessment in stress detection and speaker
verification. Speech Communication. Vol. 52, Issue 4,
pp. 327-340.
Pêra, V., Moura, A. and Freitas, D., 2004. LPFAV2: a new
multi-modal database for developing speech
recognition systems for an assistive technology
application. In SPECOM-2004, pp. 73-76.
Phang, C. W., Sutanto, J., Kankanhalli, A., Li, Y., Tan, B.
C. Y. and Teo, H. H., 2006. Senior citizens’
acceptance of information systems: A study in the
context of e-government services. IEEE Transactions
on Engineering Management, Vol. 53, Issue 4, pp.
555–569.
Plux Wireless Biosignals, Portugal, Online:
http://www.plux.info/, accessed on 9 December 2013.
Porbadnigk, A., Wester, M., Calliess, J. and Schultz, T.,
2009. EEG-based speech recognition: impact of
temporal effects. International Conference on Bio-
inspired Systems and Signal Processing, Biosignals
2009, Porto, Portugal, pp.376–381.
Quatieri, T. F., Messing, D., Brady, K., Campbell, W. B.,
Campbell, J. P., Brandstein, M., Weinstein, C. J.,
Tardelli, J. D. and Gatewood, P. D., 2006. Exploiting
non-acoustic sensors for speech enhancement. IEEE
Trans. Audio Speech Lang. Process., Vol. 14, Issue 2,
pp. 533–544.
Rossato, S., Teixeira, A. and Ferreira, L., 2006. Les
Nasales du Portugais et du Français: une étude
comparative sur les données EMMA. In XXVI
Journées d'Études de la Parole. Dinard, France.
Sá, F., Afonso, P., Ferreira, R. and Pera, V., 2003.
Reconhecimento Automático de Fala Contínua em
Português Europeu Recorrendo a Streams Audio-
Visuais. In The Proceedings of COOPMEDIA'2003 -
Workshop de Sistemas de Informação Multimédia,
Cooperativos e Distribuídos, Porto, Portugal.
Schultz, T. and Wand, M., 2010. Modeling coarticulation
in large vocabulary EMG-based speech recognition.
Speech Communication, Vol. 52, Issue 4, pp. 341-353.
Seikel, J. A., King, D. W., Drumright, D. G., 2010.
Anatomy and Physiology for Speech, Language, and
Hearing, 4th Ed., Delmar Learning.
Srinivasan, S., Raj, B. and Ezzat, T., 2010. Ultrasonic
sensing for robust speech recognition. Internat. Conf.
on Acoustics, Speech, and Signal Processing 2010.
Teixeira, A. and Vaz, F., 2000. Síntese Articulatória dos
Sons Nasais do Português. Anais do V Encontro para
o Processamento Computacional da Língua
Portuguesa Escrita e Falada (PROPOR), ICMC-USP,
Atibaia, São Paulo, Brasil, 2000, pp. 183-193.
Teixeira, A. and Vaz, F., 2001. European Portuguese
Nasal Vowels: An EMMA Study. 7th European
Conference on Speech Communication and
Technology, EuroSpeech – Scandinavia, pp. 1843-
1846.
Teixeira, A., Braga, D., Coelho, L., Fonseca, J.,
Alvarelhão, J., Martín, I., Queirós, A., Rocha, N.,
Calado, A. and Dias, M. S., 2009. Speech as the Basic
Interface for Assistive Technology. DSAI 2009 -
Proceedings of the 2nd International Conference on
Software Development for Enhancing Accessibility
and Fighting Info-Exclusion, Porto Salvo, Portugal.
Teixeira, A., Martins, P., Oliveira, C., Ferreira, C., Silva,
A. and Shosted, R., 2012. Real-time MRI for
Portuguese: database, methods and applications.
Proceedings of PROPOR 2012, LNCS Vol. 7243, pp.
306–317.
Toda, T., Nakamura, K., Nagai, T., Kaino, T., Nakajima,
Y. and Shikano, K., 2009. Technologies for Processing
Body-Conducted Speech Detected with Non-Audible
Murmur Microphone. Proceedings of Interspeech
2009, Brighton, UK.
Toth, A. R., Kalgaonkar, K., Raj, B. and Ezzat, T., 2010.
Synthesizing speech from Doppler signals. Internat.
Conference on Acoustics, Speech and Signal
Processing, pp. 4638–4641.
BIOSTEC2014-DoctoralConsortium