Paper
Real-time Automatic Tongue Contour Tracking in Ultrasound Video for Guided Pronunciation Training

Authors: M. Hamed Mozaffari; Shuangyue Wen; Nan Wang and WonSook Lee

Affiliation: School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, Ontario, Canada

ISBN: 978-989-758-354-4

Keyword(s): Image Processing with Deep Learning, Ultrasound for Second Language Training, Ultrasound Video Tongue Contour Extraction and Tracking, Convolutional Neural Network, Augmented Reality for Pronunciation Training.

Abstract: Ultrasound technology is safe, relatively affordable, and capable of real-time performance. Recently, it has been employed to visualize tongue function for second language education, where visual feedback of tongue motion complements conventional audio feedback. However, recognizing tongue shape in noisy, low-contrast ultrasound images requires expertise that non-expert users lack. To alleviate this problem, the tongue dorsum can be tracked and visualized automatically, but the rapidity and complexity of tongue gestures, together with the low quality of ultrasound images, make this a challenging task for real-time applications. The progress of deep convolutional neural networks, already exploited successfully in many computer vision applications, offers a promising alternative for real-time automatic tongue contour tracking in ultrasound video. In this paper, a guided language training system is proposed that uses our automatic segmentation approach to highlight the tongue contour region in ultrasound images and superimpose it on the face profile of a language learner for better tongue localization. Assessments of the system revealed its flexibility and efficiency for training the pronunciation of difficult words via tongue function visualization. Moreover, our tongue tracking technique exceeds other methods in terms of performance and accuracy.
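To give a concrete picture of the per-frame pipeline summarized in the abstract (CNN segmentation of each ultrasound frame, then highlighting the predicted tongue contour on the image), the sketch below shows how such a loop could be wired together with PyTorch and OpenCV. It is an illustrative approximation only, not the authors' network: TinySegNet, the 128x128 frame size, and the 0.5 probability threshold are assumptions made for this example, and the paper's actual architecture, training procedure, and face-profile superimposition are not reproduced here.

# Illustrative sketch only: minimal per-frame segmentation-and-overlay loop.
# The model, frame size, and threshold are assumptions, not the paper's method.
import cv2
import numpy as np
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Toy encoder-decoder producing a 1-channel tongue-region probability map."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def overlay_contour(frame_gray, model, threshold=0.5):
    """Segment one grayscale ultrasound frame and draw the predicted contour."""
    x = torch.from_numpy(frame_gray).float().div(255.0)[None, None]  # 1x1xHxW
    with torch.no_grad():
        prob = torch.sigmoid(model(x))[0, 0].numpy()
    mask = (prob > threshold).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    vis = cv2.cvtColor(frame_gray, cv2.COLOR_GRAY2BGR)
    cv2.drawContours(vis, contours, -1, (0, 0, 255), 2)  # highlight tongue region
    return vis

if __name__ == "__main__":
    model = TinySegNet().eval()  # untrained toy weights, shown for structure only
    frame = np.random.randint(0, 256, (128, 128), dtype=np.uint8)  # stand-in frame
    print(overlay_contour(frame, model).shape)  # (128, 128, 3)

In a real-time system the same overlay function would be applied to each frame of the ultrasound stream, with the annotated frames then registered to and blended with the learner's face profile.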

License: CC BY-NC-ND 4.0

Paper citation in several formats:
Mozaffari, M.; Wen, S.; Wang, N. and Lee, W. (2019). Real-time Automatic Tongue Contour Tracking in Ultrasound Video for Guided Pronunciation Training. In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 1: GRAPP, ISBN 978-989-758-354-4, pages 302-309. DOI: 10.5220/0007523503020309

@conference{grapp19,
author={M. Hamed Mozaffari and Shuangyue Wen and Nan Wang and WonSook Lee},
title={Real-time Automatic Tongue Contour Tracking in Ultrasound Video for Guided Pronunciation Training},
booktitle={Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 1: GRAPP},
year={2019},
pages={302-309},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0007523503020309},
isbn={978-989-758-354-4},
}

TY - CONF

JO - Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 1: GRAPP
TI - Real-time Automatic Tongue Contour Tracking in Ultrasound Video for Guided Pronunciation Training
SN - 978-989-758-354-4
AU - Mozaffari, M.
AU - Wen, S.
AU - Wang, N.
AU - Lee, W.
PY - 2019
SP - 302
EP - 309
DO - 10.5220/0007523503020309
