
Authors: M. Hamed Mozaffari, Shuangyue Wen, Nan Wang and WonSook Lee

Affiliation: School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, Ontario, Canada

Keyword(s): Image Processing with Deep Learning, Ultrasound for Second Language Training, Ultrasound Video Tongue Contour Extraction and Tracking, Convolutional Neural Network, Augmented Reality for Pronunciation Training.

Related Ontology Subjects/Areas/Topics: Animation and Simulation ; Computer Vision, Visualization and Computer Graphics ; Computer-Supported Education ; e-Learning ; e-Learning Applications and Computer Graphics ; Graphical Interfaces ; Interactive Environments ; Real-Time Visual Simulation

Abstract: Ultrasound technology is safe, relatively affordable, and capable of real-time performance. Recently, it has been employed to visualize tongue function for second language education, where visual feedback of tongue motion complements conventional audio feedback. However, recognizing the tongue shape in noisy, low-contrast ultrasound images requires expertise that non-expert users lack. To alleviate this problem, the tongue dorsum can be tracked and visualized automatically. However, the rapidity and complexity of tongue gestures, together with the low quality of ultrasound images, make this a challenging task for real-time applications. The progress of deep convolutional neural networks has been successfully exploited in various computer vision applications, and it provides a promising alternative for real-time automatic tongue contour tracking in ultrasound video. In this paper, a guided language training system is proposed that benefits from our automatic segmentation approach to highlight the tongue contour region in ultrasound images and superimpose it on the face profile of a language learner for better tongue localization. Assessments of the system revealed its flexibility and efficiency for training the pronunciation of difficult words via tongue function visualization. Moreover, our tongue tracking technique exceeds other methods in terms of performance and accuracy.

CC BY-NC-ND 4.0


Paper citation in several formats:
Mozaffari, M.; Wen, S.; Wang, N. and Lee, W. (2019). Real-time Automatic Tongue Contour Tracking in Ultrasound Video for Guided Pronunciation Training. In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2019) - GRAPP; ISBN 978-989-758-354-4; ISSN 2184-4321, SciTePress, pages 302-309. DOI: 10.5220/0007523503020309

@conference{grapp19,
author={M. Hamed Mozaffari and Shuangyue Wen and Nan Wang and WonSook Lee},
title={Real-time Automatic Tongue Contour Tracking in Ultrasound Video for Guided Pronunciation Training},
booktitle={Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2019) - GRAPP},
year={2019},
pages={302-309},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0007523503020309},
isbn={978-989-758-354-4},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2019) - GRAPP
TI - Real-time Automatic Tongue Contour Tracking in Ultrasound Video for Guided Pronunciation Training
SN - 978-989-758-354-4
IS - 2184-4321
AU - Mozaffari, M.
AU - Wen, S.
AU - Wang, N.
AU - Lee, W.
PY - 2019
SP - 302
EP - 309
DO - 10.5220/0007523503020309
PB - SciTePress