An articulation training system with intelligent interface and multimode feedbacks to articulation disorders

Yeou Jiunn Chen, Jiunn-Liang Wu, Hui Mei Yang, Chung-Hsien Wu, Chih Chang Chen, Shan Shan Ju

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

Articulation training that combines many kinds of stimuli and messages, such as visual, voice, and articulatory information, can teach users to pronounce correctly and improve their articulatory ability. In this paper, an articulation training system with an intelligent interface and multimode feedback is proposed to improve the performance of articulation training. A dependent network is designed to model the clinical knowledge that speech-language pathologists use in speech evaluation. Automatic speech recognition with the dependent network is then applied to identify pronunciation errors. In addition, a hierarchical Bayesian network is proposed to recognize the user's emotion from speech. Using the pronunciation errors and the user's emotion, the articulation training sentences can be dynamically selected. Finally, a 3D facial animation teaches users to pronounce a sentence using speech, lip motion, and tongue motion. Experimental results demonstrate the usefulness of the proposed method and system.
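The abstract describes a selection step in which detected pronunciation errors and the recognized emotion jointly determine the next training sentence. As a rough illustration only, the sketch below shows one way such a selection could work; all names, phoneme sets, weights, and the scoring rule are assumptions for illustration, not the authors' implementation:

```python
from dataclasses import dataclass

@dataclass
class Sentence:
    text: str
    phonemes: set      # phonemes this sentence exercises
    difficulty: float  # 0.0 (easy) .. 1.0 (hard)

def select_sentence(sentences, error_phonemes, emotion):
    """Pick the sentence that best covers the user's error phonemes,
    biased toward easier material when the user seems discouraged."""
    # A sad or frustrated user gets easier sentences; otherwise aim higher.
    target_difficulty = 0.3 if emotion in ("sad", "angry") else 0.7

    def score(s):
        coverage = len(s.phonemes & error_phonemes) / max(len(error_phonemes), 1)
        return coverage - abs(s.difficulty - target_difficulty)

    return max(sentences, key=score)

corpus = [
    Sentence("she sells seashells by the seashore", {"sh", "s", "l"}, 0.8),
    Sentence("see the show", {"s", "sh"}, 0.3),
]
```

With errors on /s/ and /sh/, a user recognized as "sad" would receive the easier second sentence, while a "happy" user would receive the harder first one.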

Original language: English
Title of host publication: 2009 International Conference on Asian Language Processing
Subtitle of host publication: Recent Advances in Asian Language Processing, IALP 2009
Pages: 3-6
Number of pages: 4
DOI: 10.1109/IALP.2009.10
Publication status: Published - 2009 Dec 1
Event: 2009 International Conference on Asian Language Processing: Recent Advances in Asian Language Processing, IALP 2009 - Singapore, Singapore
Duration: 2009 Dec 7 - 2009 Dec 9

Publication series

Name: 2009 International Conference on Asian Language Processing: Recent Advances in Asian Language Processing, IALP 2009

Other

Other: 2009 International Conference on Asian Language Processing: Recent Advances in Asian Language Processing, IALP 2009
Country: Singapore
City: Singapore
Period: 2009-12-07 - 2009-12-09

Fingerprint

emotion
stimulus
articulation
ability
language
evaluation
performance
teaching
tongue
Bayesian networks
animation
usefulness
speech-language pathologists
automatic speech recognition

All Science Journal Classification (ASJC) codes

  • Language and Linguistics
  • Linguistics and Language

Cite this

Chen, Y. J., Wu, J-L., Yang, H. M., Wu, C-H., Chen, C. C., & Ju, S. S. (2009). An articulation training system with intelligent interface and multimode feedbacks to articulation disorders. In 2009 International Conference on Asian Language Processing: Recent Advances in Asian Language Processing, IALP 2009 (pp. 3-6). [5380791] (2009 International Conference on Asian Language Processing: Recent Advances in Asian Language Processing, IALP 2009). https://doi.org/10.1109/IALP.2009.10
@inproceedings{0831baf18921456f84b6b5b4097f8ff7,
title = "An articulation training system with intelligent interface and multimode feedbacks to articulation disorders",
abstract = "Articulation training with many kinds of stimulus and messages such as visual, voice, and articulatory information can teach user to pronounce correctly and improve user's articulatory ability. In this paper, an articulation training system with intelligent interface and multimode feedbacks is proposed to improve the performance of articulation training. Dependent network is designed to model clinical knowledge of speech-language pathologists used in speech evaluation Automatic speech recognition with dependent network is then apply to identify the pronunciation errors. Besides, hierarchical Bayesian network is proposed to recognize user's emotion from speeches. With the information of pronunciation errors and user's emotion, the articulation training sentences can be dynamically selected. Finally, a 3D facial animation is provided to teach users to pronounce a sentence by using speech, lip motion, and tongue motion. Experimental results reveal the usefulness of proposed method and system.",
author = "Chen, {Yeou Jiunn} and Jiunn-Liang Wu and Yang, {Hui Mei} and Chung-Hsien Wu and Chen, {Chih Chang} and Ju, {Shan Shan}",
year = "2009",
month = "12",
day = "1",
doi = "10.1109/IALP.2009.10",
language = "English",
isbn = "9780769539041",
series = "2009 International Conference on Asian Language Processing: Recent Advances in Asian Language Processing, IALP 2009",
pages = "3--6",
booktitle = "2009 International Conference on Asian Language Processing",

}


