Mixmodel driven 3D facial synthesis for computer-aided articulation training

Yeou Jiunn Chen, F. C. Liao, J. L. Wu, H. M. Yang, C. H. Wu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

3D facial animation has been widely used in many multimedia applications and can be applied to articulatory training for people with articulation disorders. In this paper, a mixmodel driven 3D facial synthesis method, including lip and tongue animation, is proposed to provide multimodal feedback such as the speech signal, lip motion, and tongue motion. Text-to-speech generates the speech signal for arbitrary text and provides syllable boundaries. Contextual knowledge based phoneme segmentation is then applied to estimate the phoneme boundaries within each syllable, so the number of 3D facial models can be effectively reduced. Parametric 3D tongue and lip movement models are smoothed with B-splines to eliminate jerkiness and to synthesize the 3D face with tongue and lip animation. By integrating the boundary information, speech synchronization can be easily accomplished. The multimodal feedback in the 3D facial animation is used to improve the efficiency of articulatory training. Preliminary experimental results show that this method is feasible.
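The B-spline smoothing step lends itself to a short illustration. The following is a minimal sketch, not the authors' implementation: the variable names (times, positions, fps) and the sample keyframe values are illustrative assumptions. It shows how keyframed positions of one control vertex of a parametric lip or tongue model could be interpolated with a cubic B-spline so that the animation is free of frame-to-frame jerkiness while staying aligned with the phoneme boundaries.

# Sketch of B-spline smoothing of animation keyframes (assumed setup,
# not the paper's code). Keyframe times would come from the phoneme
# boundaries estimated by the segmentation step.
import numpy as np
from scipy.interpolate import make_interp_spline

# Hypothetical per-phoneme keyframes: (time in seconds, 3D position of
# one control vertex of the parametric lip/tongue model).
times = np.array([0.00, 0.12, 0.25, 0.40, 0.55])
positions = np.array([[0.0, 0.0, 0.0],
                      [0.4, 0.1, 0.0],
                      [0.6, 0.3, 0.1],
                      [0.3, 0.2, 0.1],
                      [0.0, 0.0, 0.0]])

# Cubic B-spline through the keyframes; evaluating it at the frame rate
# yields one smooth 3D position per rendered frame.
spline = make_interp_spline(times, positions, k=3)
fps = 30
frame_times = np.arange(times[0], times[-1], 1.0 / fps)
trajectory = spline(frame_times)

Because the keyframe times are the phoneme boundaries shared with the synthesized speech, sampling the spline on the same timeline keeps the lip and tongue motion synchronized with the audio for free.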

Original language: English
Title of host publication: 4th Kuala Lumpur International Conference on Biomedical Engineering 2008, Biomed 2008
Publisher: Springer Verlag
Pages: 56-60
Number of pages: 5
Edition: 1
ISBN (Print): 9783540691389
DOIs
Publication status: Published - 2008

Publication series

Name: IFMBE Proceedings
Number: 1
Volume: 21 IFMBE
ISSN (Print): 1680-0737

All Science Journal Classification (ASJC) codes

  • Bioengineering
  • Biomedical Engineering
