Co-articulation generation using maximum direction change and apparent motion for Chinese visual speech synthesis

Chung-Hsien Wu, Chung Han Lee, Ze Jing Chuang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)


This study presents an approach to automated lip synchronization and smoothing for Chinese visual speech synthesis. A facial animation system with a synchronization algorithm is also developed to visualize an existing Text-To-Speech system. Motion parameters for each viseme are first constructed from video footage of a human speaker. To synchronize the parameter set sequence with the speech signal, a maximum direction change algorithm is proposed to select significant parameter set sequences according to the speech duration. Moreover, to improve the smoothness of the co-articulation part at high speaking rates, four phoneme-dependent co-articulation functions are generated by integrating the Bernstein-Bézier curve with the apparent motion property. A Chinese visual speech synthesis system is built to evaluate the proposed approach. The synthesis result of the proposed system is compared with that of a real speaker, and the co-articulation generated by the proposed approach is also evaluated.
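The paper derives four phoneme-dependent co-articulation functions from Bernstein-Bézier curves; the details are not given in this record, but the core idea of blending two viseme parameter vectors along a cubic Bernstein-Bézier curve can be sketched as follows. The control-point weights `c1` and `c2` below are illustrative placeholders, not the paper's phoneme-dependent values:

```python
import numpy as np

def bezier_blend(p0, p3, t, c1=0.3, c2=0.7):
    """Blend two viseme parameter vectors with a cubic Bernstein-Bezier curve.

    p0, p3 : motion-parameter vectors of the preceding and following visemes.
    t      : normalized time in [0, 1] over the co-articulation interval.
    c1, c2 : placeholder control-point weights along the p0 -> p3 direction
             (the paper derives phoneme-dependent functions instead).
    """
    p0, p3 = np.asarray(p0, float), np.asarray(p3, float)
    p1 = p0 + c1 * (p3 - p0)   # inner control points shape the transition
    p2 = p0 + c2 * (p3 - p0)
    # Cubic Bernstein basis: B(t) = sum_i C(3,i) * (1-t)^(3-i) * t^i * P_i
    return ((1 - t) ** 3 * p0
            + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2
            + t ** 3 * p3)

# Example: transition from an open-mouth viseme to a closed one over 5 frames
open_v, closed_v = [1.0, 0.8], [0.1, 0.2]
frames = [bezier_blend(open_v, closed_v, t) for t in np.linspace(0.0, 1.0, 5)]
```

The curve interpolates the two viseme endpoints exactly (at t = 0 and t = 1) while the inner control points govern how quickly the mouth shape moves between them, which is what allows the transition to stay smooth at high speaking rates.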

Original language: English
Title of host publication: ICS 2010 - International Computer Symposium
Number of pages: 6
Publication status: Published - 2010 Dec 1
Event: 2010 International Computer Symposium, ICS 2010 - Tainan, Taiwan
Duration: 2010 Dec 16 - 2010 Dec 18

Publication series

Name: ICS 2010 - International Computer Symposium


Other: 2010 International Computer Symposium, ICS 2010

All Science Journal Classification (ASJC) codes

  • Computer Science (all)

