Audio-visual emotion recognition using semi-coupled HMM and error-weighted classifier combination

Jen Chun Lin, Chung-Hsien Wu, Wen Li Wei, Chia Jui Liu

Research output: Conference contribution

Abstract

This paper presents an approach to automatic recognition of emotional states from audio-visual bimodal signals using a semi-coupled hidden Markov model and error-weighted classifier combination for Human-Computer Interaction (HCI). The proposed model combines a simplified state-based bimodal alignment strategy with a Bayesian classifier weighting scheme to obtain the optimal solution for audio-visual bimodal fusion. The state-based bimodal alignment strategy is proposed to align the temporal relation between the states of the audio and visual streams. The Bayesian classifier weighting scheme is adopted to explore the contributions of different audio-visual feature pairs to emotion recognition. For performance evaluation, audio-visual signals with four emotional states (happy, neutral, angry, and sad) were collected. Each of the four invited subjects was asked to utter 10 sentences per emotion to generate emotional speech and facial expressions. Experimental results show the efficiency and effectiveness of the proposed method.
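The abstract does not give implementation details, so the following is only a minimal illustrative sketch of the general idea of error-weighted combination of per-stream classifier scores: each stream's posterior is weighted by its estimated reliability before fusion. All names (error_weighted_fusion, audio_post, visual_post) and the inverse-error weighting rule are assumptions for illustration, not the authors' semi-coupled HMM or Bayesian weighting scheme.

```python
# Illustrative sketch: error-weighted fusion of audio and visual emotion classifiers.
# The weighting rule (1 - validation error, normalized) and all names are assumptions;
# the paper's semi-coupled HMM and Bayesian classifier weighting are not reproduced here.
import numpy as np

EMOTIONS = ["happy", "neutral", "angry", "sad"]

def error_weighted_fusion(audio_post, visual_post, audio_err, visual_err):
    """Combine per-emotion posteriors from two stream classifiers.

    audio_post, visual_post : length-4 sequences, posterior P(emotion | stream)
    audio_err, visual_err   : scalar validation error rates of each classifier
    """
    # Weight each stream by its reliability (1 - error), then normalize the weights.
    w_audio = 1.0 - audio_err
    w_visual = 1.0 - visual_err
    total = w_audio + w_visual
    fused = (w_audio * np.asarray(audio_post) + w_visual * np.asarray(visual_post)) / total
    return EMOTIONS[int(np.argmax(fused))], fused

# Example with made-up posteriors: audio favours "angry", video favours "happy".
label, scores = error_weighted_fusion(
    audio_post=[0.10, 0.15, 0.60, 0.15],
    visual_post=[0.45, 0.25, 0.20, 0.10],
    audio_err=0.20,   # audio classifier misclassifies 20% of validation samples
    visual_err=0.35,  # visual classifier is assumed less reliable here
)
print(label, scores)
```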

Original language: English
Title of host publication: APSIPA ASC 2010 - Asia-Pacific Signal and Information Processing Association Annual Summit and Conference
Pages: 903-906
Number of pages: 4
Publication status: Published - 2010
Event: 2nd Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2010 - Biopolis, Singapore
Duration: 2010 Dec 14 – 2010 Dec 17

Other

Other: 2nd Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2010
Country/Territory: Singapore
City: Biopolis
Period: 10-12-14 – 10-12-17

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
