Multi-modal Emotion Recognition Method based on Computational Intelligence Techniques

  • 黃 冠傑

Student thesis: Doctoral Thesis

Abstract

Affective communication is an essential daily interaction skill which is also desired for use in digital artifacts. To learn this skill, digital artifacts first need the prerequisite ability of emotion recognition. This remains a challenging issue because people recognize emotions through a highly complex process influenced by many vague factors, and technical solutions in the digital world have not yet been able to fully mimic such a process. This study therefore proposes a collaborative emotion recognition method realized by multiple virtual experts that recognize human emotions from different perspectives (feature sets). Each virtual expert is an autonomous agent that implements a specific recognition technique in the individual recognition stage and shares its recognition result with the others in the collaborative recognition stage. To make an unbiased and high-accuracy collaborative decision, the proposed method first performs a reputation equalization process on all individual recognition results from the virtual experts. The basis of this process is constructed by a genetic learning procedure over the reputations and decision styles of the virtual experts. The equalized results are then aggregated and compromised according to the authority of the virtual experts to obtain the final decision. To verify the proposed approach, numerical data, a machine learning benchmark database, the audio-visual eNTERFACE’05 emotion database, and the Berlin database of emotional speech are used. In the experiments, geometric feature-based and appearance-based facial features, pitch and energy of voice, speed of speech, and Mel-Frequency Cepstrum Coefficients (MFCCs) are regarded as features, and their full or partial combinations are referenced by the virtual experts participating in the emotion recognition experiments. The experimental results show that the proposed approach is able to make better group decisions.
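
As a rough illustration of the collaborative stage described in the abstract, the Python sketch below scales each virtual expert's individual result by a reputation weight and aggregates the equalized results under per-expert authority weights. All function names, weights, and the emotion label set are illustrative assumptions rather than the thesis implementation; in particular, the genetic learning of reputations and decision styles is replaced here by fixed numbers.

import numpy as np

# Hypothetical label set; eNTERFACE'05 and the Berlin emotional speech
# database define their own emotion inventories.
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def equalize(scores, reputation):
    # Reputation equalization: scale one expert's individual result by its
    # learned reputation weight (the thesis obtains these weights with a
    # genetic learning procedure; a fixed number stands in for it here).
    return reputation * np.asarray(scores, dtype=float)

def collaborative_decision(expert_scores, reputations, authorities):
    # Aggregate the equalized results, weighting each virtual expert by its
    # authority, and return the emotion with the highest combined score.
    combined = np.zeros(len(EMOTIONS))
    for scores, rep, auth in zip(expert_scores, reputations, authorities):
        combined += auth * equalize(scores, rep)
    return EMOTIONS[int(np.argmax(combined))]

# Toy usage: three experts (e.g. facial geometry, facial appearance, prosody),
# each reporting per-emotion scores from its own feature subset.
scores = [
    [0.1, 0.0, 0.1, 0.6, 0.1, 0.1],
    [0.2, 0.1, 0.1, 0.4, 0.1, 0.1],
    [0.1, 0.1, 0.2, 0.3, 0.2, 0.1],
]
print(collaborative_decision(scores,
                             reputations=[0.9, 0.7, 0.8],
                             authorities=[1.0, 0.8, 0.6]))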
Date of Award: 2014 Jan 27
Original language: English
Supervisor: Yau-Hwang Kuo (Supervisor)
