Learning collaborative decision-making parameters for multimodal emotion recognition

Kuan Chieh Huang, Hsueh Yi Sean Lin, Jyh Chian Chan, Yau-Hwang Kuo

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

15 Citations (Scopus)

Abstract

In this paper, we present a novel multimodal emotion recognition technique that automatically learns decision-making parameters customized for each modality. The decision-making process is implemented in a multi-stage, collaborative fashion: given a classifier for a single modality, the classifier is regarded as a virtual expert, since classification methods perform emotion recognition according to certain expertise. In the reputation equalization stage, each expert's classification capability is quantitatively equalized to establish the reputation, or confidence, of that expert. To reconcile decisions among experts, the final decision is obtained as the weighted sum of all equalized reputation quantities, so that each expert's decision is made in collaboration with those of the others. Moreover, to learn the proposed model parameters, a genetic algorithm is tailored and applied to alleviate the local-minima problem during the search for an optimal solution. Experimental results show that the proposed collaborative decision-making model is effective for multimodal emotion recognition.
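The abstract's fusion scheme — per-modality "expert" scores combined by a weighted sum, with the weights learned by a genetic algorithm — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the synthetic score matrices, the specific GA operators (tournament selection, blend crossover, Gaussian mutation), and all parameter values are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not from the paper): 3 modalities ("experts"),
# 4 emotion classes, scored on a validation set of 200 samples.
n_experts, n_classes, n_samples = 3, 4, 200
labels = rng.integers(0, n_classes, n_samples)

# Synthetic stand-ins for the "equalized reputation" scores: each expert
# emits a per-class score vector per sample; some experts are noisier.
noise = [0.5, 1.0, 1.5]
scores = np.stack([
    np.eye(n_classes)[labels] + rng.normal(0, s, (n_samples, n_classes))
    for s in noise
])  # shape: (n_experts, n_samples, n_classes)

def fused_accuracy(w):
    """Weighted-sum fusion of expert scores, then argmax decision."""
    fused = np.tensordot(w, scores, axes=1)   # (n_samples, n_classes)
    return np.mean(fused.argmax(axis=1) == labels)

def genetic_search(pop_size=30, gens=40, mut=0.1):
    """Toy GA over the fusion weights: elitism, tournament selection,
    blend crossover, and Gaussian mutation."""
    pop = rng.random((pop_size, n_experts))
    for _ in range(gens):
        fit = np.array([fused_accuracy(w) for w in pop])
        new = [pop[fit.argmax()]]              # keep the best (elitism)
        while len(new) < pop_size:
            i, j = rng.integers(0, pop_size, 2)
            p1 = pop[i] if fit[i] >= fit[j] else pop[j]
            i, j = rng.integers(0, pop_size, 2)
            p2 = pop[i] if fit[i] >= fit[j] else pop[j]
            a = rng.random()
            child = a * p1 + (1 - a) * p2 + rng.normal(0, mut, n_experts)
            new.append(np.clip(child, 0, None))  # keep weights non-negative
        pop = np.array(new)
    fit = np.array([fused_accuracy(w) for w in pop])
    return pop[fit.argmax()]

w = genetic_search()
print("learned weights (normalized):", np.round(w / w.sum(), 3))
print("fused accuracy:", fused_accuracy(w))
```

Because the GA evaluates whole weight vectors by their fused accuracy rather than following a gradient, it can escape the local minima the abstract mentions; in this toy setting the learned weights should favor the least noisy expert.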

Original language: English
Title of host publication: 2013 IEEE International Conference on Multimedia and Expo, ICME 2013
DOIs
Publication status: Published - 2013
Event: 2013 IEEE International Conference on Multimedia and Expo, ICME 2013 - San Jose, CA, United States
Duration: 2013 Jul 15 - 2013 Jul 19


All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Computer Science Applications
