Abstract
In this paper, we present a novel multimodal emotion recognition technique that automatically learns decision-making parameters customized for each modality. The decision-making process is implemented in a multi-stage, collaborative fashion. Given a single-modality classifier, that classifier is regarded as a virtual expert, since each classification method performs emotion recognition according to its own expertise. In the reputation-equalization stage, each expert's classification capability is quantitatively equalized to establish the reputation (or confidence) of that expert. To reconcile decisions among experts, the final decision is obtained as a weighted sum of the equalized reputation quantities, so that each expert's decision is made in collaboration with those of the others. Moreover, to learn the proposed model parameters, a genetic algorithm is tailored and applied to alleviate the local-minima problem during the search for an optimal solution. Experimental results show that the proposed collaborative decision-making model is effective for multimodal emotion recognition.
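The decision scheme described in the abstract (per-modality experts, reputation equalization, weighted-sum fusion, and genetic-algorithm parameter learning) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the equalization step here is simple min-max normalization of each expert's class scores, and the weight-learning step is a basic genetic algorithm with one-point crossover and Gaussian mutation; the function names (`equalize`, `fuse`, `ga_learn_weights`) are hypothetical.

```python
import random

def equalize(scores):
    """Min-max normalize one expert's class scores (assumed stand-in for
    the paper's reputation equalization) so experts are comparable."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [1.0 / len(scores)] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse(expert_scores, weights):
    """Weighted sum of equalized per-expert scores; return the winning class."""
    n_classes = len(expert_scores[0])
    eq = [equalize(s) for s in expert_scores]
    fused = [sum(w * e[c] for w, e in zip(weights, eq))
             for c in range(n_classes)]
    return max(range(n_classes), key=fused.__getitem__)

def accuracy(weights, samples):
    """Fraction of (expert_scores, label) pairs fused to the correct class."""
    return sum(fuse(x, weights) == y for x, y in samples) / len(samples)

def ga_learn_weights(samples, n_experts, pop=20, gens=30, seed=0):
    """Learn fusion weights with a simple genetic algorithm:
    truncation selection, one-point crossover, Gaussian mutation."""
    rng = random.Random(seed)
    population = [[rng.random() for _ in range(n_experts)] for _ in range(pop)]
    for _ in range(gens):
        ranked = sorted(population, key=lambda w: -accuracy(w, samples))
        elite = ranked[: pop // 2]          # keep the fitter half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_experts)   # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_experts)        # mutate one gene
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        population = elite + children
    return max(population, key=lambda w: accuracy(w, samples))
```

Because the population always retains the fittest half, the best accuracy found is non-decreasing across generations, which is how the GA sidesteps a single gradient path getting stuck in a local minimum.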
Original language | English
---|---
Title of host publication | 2013 IEEE International Conference on Multimedia and Expo, ICME 2013
DOIs | 
Publication status | Published - 2013
Event | 2013 IEEE International Conference on Multimedia and Expo, ICME 2013 - San Jose, CA, United States. Duration: 2013 Jul 15 → 2013 Jul 19
Other
Other | 2013 IEEE International Conference on Multimedia and Expo, ICME 2013
---|---
Country/Territory | United States
City | San Jose, CA
Period | 2013-07-15 → 2013-07-19
All Science Journal Classification (ASJC) codes
- Computer Networks and Communications
- Computer Science Applications