Abstract
This paper presents an approach to emotion recognition from speech signals and textual content. For speech analysis, thirty-three acoustic features are extracted from the speech input. After Principal Component Analysis (PCA), 14 principal components are selected for a discriminative representation, in which each principal component is a linear combination of the 33 original acoustic features and forms a feature subspace. Support Vector Machines (SVMs) are adopted to classify the emotional states. For text analysis, emotional keywords and emotion modification words are manually defined, and their emotion intensity levels are estimated from a collected emotion corpus. The final emotional state is determined from the emotion outputs of the acoustic and textual approaches. Experimental results show that the recognition accuracy of the integrated system exceeds that of either individual approach.
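The acoustic pipeline described above (33 features → PCA down to 14 components → SVM) can be sketched as follows. This is a minimal illustration with synthetic data, not the paper's implementation: the number of emotion classes, the kernel choice, and the random features are assumptions for demonstration only.

```python
# Sketch of the acoustic-feature pipeline: 33 features -> 14 principal
# components -> SVM classifier. Data and emotion labels are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_utterances, n_features = 200, 33            # 33 acoustic features per utterance
X = rng.normal(size=(n_utterances, n_features))
y = rng.integers(0, 4, size=n_utterances)     # 4 emotion classes (an assumption)

# Standardize, project onto 14 principal components, then classify with an SVM.
model = make_pipeline(StandardScaler(), PCA(n_components=14), SVC(kernel="rbf"))
model.fit(X, y)

# Each principal component is a linear combination of all 33 original features,
# so the loading matrix has shape (14, 33).
print(model.named_steps["pca"].components_.shape)  # (14, 33)
print(model.predict(X[:5]))
```

Each row of the PCA loading matrix mixes all 33 original acoustic features, matching the abstract's description of the reduced feature subspace.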
Original language | English |
---|---|
Title of host publication | 2004 IEEE International Conference on Multimedia and Expo (ICME) |
Pages | 53-56 |
Number of pages | 4 |
Volume | 1 |
Publication status | Published - 2004 |
Event | 2004 IEEE International Conference on Multimedia and Expo (ICME) - Taipei, Taiwan |
Duration | 2004 Jun 27 → 2004 Jun 30 |
All Science Journal Classification (ASJC) codes
- Engineering (all)