Emotion recognition using acoustic features and textual content

Ze Jing Chuang, Chung-Hsien Wu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

44 Citations (Scopus)

Abstract

This paper presents an approach to emotion recognition from speech signals and textual content. In the analysis of speech signals, thirty-three acoustic features are extracted from the speech input. After Principal Component Analysis (PCA), 14 principal components are selected for a discriminative representation. In this representation, each principal component is a linear combination of the 33 original acoustic features, and together the components form a feature subspace. Support Vector Machines (SVMs) are adopted to classify the emotional states. In text analysis, all emotional keywords and emotion modification words are manually defined, and their emotion intensity levels are estimated from a collected emotion corpus. The final emotional state is determined from the emotion outputs of the acoustic and textual approaches. Experimental results show that the emotion recognition accuracy of the integrated system is better than that of either individual approach.
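The paper itself contains no code, but the described pipeline is simple to sketch. The following is a minimal illustration using scikit-learn: PCA reduces 33 acoustic features to 14 principal components feeding an SVM, a keyword/modifier table scores the text, and the two outputs are fused at the decision level. The random data, the keyword and modifier tables, and the equal fusion weight are invented placeholders, not the paper's corpus-estimated values or its actual fusion rule.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: N utterances x 33 acoustic features, with emotion labels.
rng = np.random.default_rng(0)
X_acoustic = rng.normal(size=(200, 33))
y = rng.choice(["happy", "angry", "sad", "neutral"], size=200)

# Acoustic path: PCA projects the 33 features onto 14 principal components
# (each a linear combination of the originals); an SVM with probability
# outputs classifies emotion so the posteriors can be fused later.
acoustic_clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=14),
    SVC(probability=True),
)
acoustic_clf.fit(X_acoustic, y)
labels = list(acoustic_clf.classes_)

# Textual path: emotional keywords carry an (emotion, intensity) pair and
# modification words scale the next keyword's intensity. These tables are
# invented; the paper estimates the intensities from an emotion corpus.
EMOTION_KEYWORDS = {"great": ("happy", 0.8), "terrible": ("angry", 0.7)}
MODIFIERS = {"very": 1.5, "slightly": 0.5}

def text_emotion_scores(tokens, labels):
    """Return normalized per-emotion scores for a tokenized sentence."""
    scores = dict.fromkeys(labels, 0.0)
    scale = 1.0
    for tok in tokens:
        if tok in MODIFIERS:
            scale = MODIFIERS[tok]          # modifier applies to next keyword
        elif tok in EMOTION_KEYWORDS:
            emotion, intensity = EMOTION_KEYWORDS[tok]
            scores[emotion] += scale * intensity
            scale = 1.0
    total = sum(scores.values())
    return {k: (v / total if total else 1.0 / len(labels))
            for k, v in scores.items()}

# Decision-level fusion: weighted sum of the two posteriors (equal weight
# w=0.5 assumed here), then pick the highest-scoring emotion.
def fuse(acoustic_probs, text_scores, labels, w=0.5):
    combined = [w * p + (1 - w) * text_scores[l]
                for p, l in zip(acoustic_probs, labels)]
    return labels[int(np.argmax(combined))]

probs = acoustic_clf.predict_proba(X_acoustic[:1])[0]
print(fuse(probs, text_emotion_scores("very great day".split(), labels), labels))
```

The `probability=True` option on `SVC` enables Platt-scaled posterior estimates, which is one reasonable way to get comparable scores from the acoustic classifier for fusion with the normalized text scores.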

Original language: English
Title of host publication: 2004 IEEE International Conference on Multimedia and Expo (ICME)
Pages: 53-56
Number of pages: 4
Volume: 1
Publication status: Published - 2004
Event: 2004 IEEE International Conference on Multimedia and Expo (ICME) - Taipei, Taiwan
Duration: 2004 Jun 27 - 2004 Jun 30

Other

Other: 2004 IEEE International Conference on Multimedia and Expo (ICME)
Country: Taiwan
City: Taipei
Period: 04-06-27 - 04-06-30

All Science Journal Classification (ASJC) codes

  • Engineering(all)
