TY - JOUR
T1 - Speech Emotion Recognition Considering Nonverbal Vocalization in Affective Conversations
AU - Hsu, Jia Hao
AU - Su, Ming Hsiang
AU - Wu, Chung Hsien
AU - Chen, Yi Hsuan
N1 - Funding Information:
Manuscript received April 24, 2020; revised August 31, 2020 and January 29, 2021; accepted April 16, 2021. Date of publication April 30, 2021; date of current version May 17, 2021. This work was supported by the Ministry of Science and Technology, Taiwan, under Contract MOST 108-2221-E-006-103-MY3. The Associate Editor coordinating the review of this manuscript and approving it for publication was Dr. Kathy Jackson. (Corresponding author: Chung-Hsien Wu.) The authors are with the Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan 70101, Taiwan (e-mail: [email protected]; [email protected]; [email protected]; cyhkelvin0530@gmail.com).
Publisher Copyright:
© 2014 IEEE.
PY - 2021
Y1 - 2021
N2 - In real-life communication, nonverbal vocalizations such as laughter, cries, or other emotional interjections within an utterance play an important role in emotion expression. In previous studies, only a few emotion recognition systems have considered nonverbal vocalization, which naturally occurs in daily conversation. In this work, both verbal and nonverbal sounds within an utterance are considered for emotion recognition in real-life affective conversations. First, a support vector machine (SVM)-based verbal and nonverbal sound detector is developed. A prosodic phrase auto-tagger is then employed to extract the verbal/nonverbal sound segments. For each segment, the emotion and sound feature embeddings are extracted using deep residual networks (ResNets). Finally, the sequence of extracted feature embeddings for the entire dialogue turn is fed to an attentive long short-term memory (LSTM)-based sequence-to-sequence model to output an emotion sequence as the recognition result. The NNIME corpus (the NTHU-NTUA Chinese Interactive Multimodal Emotion Corpus), which contains both verbal and nonverbal sounds, was adopted for system training and testing. A total of 4766 single-speaker dialogue turns from the audio data of the NNIME corpus were selected for evaluation. The experimental results showed that nonverbal vocalization is helpful for speech emotion recognition. The proposed method based on decision-level fusion achieved an accuracy of 61.92% for speech emotion recognition, outperforming traditional methods as well as the feature-level and model-level fusion approaches.
AB - In real-life communication, nonverbal vocalizations such as laughter, cries, or other emotional interjections within an utterance play an important role in emotion expression. In previous studies, only a few emotion recognition systems have considered nonverbal vocalization, which naturally occurs in daily conversation. In this work, both verbal and nonverbal sounds within an utterance are considered for emotion recognition in real-life affective conversations. First, a support vector machine (SVM)-based verbal and nonverbal sound detector is developed. A prosodic phrase auto-tagger is then employed to extract the verbal/nonverbal sound segments. For each segment, the emotion and sound feature embeddings are extracted using deep residual networks (ResNets). Finally, the sequence of extracted feature embeddings for the entire dialogue turn is fed to an attentive long short-term memory (LSTM)-based sequence-to-sequence model to output an emotion sequence as the recognition result. The NNIME corpus (the NTHU-NTUA Chinese Interactive Multimodal Emotion Corpus), which contains both verbal and nonverbal sounds, was adopted for system training and testing. A total of 4766 single-speaker dialogue turns from the audio data of the NNIME corpus were selected for evaluation. The experimental results showed that nonverbal vocalization is helpful for speech emotion recognition. The proposed method based on decision-level fusion achieved an accuracy of 61.92% for speech emotion recognition, outperforming traditional methods as well as the feature-level and model-level fusion approaches.
UR - http://www.scopus.com/inward/record.url?scp=85105114915&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85105114915&partnerID=8YFLogxK
U2 - 10.1109/TASLP.2021.3076364
DO - 10.1109/TASLP.2021.3076364
M3 - Article
AN - SCOPUS:85105114915
SN - 2329-9290
VL - 29
SP - 1675
EP - 1686
JO - IEEE/ACM Transactions on Audio, Speech, and Language Processing
JF - IEEE/ACM Transactions on Audio, Speech, and Language Processing
M1 - 9420285
ER -