Speech emotion verification using emotion variance modeling and discriminant scale-frequency maps

Jia Ching Wang, Yu Hao Chin, Bo Wei Chen, Chang Hong Lin, Chung Hsien Wu

Research output: Article

7 citations (Scopus)

Abstract

This paper develops an approach to speech-based emotion verification based on emotion variance modeling and discriminant scale-frequency maps. The proposed system consists of two parts: feature extraction and emotion verification. In the first part, for each sound frame, important atoms from the Gabor dictionary are selected by using the matching pursuit algorithm. The scale, frequency, and magnitude of the atoms are extracted to construct a nonuniform scale-frequency map, which supports auditory discriminability by the analysis of critical bands. Next, sparse representation is used to transform the scale-frequency maps into sparse coefficients, enhancing robustness against emotion variance and improving error tolerance. In the second part, emotion verification, two scores are calculated. A novel sparse representation verification approach based on Gaussian-modeled residual errors is proposed to generate the first score from the sparse coefficients. Such a classifier can minimize emotion variance and improve recognition accuracy. The second score is calculated by using the emotional agreement index (EAI) from the same coefficients. These two scores are combined to obtain the final detection result. Experiments conducted on a database of emotional speech indicate that the proposed approach achieves an average equal error rate (EER) as low as 6.61%. A comparison among different approaches reveals that the proposed method is superior to the others and confirms its feasibility.
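The feature-extraction stage described above lends itself to a short illustration. The Python sketch below shows greedy matching pursuit over a Gabor dictionary and the accumulation of each selected atom's scale, frequency, and magnitude into a scale-frequency map. The dictionary parameterization, the scale and frequency grids, the frame length, and helper names such as gabor_atom and scale_frequency_map are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def gabor_atom(length, scale, freq):
    """Illustrative unit-norm Gabor atom: a Gaussian envelope modulated by a cosine.
    (The paper's exact dictionary parameterization is not reproduced here.)"""
    n = np.arange(length)
    g = np.exp(-np.pi * ((n - length / 2) / scale) ** 2) * np.cos(2 * np.pi * freq * n)
    return g / (np.linalg.norm(g) + 1e-12)

def build_dictionary(length, scales, freqs):
    """One atom per (scale, frequency) pair; each row of the returned matrix is an atom."""
    atoms, params = [], []
    for si, s in enumerate(scales):
        for fj, f in enumerate(freqs):
            atoms.append(gabor_atom(length, s, f))
            params.append((si, fj))
    return np.vstack(atoms), params

def matching_pursuit(frame, dictionary, n_atoms=20):
    """Greedy matching pursuit: repeatedly pick the atom most correlated with the
    current residual, record its index and coefficient, and subtract its contribution."""
    residual = frame.astype(float).copy()
    picks = []
    for _ in range(n_atoms):
        corr = dictionary @ residual            # inner products with every atom
        k = int(np.argmax(np.abs(corr)))        # best-matching atom
        picks.append((k, float(corr[k])))
        residual = residual - corr[k] * dictionary[k]
    return picks

def scale_frequency_map(picks, params, n_scales, n_freqs):
    """Accumulate the magnitude of each selected atom into a (scale, frequency) grid,
    giving one scale-frequency map per frame."""
    sf_map = np.zeros((n_scales, n_freqs))
    for k, coeff in picks:
        si, fj = params[k]
        sf_map[si, fj] += abs(coeff)
    return sf_map

# Illustrative usage on a synthetic frame (all sizes below are assumptions).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame_len = 256
    scales = [8, 16, 32, 64, 128]
    freqs = np.linspace(0.01, 0.45, 24)          # normalized frequencies
    D, params = build_dictionary(frame_len, scales, freqs)
    frame = rng.standard_normal(frame_len)
    picks = matching_pursuit(frame, D, n_atoms=20)
    sf = scale_frequency_map(picks, params, len(scales), len(freqs))
    print(sf.shape)                              # (5, 24)
```

For the verification stage, the paper's Gaussian-modeled residual-error score is not reproduced here; the minimal sketch below only illustrates the general idea under stated assumptions: an ordinary least-squares fit stands in for the sparse-coding step, and per-class reconstruction residuals observed on training data are summarized by a univariate Gaussian whose log-likelihood serves as the score. In the paper, this residual-based score is fused with the EAI score to reach the final decision; the fusion rule is not shown here.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class ResidualGaussian:
    """Gaussian summary (mean, std) of one class's reconstruction residuals,
    assumed to be fit on training data."""
    mean: float
    std: float

def class_residual(x, class_dictionary):
    """Residual error of reconstructing x from one class's dictionary (atoms as rows).
    A least-squares fit stands in here for the sparse-coding step."""
    coeffs, *_ = np.linalg.lstsq(class_dictionary.T, x, rcond=None)
    return float(np.linalg.norm(x - class_dictionary.T @ coeffs))

def gaussian_residual_score(residual, model):
    """Log-likelihood of the observed residual under the class's Gaussian model;
    higher means the sample is more consistent with that class."""
    z = (residual - model.mean) / model.std
    return float(-0.5 * z * z - np.log(model.std * np.sqrt(2.0 * np.pi)))
```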

Original language: English
Article number: 7114224
Pages (from-to): 1552-1562
Number of pages: 11
Journal: IEEE Transactions on Audio, Speech and Language Processing
Volume: 23
Issue number: 10
DOI: 10.1109/TASLP.2015.2438535
Publication status: Published - 1 Oct 2015

Fingerprint

  • Emotions
  • Atoms
  • Coefficients
  • Glossaries
  • Feature extraction
  • Dictionaries
  • Classifiers
  • Acoustic waves
  • Pattern recognition
  • Acoustics
  • Experiments

All Science Journal Classification (ASJC) codes

  • Acoustics and Ultrasonics
  • Electrical and Electronic Engineering

Cite this

Wang, Jia Ching; Chin, Yu Hao; Chen, Bo Wei; Lin, Chang Hong; Wu, Chung Hsien. / Speech emotion verification using emotion variance modeling and discriminant scale-frequency maps. In: IEEE Transactions on Audio, Speech and Language Processing. 2015; Vol. 23, No. 10. pp. 1552-1562.

