Speech emotion verification using emotion variance modeling and discriminant scale-frequency maps

Jia Ching Wang, Yu Hao Chin, Bo Wei Chen, Chang Hong Lin, Chung-Hsien Wu

Research output: Contribution to journal › Article

7 Citations (Scopus)

Abstract

This paper develops an approach to speech-based emotion verification built on emotion variance modeling and discriminant scale-frequency maps. The proposed system consists of two parts: feature extraction and emotion verification. In the first part, for each sound frame, important atoms are selected from a Gabor dictionary by the matching pursuit algorithm. The scale, frequency, and magnitude of these atoms are used to construct a nonuniform scale-frequency map, which supports auditory discriminability through the analysis of critical bands. Next, sparse representation transforms the scale-frequency maps into sparse coefficients to enhance robustness against emotion variance and improve error tolerance. In the second part, emotion verification, two scores are calculated. A novel sparse representation verification approach based on Gaussian-modeled residual errors generates the first score from the sparse coefficients; such a classifier can minimize emotion variance and improve recognition accuracy. The second score is the emotional agreement index (EAI) computed from the same coefficients. The two scores are combined to obtain the final detection result. Experiments conducted on an emotional speech database indicate that the proposed approach achieves an average equal error rate (EER) as low as 6.61%. A comparison with other approaches shows that the proposed method outperforms them and confirms its feasibility.
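To make the pipeline in the abstract concrete, the following Python is a minimal, illustrative sketch (not the authors' implementation) of three of its ingredients: per-frame matching pursuit over a Gabor dictionary, accumulation of the selected atoms' scales and frequencies into a scale-frequency map, and a residual-error-based verification score. The frame length, the scale and frequency grids, the number of selected atoms, and the use of scikit-learn's OMP solver are all assumptions made for the example, not the paper's settings.

```python
# Illustrative sketch only (not the authors' code): matching pursuit over a
# Gabor dictionary, a per-frame scale-frequency map, and a residual-based score.
import numpy as np
from sklearn.linear_model import orthogonal_mp

FRAME_LEN = 256                              # samples per analysis frame (assumed)
SCALES = 2 ** np.arange(4, 9)                # Gabor atom scales 16..256 (assumed)
FREQS = np.linspace(0.01, 0.45, 20)          # normalized centre frequencies (assumed)

def gabor_atom(scale, freq, n=FRAME_LEN):
    """Unit-norm Gabor atom: Gaussian envelope modulated by a cosine carrier."""
    t = np.arange(n) - n / 2
    g = np.exp(-np.pi * (t / scale) ** 2) * np.cos(2 * np.pi * freq * t)
    return g / (np.linalg.norm(g) + 1e-12)

# Build the dictionary and remember each atom's (scale index, frequency index).
atoms, index = [], []
for si, s in enumerate(SCALES):
    for fi, f in enumerate(FREQS):
        atoms.append(gabor_atom(s, f))
        index.append((si, fi))
D_GABOR = np.stack(atoms, axis=1)            # shape (FRAME_LEN, n_atoms)

def matching_pursuit(frame, n_picks=10):
    """Greedy matching pursuit: repeatedly pick the atom most correlated with
    the residual and subtract its contribution."""
    residual = frame.astype(float).copy()
    picks = []
    for _ in range(n_picks):
        corr = D_GABOR.T @ residual
        k = int(np.argmax(np.abs(corr)))
        picks.append((k, corr[k]))
        residual -= corr[k] * D_GABOR[:, k]
    return picks

def scale_frequency_map(frame, n_picks=10):
    """Accumulate atom magnitudes into a (scale x frequency) map for one frame."""
    sf_map = np.zeros((len(SCALES), len(FREQS)))
    for k, coef in matching_pursuit(frame, n_picks):
        si, fi = index[k]
        sf_map[si, fi] += abs(coef)
    return sf_map

def class_residual(D_class, x, n_nonzero=5):
    """Sparse-code a vectorized feature map over one emotion class's training
    dictionary and return the reconstruction residual."""
    coef = orthogonal_mp(D_class, x, n_nonzero_coefs=n_nonzero)
    return float(np.linalg.norm(x - D_class @ coef))

def gaussian_residual_score(residual, mu, sigma):
    """Log-likelihood of the residual under a Gaussian fitted to training
    residuals of the claimed emotion; higher means a better match."""
    return float(-0.5 * ((residual - mu) / sigma) ** 2
                 - np.log(sigma * np.sqrt(2.0 * np.pi)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.standard_normal(FRAME_LEN)           # stand-in for one speech frame
    feat = scale_frequency_map(frame).ravel()        # vectorized scale-frequency map
    D_class = rng.standard_normal((feat.size, 40))   # toy per-class dictionary
    r = class_residual(D_class, feat)
    print(gaussian_residual_score(r, mu=1.0, sigma=0.5))
```

In the paper, this residual-based score is fused with the emotional agreement index (EAI) computed from the same sparse coefficients to produce the final verification decision; the fusion rule and the Gaussian fitting procedure are not shown in this sketch.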

Original language: English
Article number: 7114224
Pages (from-to): 1552-1562
Number of pages: 11
Journal: IEEE Transactions on Audio, Speech and Language Processing
Volume: 23
Issue number: 10
DOIs: 10.1109/TASLP.2015.2438535
Publication status: Published - 2015 Oct 1


All Science Journal Classification (ASJC) codes

  • Acoustics and Ultrasonics
  • Electrical and Electronic Engineering

Cite this


Speech emotion verification using emotion variance modeling and discriminant scale-frequency maps. / Wang, Jia Ching; Chin, Yu Hao; Chen, Bo Wei; Lin, Chang Hong; Wu, Chung-Hsien.

In: IEEE Transactions on Audio, Speech and Language Processing, Vol. 23, No. 10, 7114224, 01.10.2015, p. 1552-1562.

