Abstract
This paper develops an approach to speech-based emotion verification based on emotion variance modeling and discriminant scale-frequency maps. The proposed system consists of two parts: feature extraction and emotion verification. In the first part, for each sound frame, salient atoms are selected from a Gabor dictionary using the matching pursuit algorithm. The scale, frequency, and magnitude of the selected atoms are extracted to construct a nonuniform scale-frequency map, which supports auditory discriminability through critical-band analysis. Next, sparse representation is used to transform the scale-frequency maps into sparse coefficients, enhancing robustness against emotion variance and improving error tolerance. In the second part, emotion verification, two scores are calculated. A novel sparse representation verification approach based on Gaussian-modeled residual errors is proposed to generate the first score from the sparse coefficients. This classifier reduces the effect of emotion variance and improves recognition accuracy. The second score is calculated from the same coefficients using the emotional agreement index (EAI). The two scores are then combined to obtain the final detection result. Experiments conducted on an emotional speech database indicate that the proposed approach achieves an average equal error rate (EER) as low as 6.61%. A comparison with competing approaches shows that the proposed method outperforms them, confirming its feasibility.
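To make the feature-extraction step concrete, the following Python sketch illustrates matching pursuit over a Gabor dictionary: at each iteration, the atom most correlated with the current residual is selected, its projection is subtracted, and its (scale, frequency, magnitude) triple is recorded. The dictionary parameters (scales, carrier frequencies, mid-frame atom placement) are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

def gabor_atom(center, scale, freq, length):
    """Unit-norm real Gabor atom: a Gaussian window modulating a cosine carrier."""
    t = np.arange(length)
    g = np.exp(-np.pi * ((t - center) / scale) ** 2) * np.cos(2 * np.pi * freq * t)
    return g / np.linalg.norm(g)

def matching_pursuit(frame, scales, freqs, n_atoms=10):
    """Greedy matching pursuit: repeatedly pick the dictionary atom with the
    largest inner product against the residual, subtract its projection, and
    record the (scale, frequency, magnitude) triple of the selected atom."""
    length = len(frame)
    # Small illustrative dictionary; atoms are centered mid-frame for brevity.
    atoms = [(s, f, gabor_atom(length // 2, s, f, length))
             for s in scales for f in freqs]
    residual = np.asarray(frame, dtype=float).copy()
    triples = []
    for _ in range(n_atoms):
        corrs = np.array([residual @ a for (_, _, a) in atoms])
        best = int(np.argmax(np.abs(corrs)))
        s, f, a = atoms[best]
        residual -= corrs[best] * a         # remove the selected component
        triples.append((s, f, abs(corrs[best])))
    return triples                           # entries of the scale-frequency map
```

In the paper, these triples populate a scale-frequency map whose nonuniform bins follow critical-band spacing; the sketch simply returns them as a list. The verification stage can be sketched in a similar spirit. Assuming the sparse coefficients have already been computed over a class-labeled dictionary (the abstract does not specify the solver or the score-fusion rule), a per-class reconstruction residual can be scored under two Gaussian models, one for genuine and one for impostor residuals; the Gaussian parameters below are hypothetical placeholders, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

def residual_score(y, D, coeffs, labels, target):
    """Reconstruction residual using only the coefficients of the target class:
    r = ||y - D * delta_target(x)||_2, a standard sparse-representation score."""
    x_t = np.where(labels == target, coeffs, 0.0)
    return np.linalg.norm(y - D @ x_t)

def gaussian_verification_score(r, mu_gen, sd_gen, mu_imp, sd_imp):
    """Log-likelihood ratio of the residual under genuine/impostor Gaussian
    models; a positive score favors the claimed emotion."""
    return norm.logpdf(r, mu_gen, sd_gen) - norm.logpdf(r, mu_imp, sd_imp)
```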
Original language | English |
---|---|
Article number | 7114224 |
Pages (from-to) | 1552-1562 |
Number of pages | 11 |
Journal | IEEE Transactions on Audio, Speech and Language Processing |
Volume | 23 |
Issue number | 10 |
Publication status | Published - 2015 Oct 1 |
All Science Journal Classification (ASJC) codes
- Acoustics and Ultrasonics
- Electrical and Electronic Engineering