Coupled HMM-based multimodal fusion for mood disorder detection through elicited audio–visual signals

Tsung Hsien Yang, Chung Hsien Wu, Kun Yi Huang, Ming Hsiang Su

Research output: Contribution to journal › Article

12 Citations (Scopus)

Abstract

Mood disorders encompass a wide array of mood issues, including unipolar depression (UD) and bipolar disorder (BD). In the diagnostic evaluation of outpatients with mood disorders, a high percentage of BD patients are initially misdiagnosed as having UD. Accurately distinguishing BD from UD is crucial for a correct and early diagnosis, leading to improvements in treatment and the course of the illness. In this study, emotion-eliciting videos are first used to elicit the patients’ emotions. After watching each video clip, the patients’ facial expressions and speech responses are collected during an interview with a clinician. For mood disorder detection, facial action unit (AU) profiles and speech emotion profiles (EPs) are obtained using support vector machines (SVMs) built on facial and speech features, which are adapted from two selected databases using a denoising autoencoder-based method. Finally, a Coupled Hidden Markov Model (CHMM)-based fusion method is proposed to characterize the temporal information; the CHMM is modified to fuse the AU and EP sequences with respect to six emotional videos. Experimental results show the advantage and efficacy of the CHMM-based fusion approach for mood disorder detection.
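
To make the fusion step concrete, the following is a minimal Python sketch of a two-chain coupled HMM that jointly scores a facial AU-profile stream and a speech EP stream, with one model per diagnostic class and classification by likelihood comparison. Everything here is a hypothetical illustration under simplifying assumptions: the state counts, profile dimensions, Gaussian emissions, and random parameters are not the authors' actual configuration, and parameter estimation (e.g., via EM) is omitted.

```python
import numpy as np

def logsumexp(a):
    """Numerically stable log(sum(exp(a))) over all elements."""
    m = np.max(a)
    return m + np.log(np.sum(np.exp(a - m)))

class CoupledHMM:
    """Two-chain coupled HMM: the next state of each chain depends on the
    previous states of *both* chains, which is what lets the model capture
    temporal interaction between the AU and EP streams."""

    def __init__(self, n_states, d_au, d_ep, rng):
        self.n = n_states
        # Initial state distributions for the AU chain and the EP chain.
        self.pi_au = np.full(n_states, 1.0 / n_states)
        self.pi_ep = np.full(n_states, 1.0 / n_states)
        # Coupled transitions: A_au[i, j, k] = P(next AU state = k | prev AU = i, prev EP = j).
        self.A_au = rng.dirichlet(np.ones(n_states), size=(n_states, n_states))
        self.A_ep = rng.dirichlet(np.ones(n_states), size=(n_states, n_states))
        # Per-state Gaussian emission means (identity covariance for simplicity).
        self.mu_au = rng.normal(size=(n_states, d_au))
        self.mu_ep = rng.normal(size=(n_states, d_ep))

    @staticmethod
    def _log_emis(mu, x):
        # log N(x; mu_s, I) for every state s, up to a constant shared by all states.
        return -0.5 * np.sum((mu - x) ** 2, axis=1)

    def log_likelihood(self, au_seq, ep_seq):
        """Forward algorithm over the joint state space (AU state, EP state)."""
        e_au = self._log_emis(self.mu_au, au_seq[0])
        e_ep = self._log_emis(self.mu_ep, ep_seq[0])
        alpha = (np.log(self.pi_au)[:, None] + np.log(self.pi_ep)[None, :]
                 + e_au[:, None] + e_ep[None, :])
        for t in range(1, len(au_seq)):
            e_au = self._log_emis(self.mu_au, au_seq[t])
            e_ep = self._log_emis(self.mu_ep, ep_seq[t])
            new_alpha = np.empty((self.n, self.n))
            for i in range(self.n):
                for j in range(self.n):
                    # Sum over all joint previous states via the coupled transitions.
                    new_alpha[i, j] = logsumexp(
                        alpha + np.log(self.A_au[:, :, i]) + np.log(self.A_ep[:, :, j])
                    ) + e_au[i] + e_ep[j]
            alpha = new_alpha
        return logsumexp(alpha)

# Hypothetical usage: one CHMM per diagnostic class; the higher-likelihood
# model decides the label. Parameters here are random placeholders; in
# practice they would be trained on labeled AU/EP sequences.
rng = np.random.default_rng(0)
d_au, d_ep = 8, 7                      # hypothetical profile dimensions
chmm_bd = CoupledHMM(3, d_au, d_ep, rng)
chmm_ud = CoupledHMM(3, d_au, d_ep, rng)
au_seq = rng.normal(size=(20, d_au))   # per-frame AU profile sequence
ep_seq = rng.normal(size=(20, d_ep))   # time-aligned EP sequence
label = ("BD" if chmm_bd.log_likelihood(au_seq, ep_seq)
                > chmm_ud.log_likelihood(au_seq, ep_seq) else "UD")
print("Predicted class:", label)
```

The forward pass runs over the joint state space, so its cost grows with the fourth power of the per-chain state count; that is acceptable for the small state counts typical of such models.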

Original language: English
Pages (from-to): 895-906
Number of pages: 12
Journal: Journal of Ambient Intelligence and Humanized Computing
Volume: 8
Issue number: 6
DOIs
Publication status: Published - 2017 Nov 1

All Science Journal Classification (ASJC) codes

  • Computer Science (all)
