TY - GEN
T1 - BFI-based speaker personality perception using acoustic-prosodic features
AU - Liu, Chia Jui
AU - Wu, Chung Hsien
AU - Chiu, Yu Hsien
PY - 2013
Y1 - 2013
N2 - This paper presents an approach to the automatic prediction of the traits that listeners attribute to a speaker they have never heard before. In previous research, the Big Five Inventory (BFI), one of the most widely used personality questionnaires, has been adopted for personality assessment. Building on the BFI, this study adopts an artificial neural network (ANN) to project an input speech segment onto the BFI space based on acoustic-prosodic features. Personality traits are then predicted from the BFI scores estimated by the ANN. For performance evaluation, two versions of the BFI (a complete questionnaire and a simplified version) were adopted. Experiments were performed on a corpus of 535 speech samples assessed in terms of personality traits by experienced subjects. The results show that the proposed method for trait prediction is efficient and effective, achieving a prediction accuracy of 70%.
UR - http://www.scopus.com/inward/record.url?scp=84893265207&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84893265207&partnerID=8YFLogxK
U2 - 10.1109/APSIPA.2013.6694234
DO - 10.1109/APSIPA.2013.6694234
M3 - Conference contribution
AN - SCOPUS:84893265207
SN - 9789869000604
T3 - 2013 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA 2013
BT - 2013 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA 2013
T2 - 2013 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA 2013
Y2 - 29 October 2013 through 1 November 2013
ER -