TY - GEN
T1 - Subtly different facial expression recognition and expression intensity estimation
AU - Lien, James Jenn-Jier
AU - Cohn, Jeffrey F.
AU - Kanade, Takeo
AU - Li, Ching Chung
PY - 1998/12/1
Y1 - 1998/12/1
N2 - We have developed a computer vision system, including both facial feature extraction and recognition, that automatically discriminates among subtly different facial expressions. Expression classification is based on Facial Action Coding System (FACS) action units (AUs), and discrimination is performed using Hidden Markov Models (HMMs). Three methods are developed to extract facial expression information for automatic recognition. The first method is facial feature point tracking using a coarse-to-fine pyramid approach, which is sensitive to subtle feature motion and capable of handling large displacements with sub-pixel accuracy. The second method is dense flow tracking combined with principal component analysis (PCA), in which the facial motion information in each frame is compressed into a low-dimensional weight vector. The third method is high-gradient component (i.e., furrow) analysis in the spatio-temporal domain, which exploits the transient variation associated with facial expression. Upon extraction of the facial information, the non-rigid facial expression motion is separated from the rigid head motion component, and the face images are automatically aligned and normalized using an affine transformation. This system also provides expression intensity estimation, which has a significant effect on the meaning of the expression.
AB - We have developed a computer vision system, including both facial feature extraction and recognition, that automatically discriminates among subtly different facial expressions. Expression classification is based on Facial Action Coding System (FACS) action units (AUs), and discrimination is performed using Hidden Markov Models (HMMs). Three methods are developed to extract facial expression information for automatic recognition. The first method is facial feature point tracking using a coarse-to-fine pyramid approach, which is sensitive to subtle feature motion and capable of handling large displacements with sub-pixel accuracy. The second method is dense flow tracking combined with principal component analysis (PCA), in which the facial motion information in each frame is compressed into a low-dimensional weight vector. The third method is high-gradient component (i.e., furrow) analysis in the spatio-temporal domain, which exploits the transient variation associated with facial expression. Upon extraction of the facial information, the non-rigid facial expression motion is separated from the rigid head motion component, and the face images are automatically aligned and normalized using an affine transformation. This system also provides expression intensity estimation, which has a significant effect on the meaning of the expression.
UR - http://www.scopus.com/inward/record.url?scp=0032307508&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0032307508&partnerID=8YFLogxK
U2 - 10.1109/CVPR.1998.698704
DO - 10.1109/CVPR.1998.698704
M3 - Conference contribution
AN - SCOPUS:0032307508
SN - 0818684976
T3 - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
SP - 853
EP - 859
BT - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
T2 - Proceedings of the 1998 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Y2 - 23 June 1998 through 25 June 1998
ER -