Automated face analysis by feature point tracking has high concurrent validity with manual FACS coding

Jeffrey F. Cohn, Adena J. Zlochower, James Jenn-Jier Lien, Takeo Kanade

Research output: Contribution to journal › Article

170 Citations (Scopus)

Abstract

The face is a rich source of information about human behavior. Available methods for coding facial displays, however, are human-observer dependent, labor intensive, and difficult to standardize. To enable rigorous and efficient quantitative measurement of facial displays, we have developed an automated method of facial display analysis. In this report, we compare the results of this automated system with those of manual FACS (Facial Action Coding System, Ekman and Friesen, 1978a) coding. One hundred university students were videotaped while performing a series of facial displays. The image sequences were coded from videotape by certified FACS coders. Fifteen action units and action unit combinations that occurred a minimum of 25 times were selected for automated analysis. Facial features were automatically tracked in digitized image sequences using a hierarchical algorithm for estimating optical flow. The measurements were normalized for variation in position, orientation, and scale. The image sequences were randomly divided into a training set and a cross-validation set, and discriminant function analyses were conducted on the feature point measurements. In the training set, average agreement with manual FACS coding was 92% or higher for action units in the brow, eye, and mouth regions. In the cross-validation set, average agreement was 91%, 88%, and 81% for action units in the brow, eye, and mouth regions, respectively. Automated face analysis by feature point tracking demonstrated high concurrent validity with manual FACS coding.
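The abstract's normalization step (removing variation in position, orientation, and scale before classifying feature-point measurements) can be illustrated with a Procrustes-style similarity alignment. This is a minimal NumPy sketch of that idea, not the authors' actual implementation; the function name and the sample shapes are hypothetical.

```python
import numpy as np

def normalize_points(points, reference):
    """Align 2-D feature points to a reference shape, removing
    translation, rotation, and scale (orthogonal Procrustes)."""
    # Remove translation: center both point sets at the origin.
    p = points - points.mean(axis=0)
    r = reference - reference.mean(axis=0)
    # Remove scale: divide each set by its Frobenius norm.
    p = p / np.sqrt((p ** 2).sum())
    r = r / np.sqrt((r ** 2).sum())
    # Remove rotation: the orthogonal matrix best mapping p onto r
    # comes from the SVD of the cross-covariance p.T @ r.
    u, _, vt = np.linalg.svd(p.T @ r)
    return p @ (u @ vt)

# Hypothetical feature points, plus a rotated, scaled, shifted copy.
shape = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.0, 1.0], [1.0, 0.2]])
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
moved = shape @ rot.T * 2.5 + np.array([3.0, -1.0])

# After normalization, both versions reduce to the same canonical shape.
aligned = normalize_points(moved, shape)
canonical = normalize_points(shape, shape)
print(np.allclose(aligned, canonical, atol=1e-8))
```

Once pose and scale are factored out in this way, the remaining point displacements reflect facial motion alone, which is what a discriminant function analysis would then classify.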

Original language: English
Pages (from-to): 35-43
Number of pages: 9
Journal: Psychophysiology
Volume: 36
Issue number: 1
DOI: 10.1017/S0048577299971184
Publication status: Published - 1999 Jan 1

Fingerprint

Mouth
Videotape Recording
Discriminant Analysis
Students

All Science Journal Classification (ASJC) codes

  • Neuroscience(all)
  • Neuropsychology and Physiological Psychology
  • Experimental and Cognitive Psychology
  • Endocrine and Autonomic Systems
  • Developmental Neuroscience
  • Cognitive Neuroscience
  • Biological Psychiatry

Cite this

Cohn, Jeffrey F. ; Zlochower, Adena J. ; Lien, James Jenn-Jier ; Kanade, Takeo. / Automated face analysis by feature point tracking has high concurrent validity with manual FACS coding. In: Psychophysiology. 1999 ; Vol. 36, No. 1. pp. 35-43.
@article{4e173a5a7ea7455a9d1b66ab3742aaea,
title = "Automated face analysis by feature point tracking has high concurrent validity with manual FACS coding",
abstract = "The face is a rich source of information about human behavior. Available methods for coding facial displays, however, are human-observer dependent, labor intensive, and difficult to standardize. To enable rigorous and efficient quantitative measurement of facial displays, we have developed an automated method of facial display analysis. In this report, we compare the results of this automated system with those of manual FACS (Facial Action Coding System, Ekman and Friesen, 1978a) coding. One hundred university students were videotaped while performing a series of facial displays. The image sequences were coded from videotape by certified FACS coders. Fifteen action units and action unit combinations that occurred a minimum of 25 times were selected for automated analysis. Facial features were automatically tracked in digitized image sequences using a hierarchical algorithm for estimating optical flow. The measurements were normalized for variation in position, orientation, and scale. The image sequences were randomly divided into a training set and a cross-validation set, and discriminant function analyses were conducted on the feature point measurements. In the training set, average agreement with manual FACS coding was 92{\%} or higher for action units in the brow, eye, and mouth regions. In the cross-validation set, average agreement was 91{\%}, 88{\%}, and 81{\%} for action units in the brow, eye, and mouth regions, respectively. Automated face analysis by feature point tracking demonstrated high concurrent validity with manual FACS coding.",
author = "Cohn, {Jeffrey F.} and Zlochower, {Adena J.} and Lien, {James Jenn-Jier} and Takeo Kanade",
year = "1999",
month = "1",
day = "1",
doi = "10.1017/S0048577299971184",
language = "English",
volume = "36",
pages = "35--43",
journal = "Psychophysiology",
issn = "0048-5772",
publisher = "Wiley-Blackwell",
number = "1"
}

Automated face analysis by feature point tracking has high concurrent validity with manual FACS coding. / Cohn, Jeffrey F.; Zlochower, Adena J.; Lien, James Jenn-Jier; Kanade, Takeo.

In: Psychophysiology, Vol. 36, No. 1, 01.01.1999, p. 35-43.

Research output: Contribution to journal › Article

TY - JOUR

T1 - Automated face analysis by feature point tracking has high concurrent validity with manual FACS coding

AU - Cohn, Jeffrey F.

AU - Zlochower, Adena J.

AU - Lien, James Jenn-Jier

AU - Kanade, Takeo

PY - 1999/1/1

Y1 - 1999/1/1

N2 - The face is a rich source of information about human behavior. Available methods for coding facial displays, however, are human-observer dependent, labor intensive, and difficult to standardize. To enable rigorous and efficient quantitative measurement of facial displays, we have developed an automated method of facial display analysis. In this report, we compare the results of this automated system with those of manual FACS (Facial Action Coding System, Ekman and Friesen, 1978a) coding. One hundred university students were videotaped while performing a series of facial displays. The image sequences were coded from videotape by certified FACS coders. Fifteen action units and action unit combinations that occurred a minimum of 25 times were selected for automated analysis. Facial features were automatically tracked in digitized image sequences using a hierarchical algorithm for estimating optical flow. The measurements were normalized for variation in position, orientation, and scale. The image sequences were randomly divided into a training set and a cross-validation set, and discriminant function analyses were conducted on the feature point measurements. In the training set, average agreement with manual FACS coding was 92% or higher for action units in the brow, eye, and mouth regions. In the cross-validation set, average agreement was 91%, 88%, and 81% for action units in the brow, eye, and mouth regions, respectively. Automated face analysis by feature point tracking demonstrated high concurrent validity with manual FACS coding.

AB - The face is a rich source of information about human behavior. Available methods for coding facial displays, however, are human-observer dependent, labor intensive, and difficult to standardize. To enable rigorous and efficient quantitative measurement of facial displays, we have developed an automated method of facial display analysis. In this report, we compare the results of this automated system with those of manual FACS (Facial Action Coding System, Ekman and Friesen, 1978a) coding. One hundred university students were videotaped while performing a series of facial displays. The image sequences were coded from videotape by certified FACS coders. Fifteen action units and action unit combinations that occurred a minimum of 25 times were selected for automated analysis. Facial features were automatically tracked in digitized image sequences using a hierarchical algorithm for estimating optical flow. The measurements were normalized for variation in position, orientation, and scale. The image sequences were randomly divided into a training set and a cross-validation set, and discriminant function analyses were conducted on the feature point measurements. In the training set, average agreement with manual FACS coding was 92% or higher for action units in the brow, eye, and mouth regions. In the cross-validation set, average agreement was 91%, 88%, and 81% for action units in the brow, eye, and mouth regions, respectively. Automated face analysis by feature point tracking demonstrated high concurrent validity with manual FACS coding.

UR - http://www.scopus.com/inward/record.url?scp=0032924381&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0032924381&partnerID=8YFLogxK

U2 - 10.1017/S0048577299971184

DO - 10.1017/S0048577299971184

M3 - Article

C2 - 10098378

AN - SCOPUS:0032924381

VL - 36

SP - 35

EP - 43

JO - Psychophysiology

JF - Psychophysiology

SN - 0048-5772

IS - 1

ER -