TY - JOUR
T1 - Augmented reality-based video-modeling storybook of nonverbal facial cues for children with autism spectrum disorder to improve their perceptions and judgments of facial expressions and emotions
AU - Chen, Chien Hsu
AU - Lee, I. Jui
AU - Lin, Ling Yi
N1 - Publisher Copyright:
© 2015 Elsevier Ltd. All rights reserved.
PY - 2016/2/1
Y1 - 2016/2/1
AB - Autism spectrum disorders (ASD) are characterized by a reduced ability to understand the emotions of other people. Increasing evidence indicates that children with ASD might not recognize or understand crucial nonverbal behaviors, which likely causes them to ignore nonverbal gestures and social cues, such as facial expressions, that usually aid social interaction. We used an augmented reality (AR)-based video-modeling (VM) storybook (ARVMS) to strengthen and attract the attention of children with ASD to nonverbal social cues, because they have difficulty adjusting and switching their attentional focus. In this research, AR serves multiple functions: it extends the social features of the story while also restricting attention to the most important parts of the videos. Evidence-based research shows that AR attracts the attention of children with ASD; however, few studies have combined AR with VM to train children with ASD to mimic facial expressions and emotions in order to improve their social skills. In addition, we used markerless natural tracking to teach the children to recognize patterns as they focused on the stable visual image printed in the storybook and then extended their attention to an animation of the story. After data had been collected in three phases (baseline, intervention, and maintenance), the results showed that the ARVMS intervention provided an augmented visual indicator that effectively attracted and maintained the attention of children with ASD to nonverbal social cues and helped them better understand the facial expressions and emotions of the storybook characters.
UR - http://www.scopus.com/inward/record.url?scp=84944097392&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84944097392&partnerID=8YFLogxK
DO - 10.1016/j.chb.2015.09.033
M3 - Article
AN - SCOPUS:84944097392
SN - 0747-5632
VL - 55
SP - 477
EP - 485
JO - Computers in Human Behavior
JF - Computers in Human Behavior
ER -