Researchers use neural network modeling to establish why autistic people have altered facial expression recognition
For the first time, researchers at Tohoku University have used neural network modeling, a computational approach to simulating brain processes, to better understand why people with autism spectrum disorder (ASD) often have difficulty interpreting facial expressions.
First published online in Scientific Reports, the study explored the still poorly understood processes that underlie difficulty reading facial expressions among people with ASD.
“This study proposes a system-level explanation for understanding the facial emotion recognition process and its alteration in ASD from the perspective of predictive processing theory,” the authors explained in their journal article.
“Predictive processing for facial emotion recognition was implemented as a hierarchical recurrent neural network (RNN). The RNNs were trained to predict the dynamic changes of facial expression movies for six basic emotions without explicit emotion labels as a developmental learning process, and were evaluated by the performance of recognizing unseen facial expressions for the test phase.”
“In addition, the causal relationship between the network characteristics assumed in ASD and ASD-like cognition was investigated.”
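The paper's code is not reproduced in this article, but the quoted setup, a hierarchical recurrent network that predicts the next frame of a facial-expression sequence rather than classifying emotion labels, can be sketched minimally. All dimensions, weight names, and the two-level fast/slow structure below are illustrative assumptions, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): 10 facial features per frame,
# a fast lower level and a slow higher level, over a 30-frame "movie".
D_IN, D_LOW, D_HIGH, T = 10, 16, 8, 30

# Randomly initialised weights for an untrained two-level Elman-style RNN.
W_in   = rng.normal(0, 0.1, (D_LOW, D_IN))
W_low  = rng.normal(0, 0.1, (D_LOW, D_LOW))
W_up   = rng.normal(0, 0.1, (D_HIGH, D_LOW))
W_high = rng.normal(0, 0.1, (D_HIGH, D_HIGH))
W_down = rng.normal(0, 0.1, (D_LOW, D_HIGH))
W_out  = rng.normal(0, 0.1, (D_IN, D_LOW))

def predict_sequence(frames):
    """Run the hierarchy over a frame sequence, emitting a prediction of
    each next frame; training would minimise the resulting errors."""
    h_low, h_high = np.zeros(D_LOW), np.zeros(D_HIGH)
    preds = []
    for x in frames[:-1]:
        # Lower level combines sensory input with top-down context.
        h_low = np.tanh(W_in @ x + W_low @ h_low + W_down @ h_high)
        # Higher level abstracts over the lower level's dynamics.
        h_high = np.tanh(W_up @ h_low + W_high @ h_high)
        preds.append(W_out @ h_low)  # prediction of the next frame
    return np.array(preds)

# A synthetic smooth "expression movie": features drifting over time.
frames = np.array([np.sin(np.linspace(0, 1, D_IN) + 0.1 * t)
                   for t in range(T)])

preds = predict_sequence(frames)
errors = frames[1:] - preds          # per-frame prediction errors
mse = float((errors ** 2).mean())
print(preds.shape, mse >= 0.0)
```

The key point mirrored here is that no emotion labels appear anywhere: the learning signal is purely the error between predicted and observed frames, which is what makes the setup a developmental, self-supervised learning process.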
Based on the findings, the researchers showed that neural network modeling grounded in predictive processing theory can help explain how facial expressions are recognized and how that process is altered in ASD.
“Consistent with previous findings from human behavioral studies, an excessive precision estimation of noisy details underlies this ASD-like cognition,” the authors affirmed in the article.
“These results support the idea that impaired facial emotion recognition in ASD can be explained by altered predictive processing, and provide possible insight for investigating the neurophysiological basis of affective contact.”
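The "excessive precision estimation of noisy details" the authors describe can be illustrated with a toy precision-weighted update, the core operation of predictive processing. This is a simplified sketch, not the study's model; the signal, precisions, and update rule are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# A constant underlying signal observed through heavy sensory noise.
true_value = 1.0
obs = true_value + rng.normal(0, 0.5, 200)

def filter_estimate(observations, sensory_precision, prior_precision=1.0):
    """Precision-weighted updating: each step moves the estimate toward
    the observation in proportion to relative sensory precision."""
    gain = sensory_precision / (sensory_precision + prior_precision)
    est, trace = 0.0, []
    for o in observations:
        est = est + gain * (o - est)  # precision-weighted prediction error
        trace.append(est)
    return np.array(trace)

balanced  = filter_estimate(obs, sensory_precision=0.5)
excessive = filter_estimate(obs, sensory_precision=50.0)

# With excessive sensory precision the estimate tracks the noisy samples
# almost one-to-one, so it fluctuates far more around the true value
# instead of settling on the stable underlying signal.
var_balanced  = float(np.var(balanced[50:]))
var_excessive = float(np.var(excessive[50:]))
print(var_balanced < var_excessive)  # → True
```

The intuition matches the quoted finding: if sensory noise is assigned too much precision, prediction errors caused by irrelevant detail dominate the updates, and the system "chases" noise rather than extracting the stable emotional expression behind it.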