Preliminary studies suggest there are differences in the facial expressions produced by autistic and non-autistic individuals. However, it is unclear what specifically is different, whether such differences remain after controlling for facial morphology and alexithymia, and whether production differences relate to perception differences. Therefore, we (1) comprehensively compared the spatiotemporal and kinematic properties of autistic and non-autistic expressions after controlling for these factors, and (2) examined the contribution of production-related variables to emotion perception. We used facial motion capture to record 2448 cued and 2448 spoken expressions of anger, happiness, and sadness from autistic and matched non-autistic adults. Subsequently, we extracted the activation and jerkiness of numerous facial landmarks across time, generating over 265 million datapoints. Participants also completed an emotion recognition task. Autistic participants relied more on the mouth, and less on the eyebrows, to signal anger than their non-autistic peers. For happiness, autistic participants showed a less exaggerated smile that also did not "reach the eyes." For sadness, autistic participants tended to produce a downturned expression by raising their upper lip more than their non-autistic peers. Alexithymia predicted less differentiated angry and happy expressions. Among non-autistic individuals, those who produced more precise spoken expressions had greater emotion recognition accuracy; no production-related factors contributed to autistic emotion recognition. This mismatch could explain why autistic people find it difficult to recognize non-autistic expressions, and vice versa: autistic and non-autistic faces may essentially be "speaking a different language" when conveying emotion.
Journal article
2026-01-18T00:00:00+00:00
alexithymia, autism, emotion, facial expression, social interaction