Imitation of facial gestures requires the cognitive system to equate the seen-but-unfelt with the felt-but-unseen. Rival accounts propose that this "correspondence problem" is solved either by an innate supramodal mechanism (the active intermodal-mapping, or AIM, model) or by learned, direct links between the corresponding visual and proprioceptive representations of actions (the associative sequence-learning, or ASL, model). Two experiments tested these alternative models using a new technology that permits, for the first time, the automated objective measurement of imitative accuracy. Euclidean distances, measured in image-derived principal component space, were used to quantify the accuracy of adult participants' attempts to replicate their own facial expressions before, during, and after training. Results supported the ASL model. In Experiment 1, participants reliant solely on proprioceptive feedback got progressively worse at self-imitation. In Experiment 2, participants who received visual feedback that did not match their execution of facial gestures also failed to improve. However, in both experiments, groups that received visual feedback contingent on their execution of facial gestures showed progressive improvement.
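The abstract's accuracy measure, Euclidean distance between expressions in an image-derived principal-component space, can be sketched as follows. This is a minimal illustration only, not the authors' actual pipeline: the function names, the toy data, and the choice of five components are assumptions, and PCA is fitted here with a plain SVD on flattened images.

```python
import numpy as np

def fit_pca(images, n_components=5):
    """Fit PCA on flattened images (rows = samples) via SVD.

    Returns the mean image and the top principal axes.
    """
    mean = images.mean(axis=0)
    centered = images - mean
    # Rows of vt are the principal axes, ordered by variance explained.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(image, mean, components):
    """Project a single flattened image into principal-component space."""
    return components @ (image - mean)

def imitation_error(target_img, attempt_img, mean, components):
    """Euclidean distance between target and attempt in PC space.

    Smaller distances correspond to more accurate imitation.
    """
    t = project(target_img, mean, components)
    a = project(attempt_img, mean, components)
    return float(np.linalg.norm(t - a))

# Toy data standing in for face images: 20 random 64-pixel vectors.
rng = np.random.default_rng(0)
imgs = rng.normal(size=(20, 64))
mean, comps = fit_pca(imgs, n_components=5)

# An attempt identical to its target scores zero error.
print(imitation_error(imgs[0], imgs[0], mean, comps))
```

Under this scheme, tracking the distance between a participant's attempt and the target expression before, during, and after training gives the kind of objective accuracy trajectory the experiments report.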

Original publication

Journal article: Psychol Sci

Publication Date

Pages: 93-98

Keywords: Adult, Association, Body Image, Facial Expression, Feedback, Psychological, Female, Humans, Imitative Behavior, Male, Models, Psychological, Practice (Psychology), Principal Component Analysis, Proprioception, Self Concept, Visual Perception