Error monitoring is crucial for inferring how controllable an environment is, and thus for estimating the value of engaging control processes (metacontrol). In this study, we use computational simulations with deep neural networks to investigate its behavioral and neural correlates. We trained both humans and deep reinforcement-learning (RL) agents to perform a reward-guided learning task that required adaptation to changes in action controllability. Deep RL agents could solve the task only when designed to explicitly predict action prediction errors, signals of the kind carried by the medial prefrontal cortex. When trained this way, they displayed signatures of metacontrol that closely resembled those observed in humans. Moreover, when deep RL agents were trained to over- or underestimate controllability, they developed behavioral pathologies partially matching those of humans who reported depressive, anxious, or compulsive traits on transdiagnostic questionnaires. These findings open up avenues for studying metacontrol using deep neural networks.
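The core idea, inferring controllability from action prediction errors, can be sketched in a few lines. The function name, the delta-rule update, and the parameter values below are illustrative assumptions, not the paper's actual agent architecture: on each trial the agent predicts that its chosen action determines the outcome, and a running controllability estimate is nudged toward 1 when the prediction holds and toward 0 when it is violated.

```python
import random

def estimate_controllability(trials, p_control, lr=0.1, seed=0):
    """Integrate action prediction errors into a controllability estimate.

    With probability p_control the outcome is determined by the agent's
    action; otherwise the environment ignores the action. The action
    prediction error (0 = outcome as predicted, 1 = surprise) drives a
    simple delta-rule update of the controllability estimate.
    """
    rng = random.Random(seed)
    controllability = 0.5  # agnostic prior
    for _ in range(trials):
        action = rng.choice([0, 1])
        if rng.random() < p_control:
            outcome = action              # action determines the outcome
        else:
            outcome = rng.choice([0, 1])  # outcome ignores the action
        ape = 0.0 if outcome == action else 1.0  # action prediction error
        controllability += lr * ((1.0 - ape) - controllability)
    return controllability
```

Note that in a fully uncontrollable binary environment the outcome still matches the action half the time, so the estimate settles near 0.5 rather than 0; a fully controllable environment drives it toward 1. An agent could use such an estimate to decide whether costly control is worth deploying, which is the metacontrol question the abstract describes.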

Original publication

DOI

10.1073/pnas.2510334123

Type

Journal article

Publication Date

2026-03-03

Volume

123

Keywords

cognitive control, error monitoring, metacontrol, psychopathology, reinforcement learning, Humans, Deep Learning, Neural Networks (Computer), Prefrontal Cortex, Reinforcement (Psychology), Reward, Computer Simulation, Male