Abstract:

I'll talk about two loosely related ideas. First, I'll describe a simple way of learning a smart (prior-dependent) reinforcement learning algorithm using recurrent networks, which we call meta-RL. Second, I'll talk about experimental work in MEG in which we found spontaneous reactivation of sequences of states in a non-spatial task. These ideas are related insofar as meta-RL depends on incremental learning across a set of different tasks and needs experience to be presented in a randomized, interleaved order, which spontaneous reactivation could provide. More broadly, there's a lot to learn from the relationships between all your past experiences.
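
To make the meta-RL idea concrete, here is a minimal sketch of the kind of setup the abstract gestures at: a recurrent policy that receives its previous action and reward as input and is trained across a distribution of tasks, so that its recurrent dynamics come to implement a fast, prior-dependent learning rule within each episode. This is an illustrative assumption on my part, not the speaker's actual implementation; the bandit task, PyTorch, REINFORCE objective, and all names (e.g. MetaRLAgent, N_ARMS) are hypothetical choices for the sketch.

```python
# Hypothetical sketch of meta-RL: an RNN agent whose inputs include the previous
# action and reward, trained with REINFORCE across a distribution of two-armed
# bandit tasks. Not the speaker's code; details are illustrative assumptions.
import torch
import torch.nn as nn

N_ARMS, HIDDEN, TRIALS = 2, 32, 50

class MetaRLAgent(nn.Module):
    def __init__(self):
        super().__init__()
        # Input to the RNN: one-hot previous action plus previous reward.
        self.rnn = nn.GRUCell(N_ARMS + 1, HIDDEN)
        self.policy = nn.Linear(HIDDEN, N_ARMS)

    def forward(self, x, h):
        h = self.rnn(x, h)
        return torch.distributions.Categorical(logits=self.policy(h)), h

agent = MetaRLAgent()
opt = torch.optim.Adam(agent.parameters(), lr=1e-3)

for episode in range(2000):
    # Sample a new task from the task distribution: unknown reward probabilities.
    probs = torch.rand(N_ARMS)
    h = torch.zeros(1, HIDDEN)
    x = torch.zeros(1, N_ARMS + 1)          # no previous action/reward on trial 0
    log_probs, rewards = [], []
    for t in range(TRIALS):
        dist, h = agent(x, h)
        a = dist.sample()
        r = torch.bernoulli(probs[a])
        log_probs.append(dist.log_prob(a))
        rewards.append(r)
        # Feed back the chosen action and obtained reward as the next input.
        x = torch.cat([nn.functional.one_hot(a, N_ARMS).float(), r.view(1, 1)], dim=1)
    # REINFORCE with reward-to-go as the return (no baseline, for brevity).
    returns = torch.flip(torch.cumsum(torch.flip(torch.stack(rewards), [0]), 0), [0])
    loss = -(torch.stack(log_probs).squeeze() * returns.squeeze()).sum()
    opt.zero_grad(); loss.backward(); opt.step()
```

After training on many such episodes, the trained weights are held fixed and the hidden state alone adapts to each new task within an episode, which is what makes the learned algorithm "prior-dependent": it exploits the statistics of the task distribution it was trained on.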