Neuroscience Seminar: Discovering and exploiting structure in the face of changing tasks
Chris Lucas (University of Edinburgh)
Tuesday, 04 April 2017, 1pm to 2pm
How do human beings learn and make decisions in the face of sparse, noisy, and ambiguous evidence? One approach to answering this question is to consider it at Marr's "computational level", and compare human behavior to that of rational or ideal agents with a specific set of a priori assumptions or inductive biases. Some of these biases might be innate, and others might be distilled from past experience, e.g., as captured by hierarchical Bayesian models.

While research taking this approach has seen many successes, it tends to assume that the challenges a learner faces, and their relationships to the learner's previous experiences, have a known structure that is stable over time. I will argue that this assumption leads us to neglect the remarkable human talent for switching between problem contexts and transferring knowledge between problems that are analogous or related, but not identical.

I will describe some experimental results and computational models from my lab, which examine the human ability to discover abstract structure and use discovered structure to improve performance in subsequent problems, even when the underlying tasks are related to one another in unknown ways, or when the structure is subject to change across repeated tasks. The specific experiments include (1) a reinforcement learning task in which participants must simultaneously learn about the underlying reward pattern and maximize their immediate rewards, and (2) a function estimation task in which participants must learn the structural commonalities across different functions and use those commonalities to make sense of extremely sparse data. Time permitting, I will also discuss connections to bounded rationality, heuristics, and process-level models.
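To make the function-estimation idea concrete, here is a minimal sketch, not the speaker's actual model, of how a learner might pool evidence across several related functions to infer a shared structural family and then bring that structure to bear on sparse data. It assumes a Gaussian-process framing with a few illustrative kernel families (linear, RBF, periodic); all function names, kernel choices, and parameters below are hypothetical and chosen only for the example.

```python
# Hypothetical sketch: discovering shared structure across related functions.
# Assumption (not from the talk): each function is drawn from a zero-mean GP
# whose kernel belongs to one of a few structural families. The learner scores
# each family by its marginal likelihood on previously seen functions, then
# reuses the winning family when interpreting a new, sparsely observed function.

import numpy as np

def kernel(name, x1, x2):
    """Covariance between 1-D input arrays x1, x2 under a named structural family."""
    d = x1[:, None] - x2[None, :]
    if name == "linear":
        return 1.0 + x1[:, None] * x2[None, :]
    if name == "rbf":
        return np.exp(-0.5 * d ** 2)
    if name == "periodic":
        return np.exp(-2.0 * np.sin(np.pi * d / 2.0) ** 2)
    raise ValueError(name)

def log_marginal_likelihood(name, x, y, noise=0.1):
    """log p(y | x, structure) for a zero-mean GP with observation noise."""
    K = kernel(name, x, x) + noise ** 2 * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * len(x) * np.log(2 * np.pi))

def posterior_over_structures(datasets, structures=("linear", "rbf", "periodic")):
    """Combine evidence from several related functions into a posterior over families."""
    log_post = np.zeros(len(structures))  # uniform prior over structural families
    for x, y in datasets:
        log_post += [log_marginal_likelihood(s, x, y) for s in structures]
    log_post -= log_post.max()
    post = np.exp(log_post)
    return dict(zip(structures, post / post.sum()))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Three previously seen functions, all roughly periodic, observed with noise.
    datasets = []
    for phase in (0.0, 0.5, 1.0):
        x = np.sort(rng.uniform(-3, 3, 8))
        y = np.sin(np.pi * x + phase) + 0.1 * rng.standard_normal(8)
        datasets.append((x, y))
    print(posterior_over_structures(datasets))
    # Once the shared family is identified, even a couple of points from a new
    # function constrain predictions far more than under a structure-agnostic prior.
```

The same evidence-pooling move could in principle be applied to the reinforcement-learning task, with the posterior over structures guiding exploration rather than function interpolation; that extension is left out here to keep the sketch short.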
Host: Chris Summerfield