Brain & Cognition Lab (Nobre Lab)
I am a DPhil student supervised by Kia Nobre, and I am funded by the Medical Sciences Graduate School Studentship (split between the Clarendon Fund Scholarship and the Somerville College Mary Somerville Graduate Scholarship).
My general research interests centre on theories of perception and attention. More specifically, my research explores how theories of predictive coding extend to more complex tasks and stimuli. I use EEG, eye-tracking, and behavioural methods to answer these questions.
Currently, I am working to understand how predictions are modulated by perceptual difficulty, as well as how different task demands shape predictions.
This research is important because it complements existing theories of perception and attention, and it will contribute to our understanding of how humans behave in a rich and complex world.
I graduated from the University of Delaware, where I worked in James Hoffman's Visual Cognition Group. Following my bachelor's studies, I worked as a research assistant in Jeremy Wolfe's Visual Attention Lab at Harvard Medical School and Brigham and Women's Hospital, before completing my Master's in Interdisciplinary Neuroscience at Goethe University Frankfurt, where I worked with Melissa Võ.
I am involved in the Graduate Joint Consultative Committee, and the Experimental Psychology Peer Mentoring Program.
Output planning at the input stage in visual working memory.
Boettcher SEP et al. (2021), Sci Adv, 7
Functional biases in attentional templates from associative memory.
Boettcher SEP et al. (2020), J Vis, 20
One Thing Leads to Another: Anticipating Visual Object Identity Based on Associative-Memory Templates.
Boettcher SEP et al. (2020), J Neurosci, 40, 4010-4020
Reading scenes: how scene grammar guides attention and aids perception in real-world environments.
Võ ML-H et al. (2019), Curr Opin Psychol, 29, 205-210
Lost in the supermarket: Quantifying the cost of partitioning memory sets in hybrid search.
Boettcher SEP et al. (2018), Mem Cognit, 46, 43-57