
Host: TBA


Abstract:

Computational models lie at the heart of science, and there is little reason to think that cognitive science should be an exception. Indeed, computational models have found much use in cognitive science, but they are still more rarely found in the cognitive scientist's toolbox than, e.g., neuroimaging methods, which are far more complex and costly. Have computational cognitive models yet to fulfill their potential? In this talk, I will discuss why that might be, present some recently developed models, and emphasize the methods we employed in testing them.

The Theory of Visual Attention (TVA) has mainly been applied to whole and partial report experiments. One common assumption when applying TVA is that the rate of visual processing is constant or, in other words, that the psychometric function is exponential. This assumption is mathematically convenient but difficult to test; a brute-force method using more than 100,000 trials has given an unequivocal answer that could influence how TVA is used.

The Early Maximum-Likelihood Estimation (MLE) model was developed to model audiovisual categorical perception. It has been applied to the sound-induced flash illusion and to audiovisual speech perception. Like other MLE models, it is based on an assumption of optimal integration, but it is special in that it assumes that integration occurs prior to categorization. Using cross-validation and regularization techniques, it can be shown to provide meaningful predictions of the McGurk illusion.
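The constant-rate assumption mentioned above can be sketched as an exponential psychometric function: for an exposure duration t beyond a perceptual threshold t0, the probability of encoding a stimulus grows as 1 - exp(-v(t - t0)), where v is the processing rate. The function name and the parameter values below are illustrative, not taken from the talk:

```python
import math

def exponential_psychometric(t, v, t0):
    """Probability of encoding a stimulus shown for t seconds, under the
    assumption of a constant processing rate v (items/s) and a perceptual
    threshold t0 (s). Below threshold, nothing is encoded."""
    if t <= t0:
        return 0.0
    return 1.0 - math.exp(-v * (t - t0))

# Illustrative values: rate of 30 items/s, threshold of 20 ms.
p_short = exponential_psychometric(0.04, 30.0, 0.02)  # brief exposure
p_long = exponential_psychometric(0.20, 30.0, 0.02)   # longer exposure
```

Testing whether performance really follows this one-parameter growth curve, rather than some other monotonic shape, is what requires the very large trial counts mentioned above.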