Estimating power in (generalized) linear mixed models: An open introduction and tutorial in R.
Mixed-effects models are a powerful tool for modeling fixed and random effects simultaneously, but they do not offer a feasible analytic solution for estimating the probability that a test correctly rejects the null hypothesis. Being able to estimate this probability, however, is critical for sample size planning, as power is closely linked to the reliability and replicability of empirical findings. Simulation-based power analyses are a flexible and intuitive alternative to analytic power solutions. Although various tools for conducting simulation-based power analyses for mixed-effects models are available, there is a lack of guidance on how to use them appropriately. In this tutorial, we discuss how to estimate power for mixed-effects models in different use cases: first, how to use models that were fit on available (e.g., published) data to determine sample size; second, how to determine the number of stimuli required for sufficient power; and finally, how to conduct sample size planning without available data. Our examples cover both linear and generalized linear models, and we provide code and resources for performing simulation-based power analyses on openly accessible data sets. The present work therefore helps researchers to navigate sound research design when using mixed-effects models, by summarizing resources, collating available knowledge, providing solutions and tools, and applying them to real-world problems of sample size planning when mixed-effects models are the planned inferential procedure.
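The simulation logic the abstract describes is compact: repeatedly generate data under assumed parameter values, fit the mixed model, and record how often the effect of interest reaches significance. The tutorial's own code is in R; the following minimal Python sketch (using statsmodels, with effect size, variance components, and sample sizes that are illustrative assumptions, not values from the paper) conveys the same idea.

```python
# Minimal sketch of a simulation-based power analysis for a linear mixed
# model with by-subject random intercepts. All parameter values are
# illustrative assumptions, not estimates from the paper.
import warnings
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

warnings.filterwarnings("ignore")  # silence occasional convergence warnings

def estimate_power(n_subjects=30, n_trials=40, beta=0.25,
                   sd_subject=0.5, sd_resid=1.0,
                   n_sims=200, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    significant = 0
    for _ in range(n_sims):
        subj = np.repeat(np.arange(n_subjects), n_trials)
        x = rng.choice([-0.5, 0.5], size=subj.size)      # centered condition code
        u = rng.normal(0, sd_subject, n_subjects)[subj]  # random intercepts
        y = beta * x + u + rng.normal(0, sd_resid, subj.size)
        data = pd.DataFrame({"y": y, "x": x, "subj": subj})
        fit = smf.mixedlm("y ~ x", data, groups=data["subj"]).fit()
        significant += fit.pvalues["x"] < alpha
    return significant / n_sims  # proportion of significant fits = power

print(estimate_power(n_subjects=30))
```

Scanning a grid of `n_subjects` (or `n_trials`) values with this function turns the power estimate into a sample-size plan: pick the smallest design that reaches the target power (e.g., .80).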
No evidence from MVPA for different processes underlying the N300 and N400 incongruity effects in object-scene processing.
Attributing meaning to diverse visual input is a core feature of human cognition. Violating environmental expectations (e.g., a toothbrush in the fridge) induces a late negativity in the event-related potential (ERP). This N400 component has been linked not only to the semantic processing of language but also to objects and scenes. Inconsistent object-scene relationships are additionally associated with an earlier negative deflection of the EEG signal between 250 and 350 ms. This N300 is hypothesized to reflect pre-semantic perceptual processes. To investigate whether these two components are truly separable, or whether the early object-scene integration activity (250-350 ms) shares levels of processing with the late neural correlates of meaning processing (350-500 ms), we used time-resolved multivariate pattern analysis (MVPA), in which a classifier trained at one time point in a trial (e.g., during the N300 time window) is tested at every other time point (including the N400 time window). Forty participants were presented with semantic inconsistencies in which an object was inconsistent with a scene's meaning. Replicating previous findings, our manipulation produced significant N300 and N400 deflections. MVPA revealed above-chance decoding performance for classifiers trained during time points of the N300 component and tested during later time points of the N400, and vice versa. This provides no evidence for the activation of two separable neurocognitive processes following the violation of context-dependent predictions in visual scene perception. Our data support the early appearance of high-level, context-sensitive processes in visual cognition.
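The temporal-generalization logic described here can be sketched compactly: train a classifier at each time point, evaluate it at all time points, and inspect the resulting time-by-time accuracy matrix, whose off-diagonal cells index patterns shared across windows. The sketch below uses synthetic data and scikit-learn's logistic regression purely for illustration; the data shapes and classifier choice are assumptions, not the paper's pipeline (toolboxes such as MNE-Python provide this via `GeneralizingEstimator`).

```python
# Illustrative temporal-generalization (time x time) decoding sketch
# on synthetic EEG-like data: trials x channels x time points.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 32, 50))  # 80 trials, 32 channels, 50 time points
y = rng.integers(0, 2, size=80)    # consistent vs. inconsistent labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
n_times = X.shape[2]
gen = np.zeros((n_times, n_times))  # gen[t_train, t_test] = accuracy

for t_train in range(n_times):
    clf = LogisticRegression(max_iter=1000).fit(X_tr[:, :, t_train], y_tr)
    for t_test in range(n_times):
        gen[t_train, t_test] = clf.score(X_te[:, :, t_test], y_te)

# Above-chance accuracy off the diagonal (e.g., trained in the N300
# window, tested in the N400 window) indicates shared activity patterns.
```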
Auxiliary Scene-Context Information Provided by Anchor Objects Guides Attention and Locomotion in Natural Search Behavior.
Successful adaptive behavior requires efficient attentional and locomotive systems. Previous research has thoroughly investigated how we achieve this efficiency during natural behavior by exploiting prior knowledge related to targets of our actions (e.g., attending to metallic targets when looking for a pot) and to the environmental context (e.g., looking for the pot in the kitchen). Less is known about whether and how individual nontarget components of the environment support natural behavior. In our immersive virtual reality task, 24 adult participants searched for objects in naturalistic scenes in which we manipulated the presence and arrangement of large, static objects that anchor predictions about targets (e.g., the sink provides a prediction for the location of the soap). Our results show that gaze and body movements in this naturalistic setting are strongly guided by these anchors. These findings demonstrate that objects auxiliary to the target are incorporated into the representations guiding attention and locomotion.
Gist in time: Scene semantics and structure enhance recall of searched objects.
Previous work has shown that recall of objects that are incidentally encountered as targets in visual search is better than recall of objects that have been intentionally memorized (Draschkow, Wolfe, & Võ, 2014). However, this counter-intuitive result is not seen when these tasks are performed with non-scene stimuli. The goal of the current paper is to determine what features of search in a scene contribute to higher recall rates when compared to a memorization task. In each of four experiments, we compare the free recall rate for target objects following a search to the rate following a memorization task. Across the experiments, the stimuli include progressively more scene-related information. Experiment 1 provides the spatial relations between objects. Experiment 2 adds relative size and depth of objects. Experiments 3 and 4 include scene layout and semantic information. We find that search leads to better recall than explicit memorization in cases where scene layout and semantic information are present, as long as the participant has ample time (2500 ms) to integrate this information with knowledge about the target object (Exp. 4). These results suggest that the integration of scene and target information not only leads to more efficient search, but can also contribute to stronger memory representations than intentional memorization.
Of "what" and "where" in a natural search task: Active object handling supports object location memory beyond the object's identity.
Looking for as well as actively manipulating objects that are relevant to ongoing behavioral goals are intricate parts of natural behavior. It is, however, not clear to what degree these two forms of interaction with our visual environment differ with regard to their memory representations. In a real-world paradigm, we investigated if physically engaging with objects as part of a search task influences identity and position memory differently for task-relevant versus irrelevant objects. Participants equipped with a mobile eye tracker either searched for cued objects without object interaction (Find condition) or actively collected the objects they found (Handle condition). In the following free-recall task, identity memory was assessed, demonstrating superior memory for relevant compared to irrelevant objects, but no difference between the Handle and Find conditions. Subsequently, location memory was inferred via times to first fixation in a final object search task. Active object manipulation and task-relevance interacted in that location memory for relevant objects was superior to that for irrelevant ones only in the Handle condition. Including previous object recall performance as a covariate in the linear mixed-model analysis of times to first fixation allowed us to explore the interaction between remembered/forgotten object identities and the execution of location memory. Identity memory performance predicted location memory in the Find but not the Handle condition, suggesting that active object handling leads to strong spatial representations independent of object identity memory. We argue that object handling facilitates the prioritization of relevant location information, but this might come at the cost of deprioritizing irrelevant information.
Cluster-based permutation tests of MEG/EEG data do not establish significance of effect latency or location.
Cluster-based permutation tests are gaining almost universal acceptance as inferential procedures in cognitive neuroscience. They elegantly handle the multiple comparisons problem in high-dimensional magnetoencephalography (MEG) and EEG data. Unfortunately, the power of this procedure comes hand in hand with the allure of unwarranted interpretations of the inferential output, the most prominent of which is the overestimation of the temporal, spatial, and frequency precision of statistical claims. This leads researchers to statements about the onset or offset of a certain effect that are not supported by the permutation test. In this article, we outline problems and common pitfalls of using and interpreting cluster-based permutation tests. We illustrate these with simulated data in order to promote a more intuitive understanding of the method. We hope that raising awareness about these issues will be beneficial to common scientific practices, while at the same time increasing the popularity of cluster-based permutation procedures.
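To make the interpretive point concrete, here is a minimal sketch of a one-dimensional (time-course) cluster-based permutation test using sign-flipping and a cluster-mass statistic; the cluster-forming threshold and permutation scheme are common choices for illustration, not necessarily those of any specific toolbox. Note that the resulting p-value licenses rejecting the global null hypothesis of no effect anywhere in the tested window; it does not certify the onset, offset, or extent of the significant cluster, which is exactly the pitfall the article describes.

```python
# Sketch of a one-sample cluster-based permutation test over time.
# data: subjects x time array of condition differences (synthetic here).
import numpy as np
from scipy import stats

def max_cluster_mass(d, t_crit):
    n = d.shape[0]
    tvals = d.mean(axis=0) / (d.std(axis=0, ddof=1) / np.sqrt(n))
    mass, best = 0.0, 0.0
    for t in tvals:
        mass = mass + abs(t) if abs(t) > t_crit else 0.0  # grow or reset cluster
        best = max(best, mass)
    return best

def cluster_perm_test(data, n_perm=1000, cluster_alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    t_crit = stats.t.ppf(1 - cluster_alpha / 2, df=n - 1)  # cluster-forming threshold
    observed = max_cluster_mass(data, t_crit)
    null = np.array([
        max_cluster_mass(data * rng.choice([-1.0, 1.0], size=(n, 1)), t_crit)
        for _ in range(n_perm)
    ])
    p = (np.sum(null >= observed) + 1) / (n_perm + 1)
    return observed, p  # p concerns the whole window, not any latency range

rng = np.random.default_rng(1)
data = rng.normal(size=(20, 100))
data[:, 40:60] += 0.6  # injected effect in one time window
print(cluster_perm_test(data))
```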
Keeping it real: Looking beyond capacity limits in visual cognition.
Research within visual cognition has made tremendous strides in uncovering the basic operating characteristics of the visual system by reducing the complexity of natural vision to artificial but well-controlled experimental tasks and stimuli. This reductionist approach has, for example, been used to assess the basic limitations of visual attention, visual working memory (VWM) capacity, and the fidelity of visual long-term memory (VLTM). The assessment of these limits is usually made in a pure sense, irrespective of goals, actions, and priors. While it is important to map out the bottlenecks our visual system faces, we focus here on selected examples of how such limitations can be overcome. Recent findings suggest that during more natural tasks, capacity may be higher than reductionist research suggests and that separable systems subserve different actions, such as reaching and looking, which might provide important insights about how pure attentional or memory limitations could be circumvented. We also review evidence suggesting that the closer we get to naturalistic behavior, the more we encounter implicit learning mechanisms that operate "for free" and "on the fly." These mechanisms provide a surprisingly rich visual experience, which can support capacity-limited systems. We speculate about whether natural tasks may yield different estimates of the limitations of VWM, VLTM, and attention, and propose that capacity measurements should also pass the real-world test within naturalistic frameworks. Our review highlights various approaches for this and suggests that our understanding of visual cognition will benefit from incorporating the complexities of real-world cognition in experimental approaches.
The lower bounds of massive memory: Investigating memory for object details after incidental encoding.
Visual long-term memory capacity appears massive and detailed when probed explicitly. In the real world, however, memories are usually built from chance encounters. Therefore, we investigated the capacity and detail of incidental memory in a novel encoding task, instructing participants to detect visually distorted objects among intact objects. In a subsequent surprise recognition memory test, lures of a novel category, another exemplar, the same object in a different state, or exactly the same object were presented. Lure recognition performance was above chance, suggesting that incidental encoding resulted in reliable memory formation. Critically, performance for state lures was worse than for exemplar lures, which was driven by the greater similarity of state foils, as opposed to exemplar foils, to the original objects. Our results indicate that incidentally generated visual long-term memory representations of isolated objects are more limited in detail than recently suggested.
Search superiority: Goal-directed attentional allocation creates more reliable incidental identity and location memory than explicit encoding in naturalistic virtual environments.
We use representations and expectations formed during life-long learning to support attentional allocation and perception. In comparison to traditional laboratory investigations, real-world memory formation is usually achieved without explicit instruction and on-the-fly as a by-product of natural interactions with our environment. Understanding this process and the quality of naturally formed representations is critical to understanding how memory is used to guide attention and perception. Utilizing immersive, navigable, and realistic virtual environments, we investigated incidentally generated memory representations by comparing them to memories for items which were explicitly memorized. Participants either searched for objects embedded in realistic indoor environments or explicitly memorized them for follow-up identity and location memory tests. We show for the first time that memory for the identity of naturalistic objects and their location in 3D space is higher after incidental encoding compared to explicit memorization, even though the subsequent memory tests came as a surprise to participants. Relating gaze behavior to memory performance revealed that encoding time was more predictive of subsequent memory when participants explicitly memorized an item, compared to incidentally encoding it. Our results suggest that the active nature of guiding attentional allocation during proactive behavior allows for behaviorally optimal formation and utilization of representations. This highlights the importance of investigating cognition under ecologically valid conditions and shows that understanding the most natural processes for encoding and maintaining information is critical for understanding adaptive behavior.
Moving foraging into three dimensions: Feature- versus conjunction-based foraging in virtual reality.
Visual attention evolved in a three-dimensional (3D) world, yet studies on human attention in three dimensions are sparse. Here we present findings from a human foraging study in immersive 3D virtual reality. We used a foraging task introduced in Kristjánsson et al. to examine how well their findings generalise to more naturalistic settings. The second goal was to examine what effect the motion of targets and distractors has on inter-target times (ITTs), run patterns, and foraging organisation. Observers foraged for 50 targets among 50 distractors in four different conditions. Targets were distinguished from distractors by either a single feature (feature foraging) or a conjunction of features (conjunction foraging). Furthermore, those conditions were performed both with static and moving targets and distractors. Our results replicate previous foraging studies in many aspects, with constant ITTs during a "cruise-phase" within foraging trials and response time peaks at the end of foraging trials. Some key differences emerged, however, such as more frequent switches between target types during conjunction foraging than previously seen and a lack of clear mid-peaks during conjunction foraging, possibly reflecting that differences between feature and conjunction processing are smaller within 3D environments. Observers initiated their foraging in the bottom part of the visual field and motion did not have much of an effect on selection times between different targets (ITTs) or run behaviour patterns except for the end-peaks. Our results cast new light upon visual attention in 3D environments and highlight how 3D virtual reality studies can provide important extensions to two-dimensional studies of visual attention.
Scene grammar shapes the way we interact with objects, strengthens memories, and speeds search.
Predictions of environmental rules (here referred to as "scene grammar") can come in different forms: seeing a toilet in a living room would violate semantic predictions, while finding a toilet brush next to the toothpaste would violate syntactic predictions. The existence of such predictions has usually been investigated by showing observers images containing such grammatical violations. Conversely, the generative process of creating an environment according to one's scene grammar, and its effects on behavior and memory, has received little attention. In a virtual reality paradigm, we instructed participants to arrange objects either according to their scene grammar or against it. Subsequently, participants' memory for the arrangements was probed using a surprise recall (Exp. 1) or repeated search (Exp. 2) task. Participants' construction behavior showed strategic use of larger, static objects to anchor the locations of smaller objects, which are generally the goals of everyday actions. Further analysis of this scene construction data revealed possible commonalities between the rules governing word usage in language and object usage in naturalistic environments. Taken together, we revealed some of the building blocks of scene grammar necessary for efficient behavior, which differentially influence how we interact with objects and what we remember about scenes.
Building, Hosting and Recruiting: A Brief Introduction to Running Behavioral Experiments Online.
Researchers have ample reasons to take their experimental studies out of the lab and into the online wilderness. For some, it is out of necessity, due to an unforeseen laboratory closure or difficulties in recruiting on-site participants. Others want to benefit from the large and diverse online population. However, the transition from in-lab to online data acquisition is not trivial and might seem overwhelming at first. To facilitate this transition, we present an overview of actively maintained solutions for the critical components of successful online data acquisition: creating, hosting and recruiting. Our aim is to provide a brief introductory resource and discuss important considerations for researchers who are taking their first steps towards online experimentation.
Seek and you shall remember: scene semantics interact with visual search to build better memories.
Memorizing critical objects and their locations is an essential part of everyday life. In the present study, incidental encoding of objects in naturalistic scenes during search was compared to explicit memorization of those scenes. To investigate if prior knowledge of scene structure influences these two types of encoding differently, we used meaningless arrays of objects as well as objects in real-world, semantically meaningful images. Surprisingly, when participants were asked to recall scenes, their memory performance was markedly better for searched objects than for objects they had explicitly tried to memorize, even though participants in the search condition were not explicitly asked to memorize objects. This finding held true even when objects were observed for an equal amount of time in both conditions. Critically, the recall benefit for searched over memorized objects in scenes was eliminated when objects were presented on uniform, non-scene backgrounds rather than in a full scene context. Thus, scene semantics not only help us search for objects in naturalistic scenes, but appear to produce a representation that supports our memory for those objects beyond intentional memorization.
Get Your Guidance Going: Investigating the Activation of Spatial Priors for Efficient Search in Virtual Reality.
Repeated search studies are a hallmark of the investigation of the interplay between memory and attention. Because results are usually averaged across searches, the substantial decrease in response times between the first and second search through the same search environment is rarely discussed. This search initiation effect is often the most dramatic decrease in search times in a series of sequential searches, yet the nature of this initial lack of search efficiency has thus far remained unexplored. We tested the hypothesis that the activation of spatial priors produces this search efficiency profile. Before searching repeatedly through scenes in VR, participants either (1) previewed the scene, (2) saw an interrupted preview, or (3) started searching immediately. The search initiation effect was present in the latter condition but in neither of the preview conditions. Eye movement metrics revealed that the locus of this effect lies in search guidance rather than in search initiation or decision time, and that it goes beyond effects of object learning or incidental memory. Our study suggests that upon visual processing of an environment, a process of activating spatial priors to enable orientation is initiated; this takes a toll on search time at first, but once activated, the priors can be used to guide subsequent searches.
Anchoring visual search in scenes: Assessing the role of anchor objects on eye movements during visual search.
The arrangement of the contents of real-world scenes follows certain spatial rules that allow for extremely efficient visual exploration. What remains underexplored is the role different types of objects play in a scene. In the current work, we seek to unveil an important building block of scenes: anchor objects. Anchors hold specific spatial predictions regarding the likely position of other objects in an environment. In a series of three eye tracking experiments we tested what role anchor objects occupy during visual search. In all of the experiments, participants searched through scenes for an object that was cued at the beginning of each trial. Critically, in half of the scenes a target-relevant anchor was swapped for an irrelevant, albeit semantically consistent, object. We found that relevant anchor objects can guide visual search, leading to faster reaction times, less scene coverage, and less time between fixating the anchor and the target. The choice of anchor objects was confirmed through an independent large image database, which allowed us to identify key attributes of anchors. Anchor objects seem to play a unique role in the spatial layout of scenes and need to be considered for understanding the efficiency of visual search in realistic stimuli.
The role of scene summary statistics in object recognition.
Objects that are semantically related to the visual scene context are typically better recognized than unrelated objects. While context effects on object recognition are well studied, the question of which particular visual information in an object's surroundings modulates its semantic processing is still unresolved. Typically, one would expect contextual influences to arise from high-level, semantic components of a scene, but what if even low-level features could modulate object processing? Here, we generated seemingly meaningless textures from real-world scenes, which preserved similar summary statistics but discarded spatial layout information. In Experiment 1, participants categorized such textures better than colour controls that lacked higher-order scene statistics, while original scenes resulted in the highest performance. In Experiment 2, participants recognized briefly presented consistent objects on scenes significantly better than inconsistent objects, whereas on textures, consistent objects were recognized only slightly more accurately. In Experiment 3, we recorded event-related potentials and observed a pronounced mid-central negativity in the N300/N400 time windows for inconsistent relative to consistent objects on scenes. Critically, inconsistent objects on textures also triggered N300/N400 effects with a comparable time course, though less pronounced. Our results suggest that a scene's low-level features contribute to the effective processing of objects in complex real-world environments.
When Natural Behavior Engages Working Memory.
Working memory (WM) enables temporary storage and manipulation of information, supporting tasks that require bridging between perception and subsequent behavior. Its properties, such as its capacity, have been thoroughly investigated in highly controlled laboratory tasks. Much less is known about the utilization and properties of WM in natural behavior, when reliance on WM emerges as a natural consequence of interactions with the environment. We measured the trade-off between reliance on WM and gathering information externally during immersive behavior in an adapted object-copying task. By manipulating the locomotive demands required for task completion, we could investigate whether and how WM utilization changed as gathering information from the environment became more effortful. Reliance on WM was lower than WM capacity measures in typical laboratory tasks. A clear trade-off also occurred: as sampling information from the environment required increasing locomotion and time investment, participants relied more on their WM representations. This reliance on WM increased in a shallow, linear fashion and was associated with longer encoding durations. Participants' avoidance of WM usage showcases a fundamental dependence on external information during ecological behavior, even if the potentially storable information is well within the capacity of the cognitive system. These foundational findings highlight the importance of using immersive tasks to understand how cognitive processes unfold within natural behavior. Our novel VR approach effectively combines the ecological validity, experimental rigor, and sensitive measures required to investigate the interplay between memory and perception in immersive behavior.