Search results
A new method for mapping spatial resolution in compound eyes suggests two visual streaks in fiddler crabs.
Visual systems play a vital role in guiding the behaviour of animals. Understanding the visual information animals are able to acquire is therefore key to understanding their visually mediated decision making. Compound eyes, the dominant eye type in arthropods, are inherently low-resolution structures. Their ability to resolve spatial detail depends on sampling resolution (interommatidial angle) and the quality of ommatidial optics. Current techniques for estimating interommatidial angles are difficult, and generally require in vivo measurements. Here, we present a new method for estimating interommatidial angles based on the detailed analysis of 3D micro-computed tomography images of fixed samples. Using custom-made MATLAB software, we determined the optical axes of individual ommatidia and projected these axes into the 3D space around the animal. The combined viewing directions of all ommatidia, estimated from geometrical optics, allowed us to estimate interommatidial angles and map the animal's sampling resolution across its entire visual field. The resulting topographic representations of visual acuity match very closely the previously published data obtained from both fiddler and grapsid crabs. However, the new method provides additional detail that was not previously detectable and reveals that fiddler crabs, rather than having a single horizontal visual streak as is common in flat-world inhabitants, probably have two parallel streaks located just above and below the visual horizon. A key advantage of our approach is that it can be used on appropriately preserved specimens, allowing the technique to be applied to animals such as deep-sea crustaceans that are inaccessible or unsuitable for in vivo approaches.
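The geometric core of the method, the interommatidial angle between two neighbouring viewing directions, reduces to the angle between two optical-axis vectors. The sketch below is an illustration of that computation only, not the authors' MATLAB software; the vectors and function name are made up for the example:

```python
import math

def interommatidial_angle(axis_a, axis_b):
    """Angle in degrees between the optical axes of two neighbouring
    ommatidia, each given as a 3-D direction vector."""
    # Normalise both vectors, then take the arc-cosine of their dot product.
    na = math.sqrt(sum(c * c for c in axis_a))
    nb = math.sqrt(sum(c * c for c in axis_b))
    dot = sum(p * q for p, q in zip(axis_a, axis_b)) / (na * nb)
    dot = max(-1.0, min(1.0, dot))  # clamp against floating-point overshoot
    return math.degrees(math.acos(dot))

# Two hypothetical axes tilted 1 degree apart about the z-axis:
a = (1.0, 0.0, 0.0)
b = (math.cos(math.radians(1.0)), math.sin(math.radians(1.0)), 0.0)
print(round(interommatidial_angle(a, b), 3))  # → 1.0
```

Mapping sampling resolution across the visual field then amounts to evaluating this angle for every pair of adjacent ommatidial axes extracted from the tomography data.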
Properties of neuronal facilitation that improve target tracking in natural pursuit simulations
Although flying insects have limited visual acuity (approx. 1°) and relatively small brains, many species pursue tiny targets against cluttered backgrounds with high success. Our previous computational model, inspired by electrophysiological recordings from insect 'small target motion detector' (STMD) neurons, did not account for several key properties described from the biological system. These include the recent observations of response 'facilitation' (a slow build-up of response to targets that move on long, continuous trajectories) and 'selective attention', a competitive mechanism that selects one target from alternatives. Here, we present an elaborated STMD-inspired model, implemented in a closed loop target-tracking system that uses an active saccadic gaze fixation strategy inspired by insect pursuit. We test this system against heavily cluttered natural scenes. Inclusion of facilitation not only substantially improves success for even short-duration pursuits, but it also enhances the ability to 'attend' to one target in the presence of distracters. Our model predicts optimal facilitation parameters that are static in space and dynamic in time, changing with respect to the amount of background clutter and the intended purpose of the pursuit. Our results provide insights into insect neurophysiology and show the potential of this algorithm for implementation in artificial visual systems and robotic applications.
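The facilitation idea, a slowly building response boost for targets that move on long continuous trajectories, can be illustrated with a toy leaky-integrator trace. Everything below (the 1-D layout, the `gain` and `decay` values, the function name) is an assumption made for illustration, not the published STMD model:

```python
def facilitated_responses(raw_responses, gain=0.5, decay=0.7):
    """Toy 1-D facilitation: a leaky trace of past responses at each
    position multiplicatively boosts the current response, so a target
    on a long continuous trajectory out-competes transient clutter.
    `gain` and `decay` are illustrative, not fitted parameters."""
    trace = [0.0] * len(raw_responses[0])
    out = []
    for frame in raw_responses:
        # Boost each location by its accumulated history, then update the trace.
        boosted = [r * (1.0 + gain * t) for r, t in zip(frame, trace)]
        trace = [decay * t + r for t, r in zip(trace, frame)]
        out.append(boosted)
    return out

# A target persisting at index 0 is boosted over frames: 1.0 → 1.5 → 1.85
out = facilitated_responses([[1.0, 0.0]] * 3)
```

A distractor that appears for a single frame never accumulates a trace, which is the competitive advantage the abstract describes.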
Fiddler crabs are unique in timing their escape responses based on speed-dependent visual cues.
Predation risk imposes strong selection pressures on visual systems to quickly and accurately identify the position and movement of potential predators [1,2]. Many invertebrates and other small animals, however, have limited capacity for distance perception due to their low spatial resolution and closely situated eyes [3,4]. Consequently, they often rely on simplified decision criteria, essentially heuristics or "rules of thumb", to make decisions. The visual cues animals use to make escape decisions are surprisingly consistent, especially among arthropods, with the timing of escape commonly triggered by size-dependent visual cues such as angular size or angular size increment [5,6,7,8,9,10]. Angular size, however, confuses predator size and distance and provides no information about the speed of the attack. Here, we show that fiddler crabs (Gelasimus dampieri) are unique among the arthropods studied to date as they timed their escape response based on the speed of an object's angular expansion. The crabs responded reliably by running away from visual stimuli that expanded at approximately 1.7 degrees/s, irrespective of stimulus size, speed, or its initial distance from the crabs. Though the threshold expansion speed was consistent across different stimulus conditions, we found that the escape timing was modulated by the elevation at which the stimulus approached, suggesting that other risk factors can bias the expansion speed threshold. The results suggest that the visual escape cues used by arthropods are less conserved than previously thought and that lifestyle and environment are significant drivers determining the escape cues used by different species.
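The reported cue, escape once an object's angular expansion speed reaches roughly 1.7 degrees/s regardless of its size, speed, or starting distance, can be sketched as a simple looming monitor. The geometry and all parameter values below are illustrative; only the ~1.7 degrees/s threshold comes from the abstract:

```python
import math

def angular_size(diameter, distance):
    """Angular size in degrees of an object of `diameter` at `distance`."""
    return math.degrees(2.0 * math.atan(diameter / (2.0 * distance)))

def escape_time(diameter, start_distance, approach_speed,
                dt=0.01, threshold=1.7):
    """Time at which the angular expansion speed of an object approaching
    at constant speed first reaches `threshold` (degrees/s)."""
    t = 0.0
    prev = angular_size(diameter, start_distance)
    while True:
        t += dt
        d = start_distance - approach_speed * t
        if d <= 0.0:
            return None  # object arrived before the cue was triggered
        cur = angular_size(diameter, d)
        # Finite-difference estimate of the expansion speed.
        if (cur - prev) / dt >= threshold:
            return t
        prev = cur
```

Because the trigger is an expansion speed rather than an angular size, a larger object crosses the threshold earlier (at a greater distance), but always at the same cue value, which matches the size-invariance the study reports.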
Photoreceptors and diurnal variation in spectral sensitivity in the fiddler crab Gelasimus dampieri.
Colour signals, and the ability to detect them, are important for many animals and can be vital to their survival and fitness. Fiddler crabs use colour information to detect and recognise conspecifics, but their colour vision capabilities remain unclear. Many studies have attempted to measure their spectral sensitivity and identify contributing retinular cells, but the existing evidence is inconclusive. We used electroretinogram (ERG) measurements and intracellular recordings from retinular cells to estimate the spectral sensitivity of Gelasimus dampieri and to track diurnal changes in spectral sensitivity. G. dampieri has a broad spectral sensitivity and is most sensitive to wavelengths between 420 and 460 nm. Selective adaptation experiments uncovered an ultraviolet (UV) retinular cell with a peak sensitivity shorter than 360 nm. The species' spectral sensitivity above 400 nm is too broad to be fitted by a single visual pigment and using optical modelling, we provide evidence that at least two medium-wavelength sensitive (MWS) visual pigments are contained within a second blue-green sensitive retinular cell. We also found a ∼25 nm diurnal shift in spectral sensitivity towards longer wavelengths in the evening in both ERG and intracellular recordings. Whether the shift is caused by screening pigment migration or changes in opsin expression remains unclear, but the observation shows the diel dynamism of colour vision in this species. Together, these findings support the notion that G. dampieri possesses the minimum requirement for colour vision, with UV and blue/green receptors, and help to explain some of the inconsistent results of previous research.
A biologically inspired facilitation mechanism enhances the detection and pursuit of targets of varying contrast
Many species of flying insects detect and chase prey or conspecifics within a visually cluttered surround, e.g. for predation, territorial or mating behavior. We modeled such detection and pursuit for small moving targets, and tested it within a closed-loop, virtual reality flight arena. Our model is inspired directly by electrophysiological recordings from 'small target motion detector' (STMD) neurons in the insect brain that are likely to underlie this behavioral task. The front-end uses a variant of a biologically inspired 'elementary' small target motion detector (ESTMD), elaborated to detect targets in natural scenes of both contrast polarities (i.e. both dark and light targets). We also include an additional model for the recently identified physiological 'facilitation' mechanism believed to form the basis for selective attention in insect STMDs, and quantify the improvement this provides for pursuit success and target discriminability over a range of target contrasts.
Robustness and real-time performance of an insect-inspired target tracking algorithm under natural conditions
Many computer vision tasks require the implementation of robust and efficient target tracking algorithms. Furthermore, in robotic applications these algorithms must perform whilst on a moving platform (ego motion). Despite the increase in computational processing power, many engineering algorithms are still challenged by real-time applications. In contrast, lightweight and low-power flying insects, such as dragonflies, can readily chase prey and mates within cluttered natural environments, deftly selecting their target amidst distractors (swarms). In our laboratory, we record from 'target-detecting' neurons in the dragonfly brain that underlie this pursuit behavior. We recently developed a closed-loop target detection and tracking algorithm based on key properties of these neurons. Here we test our insect-inspired tracking model in open-loop against a set of naturalistic sequences and compare its efficacy and efficiency with other state-of-the-art engineering models. In terms of tracking robustness, our model performs similarly to many of these trackers, yet is at least 3 times more efficient in terms of processing speed.
Performance assessment of an insect-inspired target tracking model in background clutter
Biological visual systems provide excellent examples of robust target detection and tracking mechanisms capable of performing in a wide range of environments. Consequently, they have been sources of inspiration for many artificial vision algorithms. However, testing the robustness of target detection and tracking algorithms is a challenging task due to the diversity of environments for applications of these algorithms. Correlation between image quality metrics and model performance is one way to deal with this problem. Previously, we developed a target detection model inspired by the physiology of insects and implemented it in a closed-loop target tracking algorithm. In the current paper we vary the kinetics of a salience-enhancing element of our algorithm and test its effect on the robustness of our model against different natural images to find the relationship between model performance and background clutter.
Erratum to: Stage 1 registered report: metacognitive asymmetries in visual perception and Stage 2 registered report: metacognitive asymmetries in visual perception.
This corrects the articles with DOI: 10.1093/nc/niab005 and DOI: 10.1093/nc/niab025.
Paradoxical evidence weighting in confidence judgments for detection and discrimination.
When making discrimination decisions between two stimulus categories, subjective confidence judgments are more positively affected by evidence in support of a decision than negatively affected by evidence against it. Recent theoretical proposals suggest that this "positive evidence bias" may be due to observers adopting a detection-like strategy when rating their confidence, one that has functional benefits for metacognition in real-world settings where detectability and discriminability often go hand in hand. However, it is unknown whether, or how, this evidence-weighting asymmetry affects detection decisions about the presence or absence of a stimulus. In four experiments, we first successfully replicate a positive evidence bias in discrimination confidence. We then show that detection decisions and confidence ratings paradoxically suffer from an opposite "negative evidence bias" to negatively weigh evidence even when it is optimal to assign it a positive weight. We show that the two effects are uncorrelated and discuss our findings in relation to models that account for a positive evidence bias as emerging from a confidence-specific heuristic, and alternative models where decision and confidence are generated by the same, Bayes-rational process.
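The positive evidence bias can be captured with a toy asymmetric weighting rule: evidence for the chosen category counts more than evidence against it. The weights and function name below are illustrative assumptions, not parameters estimated in the experiments:

```python
def discrimination_confidence(e_for, e_against, w_pos=1.0, w_neg=0.5):
    """Toy 'positive evidence bias': evidence supporting the chosen
    category is weighted more heavily than evidence against it.
    With w_neg < w_pos, adding equal evidence to BOTH sides raises
    confidence, even though the evidence difference is unchanged."""
    return w_pos * e_for - w_neg * e_against

# Same evidence difference (2.0), but more total evidence yields
# higher confidence under the asymmetric weighting:
low = discrimination_confidence(3.0, 1.0)   # → 2.5
high = discrimination_confidence(5.0, 3.0)  # → 3.5
```

Setting `w_neg = w_pos` recovers the balanced (difference-based) rule, under which the two cases above would be rated equally confident; the paper's "negative evidence bias" in detection would correspond to a negative effective weight on the relevant evidence.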
Dissociating the Neural Correlates of Subjective Visibility from Those of Decision Confidence.
A key goal of consciousness science is identifying neural signatures of being aware versus unaware of simple stimuli. This is often investigated in the context of near-threshold detection, with reports of stimulus awareness being linked to heightened activation in a frontoparietal network. However, because of reports of stimulus presence typically being associated with higher confidence than reports of stimulus absence, these results could be explained by frontoparietal regions encoding stimulus visibility, decision confidence, or both. In an exploratory analysis, we leverage fMRI data from 35 human participants (20 females) to disentangle these possibilities. We first show that, whereas stimulus identity was best decoded from the visual cortex, stimulus visibility (presence vs absence) was best decoded from prefrontal regions. To control for effects of confidence, we then selectively sampled trials before decoding to equalize confidence distributions between absence and presence responses. This analysis revealed striking differences in the neural correlates of subjective visibility in PFC ROIs, depending on whether or not differences in confidence were controlled for. We interpret our findings as highlighting the importance of controlling for metacognitive aspects of the decision process in the search for neural correlates of visual awareness. SIGNIFICANCE STATEMENT: While much has been learned over the past two decades about the neural basis of visual awareness, the role of the PFC remains a topic of debate. By applying decoding analyses to functional brain imaging data, we show that prefrontal representations of subjective visibility are contaminated by neural correlates of decision confidence. We propose a new analysis method to control for these metacognitive aspects of awareness reports, and use it to reveal confidence-independent correlates of perceptual judgments in a subset of prefrontal areas.
A microfabricated fluorescence-activated cell sorter.
We have demonstrated a disposable microfabricated fluorescence-activated cell sorter (microFACS) for sorting various biological entities. Compared with conventional FACS machines, the microFACS provides higher sensitivity, no cross-contamination, and lower cost. We have used microFACS chips to obtain substantial enrichment of micron-sized fluorescent bead populations of differing colors. Furthermore, we have separated Escherichia coli cells expressing green fluorescent protein from a background of nonfluorescent E. coli cells and shown that the bacteria are viable after extraction from the sorting device. These sorters can function as stand-alone devices or as components of an integrated microanalytical chip.
Prokofiev was (almost) right: A cross-cultural investigation of auditory-conceptual associations in Peter and the Wolf.
Over recent decades, studies investigating cross-modal correspondences have documented the existence of a wide range of consistent cross-modal associations between simple auditory and visual stimuli or dimensions (e.g., pitch-lightness). Far fewer studies have investigated the association between complex and realistic auditory stimuli and visually presented concepts (e.g., musical excerpts-animals). Surprisingly, however, there is little evidence concerning the extent to which these associations are shared across cultures. To address this gap in the literature, two experiments using a set of stimuli based on Prokofiev's symphonic fairy tale Peter and the Wolf are reported. In Experiment 1, 293 participants from several countries and with very different language backgrounds rated the association between the musical excerpts, images and words representing the story's characters (namely, bird, duck, wolf, cat, and grandfather). The results revealed that participants tended to consistently associate the wolf and the bird with the corresponding musical excerpt, while the stimuli of other characters were not consistently matched across participants. Remarkably, neither the participants' cultural background, nor their musical expertise affected the ratings. In Experiment 2, 104 participants were invited to rate each stimulus on eight emotional features. The results revealed that the emotional profiles associated with the music and with the concept of the wolf and the bird were perceived as more consistent between observers than the emotional profiles associated with the music and the concept of the duck, the cat, and the grandpa. Taken together, these findings therefore suggest that certain auditory-conceptual associations are perceived consistently across cultures and may be mediated by emotional associations.
A microfabricated device for sizing and sorting DNA molecules.
We have demonstrated a microfabricated single-molecule DNA sizing device. This device does not depend on mobility to measure molecule size, is 100 times faster than pulsed-field gel electrophoresis, and has a resolution that improves with increasing DNA length. It also requires a million times less sample than pulsed-field gel electrophoresis and has comparable resolution for large molecules. Here we describe the fabrication and use of the single-molecule DNA sizing device for sizing and sorting DNA restriction digests and ladders spanning 2-200 kbp.
Assessing the visual appeal of real/AI-generated food images
A study designed to investigate the ability of individuals to differentiate between AI-generated and authentic food images, as well as the impact of disclosing this information on the consumer perception of the appeal of these images, is reported. Two online experiments were conducted with real and AI-generated food images stretching across the unprocessed, processed, and ultra-processed food continuum. Study 1 was designed to assess the accuracy with which people could identify AI-generated food images while Study 2 explored how the disclosure of an image's origin influenced the appeal of the depicted food. The participants in Study 1 found it very easy to recognize the AI-generated images, particularly in the case of ultra-processed foods. Notably, without disclosure, the AI-generated images were often preferred. At the same time, however, disclosing that a food image was genuine significantly boosted its appeal, whereas the revelation that it had been generated by AI mitigated this effect. These insights help to understand consumer psychology in the rapidly evolving digital food marketing landscape, highlighting the nuanced effects of technological advancements in AI image-generation on human perception.
Individual differences in sensitivity to taste-shape crossmodal correspondences
People generally associate curved and symmetrical shapes with sweetness, while associating angular and asymmetrical shapes with the other basic tastes (e.g., sour, bitter). However, these group-level taste-shape correspondences likely conceal important variation at an individual-level. We examined the extent to which individuals vary in their sensitivity to crossmodal correspondence between curvature and symmetry, on the one hand, and the five basic taste qualities (sweet, bitter, salty, sour, and umami), on the other. In Experiment 1, participants matched shapes (curved vs. angular, symmetrical vs. asymmetrical) and taste words. In Experiment 2, participants performed a similar task, though this time using actual tastants. Given that people differ in their hedonic experience of such shapes and tastes, we also measured participants’ liking for each taste and shape separately. The results replicate the general crossmodal correspondences between curved-sweet and symmetrical-sweet stimuli. Furthermore, participants tended to match sour and bitter tastes with angular and asymmetrical stimuli. However, these group-level taste-shape correspondences coexist alongside substantial variation at the level of the individual. While some participants consistently matched specific tastes with curved and symmetrical stimuli, others consistently matched these tastes with angular and asymmetrical stimuli, or else did not show these taste-shape correspondences. Liking for curved and symmetrical stimuli was higher than for angular and asymmetrical stimuli. However, participants also differed considerably in the extent to which these visual features affected their liking. Overall, our findings highlight the substantial individual differences that are associated with the degree to which people associate and like shapes and tastes.
An integrated microfabricated cell sorter.
We have developed an integrated microfabricated cell sorter using multilayer soft lithography. This integrated cell sorter is incorporated with various microfluidic functionalities, including peristaltic pumps, dampers, switch valves, and input and output wells, to perform cell sorting in a coordinated and automated fashion. The active volume of an actuated valve on this integrated cell sorter can be as small as 1 pL, and the volume of optical interrogation is approximately 100 fL. Different algorithms of cell manipulation, including cell trapping, were implemented in these devices. We have also demonstrated sorting and recovery of Escherichia coli cells on the chip.