Digitally augmented, parent-led CBT versus treatment as usual for child anxiety problems in child mental health services in England and Northern Ireland: a pragmatic, non-inferiority, clinical effectiveness and cost-effectiveness randomised controlled trial.
BACKGROUND: Anxiety problems are common in children, yet few affected children access evidence-based treatment. Digitally augmented psychological therapies bring potential to increase availability of effective help for children with mental health problems. This study aimed to establish whether therapist-supported, digitally augmented, parent-led cognitive behavioural therapy (CBT) could increase the efficiency of treatment without compromising clinical effectiveness and acceptability. METHODS: We conducted a pragmatic, unblinded, two-arm, multisite, randomised controlled non-inferiority trial to evaluate the clinical effectiveness and cost-effectiveness of therapist-supported, parent-led CBT using the Online Support and Intervention (OSI) for child anxiety platform compared with treatment as usual for child (aged 5-12 years) anxiety problems in 34 Child and Adolescent Mental Health Services in England and Northern Ireland. We examined acceptability of OSI plus therapist support via qualitative interviews. Participants were randomly assigned (1:1) to OSI plus therapist support or treatment as usual, minimised by child age, gender, service type, and baseline child anxiety interference. Outcomes were assessed at week 14 and week 26 after randomisation. The primary clinical outcome was parent-reported interference caused by child anxiety at the week 26 assessment, using the Child Anxiety Impact Scale-parent report (CAIS-P). The primary measure of health economic effect was quality-adjusted life-years (QALYs). Outcome analyses were conducted blind in the intention-to-treat (ITT) population with a standardised non-inferiority margin of 0·33 for clinical analyses. The trial was registered with ISRCTN, 12890382. FINDINGS: Between Dec 5, 2020, and Aug 3, 2022, 706 families (706 children and their parents or carers) were referred to the study. 444 families were enrolled.
Parents reported 255 (58%) child participants' gender to be female, 184 (41%) male, three (<1%) other, and one (<1%) preferred not to report their child's gender. 400 (90%) children were White and the mean age was 9·20 years (SD 1·79). In the treatment as usual group, 85% of families for whom clinicians provided information received CBT. OSI plus therapist support was non-inferior for parent-reported anxiety interference on the CAIS-P (SMD 0·01, 95% CI -0·15 to 0·17; p<0·0001) and all secondary outcomes. The mean difference in QALYs across trial arms approximated to zero, and OSI plus therapist support was associated with lower costs than treatment as usual. OSI plus therapist support was likely to be cost effective under certain scenarios, but uncertainty was high. Acceptability of OSI plus therapist support was good. No serious adverse events were reported. INTERPRETATION: The digitally augmented intervention brought promising savings without compromising outcomes and as such presents a valuable tool for increasing access to psychological therapies and meeting the demand for treatment of child anxiety problems. FUNDING: Department for Health and Social Care and United Kingdom Research and Innovation Research Grant, National Institute for Health and Care Research (NIHR) Policy Research Programme, Oxford and Thames Valley NIHR Applied Research Collaboration, Oxford Health NIHR Biomedical Research Centre.
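The non-inferiority conclusion above follows directly from the reported figures: the treatment is declared non-inferior when the upper bound of the 95% CI for the standardised mean difference (SMD) lies below the pre-specified margin. A minimal illustrative sketch of that decision rule (not the trial's analysis code; the function name is ours):

```python
# Illustrative non-inferiority decision rule, not the trial's analysis code.
# Non-inferiority is demonstrated when the upper bound of the 95% CI for
# the standardised mean difference (SMD) falls below the margin.

def is_non_inferior(smd_upper_ci: float, margin: float = 0.33) -> bool:
    """True if the upper CI bound for the SMD is below the non-inferiority margin."""
    return smd_upper_ci < margin

# Figures reported in the abstract: SMD 0.01, 95% CI -0.15 to 0.17, margin 0.33.
print(is_non_inferior(0.17))  # True: upper bound 0.17 < margin 0.33
```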
Sleep and motor learning in stroke (SMiLES): a longitudinal study investigating sleep-dependent consolidation of motor sequence learning in the context of recovery after stroke.
INTRODUCTION: There is growing evidence that sleep is disrupted after stroke, with worse sleep relating to poorer motor outcomes. It is also widely acknowledged that consolidation of motor learning, a critical component of poststroke recovery, is sleep-dependent. However, whether the relationship between disrupted sleep and poor outcomes after stroke reflects direct interference with sleep-dependent motor consolidation processes is currently unknown. Therefore, the aim of the present study is to understand whether measures of motor consolidation mediate the relationship between sleep and clinical motor outcomes post stroke. METHODS AND ANALYSIS: We will conduct a longitudinal observational study of up to 150 participants diagnosed with stroke affecting the upper limb. Participants will be recruited and assessed within 7 days of their stroke and followed up at approximately 1 and 6 months. The primary objective of the study is to determine whether sleep in the subacute phase of recovery explains the variability in upper limb motor outcomes after stroke (over and above predicted recovery potential from the Predict Recovery Potential algorithm) and whether this relationship depends on consolidation of motor learning. We will also test whether motor consolidation mediates the relationship between sleep and whole-body clinical motor outcomes, and whether motor consolidation is associated with specific electrophysiological sleep signals and with sleep alterations during subacute recovery. ETHICS AND DISSEMINATION: This trial has received Health Research Authority, Health and Care Research Wales, and National Research Ethics Service approval (IRAS: 304135; REC: 22/LO/0353).
The results of this trial will help to enhance our understanding of the role of sleep in recovery of motor function after stroke and will be disseminated via presentations at scientific conferences, peer-reviewed publication, public engagement events, stakeholder organisations and other forms of media where appropriate. TRIAL REGISTRATION NUMBER: ClinicalTrials.gov: NCT05746260, registered on 27 February 2023.
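The mediation hypothesis in this protocol (sleep influencing motor outcome via consolidation of motor learning) can be illustrated with the classic product-of-coefficients approach. The sketch below uses synthetic data; all variable names and effect sizes are illustrative assumptions, not study results:

```python
# Toy sketch of the mediation logic in the protocol above:
# sleep -> motor consolidation -> motor outcome, using the
# product-of-coefficients approach on synthetic data. Variable
# names and effect sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n = 150  # matches the planned sample size

sleep = rng.normal(size=n)
consolidation = 0.6 * sleep + rng.normal(scale=0.8, size=n)                  # path a
outcome = 0.5 * consolidation + 0.1 * sleep + rng.normal(scale=0.8, size=n)  # paths b, c'

def slope(x, y):
    """OLS slope of y on x (single predictor)."""
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

a = slope(sleep, consolidation)  # sleep -> mediator

# Path b: mediator -> outcome, controlling for sleep (two-predictor OLS).
design = np.column_stack([np.ones(n), consolidation, sleep])
beta, *_ = np.linalg.lstsq(design, outcome, rcond=None)
b = beta[1]

indirect = a * b  # indirect (mediated) effect of sleep via consolidation
print(round(indirect, 2))
```

A formal analysis would bootstrap a confidence interval for the indirect effect rather than relying on the point estimate alone.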
Fatigue predicts quality of life after leucine-rich glioma-inactivated 1-antibody encephalitis.
Patient-reported quality-of-life (QoL) and carer impacts have not been reported after leucine-rich glioma-inactivated 1-antibody encephalitis (LGI1-Ab-E). From 60 patients, 85% (51 out of 60) showed at least one abnormal score across QoL assessments and 11 multimodal validated questionnaires. Compared to the premorbid state, QoL significantly deteriorated (p
The time-course of feature-based attention effects dissociated from temporal expectation and target-related processes
Selective attention prioritises relevant information amongst competing sensory input. Time-resolved electrophysiological studies have shown stronger representation of attended compared to unattended stimuli, which has been interpreted as an effect of attention on information coding. However, because attention is often manipulated by making only the attended stimulus a target to be remembered and/or responded to, many reported attention effects have been confounded with target-related processes such as visual short-term memory or decision-making. In addition, attention effects could be influenced by temporal expectation about when something is likely to happen. The aim of this study was to investigate the dynamic effect of attention on visual processing using multivariate pattern analysis of electroencephalography (EEG) data, while (1) controlling for target-related confounds, and (2) directly investigating the influence of temporal expectation. Participants viewed rapid sequences of overlaid oriented grating pairs while detecting a “target” grating of a particular orientation. We manipulated attention (one grating was attended and the other ignored, cued by colour) and temporal expectation (stimulus onset timing was either predictable or not). We controlled for target-related processing confounds by analysing only non-target trials. Both attended and ignored gratings were initially coded equally in the pattern of responses across EEG sensors. An effect of attention, with preferential coding of the attended stimulus, emerged approximately 230 ms after stimulus onset. This attention effect occurred even when controlling for target-related processing confounds, and regardless of stimulus onset expectation. These results provide insight into the effect of feature-based attention on the dynamic processing of competing visual information.
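The time-resolved decoding approach described above can be sketched as a classifier trained and cross-validated separately at each timepoint. This is a minimal illustration on synthetic data; the trial counts, sensor counts, timings, and effect size are assumptions, not the study's parameters:

```python
# Minimal sketch of time-resolved multivariate decoding (MVPA) on
# synthetic EEG-like data: trials x sensors x timepoints. A class-
# dependent pattern is injected from timepoint 25 onward, loosely
# mimicking an attention effect emerging after stimulus onset.
# All parameters here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 80, 32, 50
X = rng.normal(size=(n_trials, n_sensors, n_times))
y = rng.integers(0, 2, size=n_trials)

# Inject a class-dependent sensor pattern in the late time window.
pattern = rng.normal(size=n_sensors)
X[y == 1, :, 25:] += 0.5 * pattern[:, None]

# Train and cross-validate a classifier separately at each timepoint.
accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])

# Decoding hovers near chance (0.5) early and rises once the
# class-dependent signal is present.
print(accuracy[:25].mean(), accuracy[25:].mean())
```

In practice such analyses are run on preprocessed epochs (e.g. via MNE-Python) with statistical correction across timepoints; this sketch only shows the per-timepoint decoding loop.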
The neural dynamics underlying prioritisation of task-relevant information
The human brain prioritises relevant sensory information to perform different tasks. Enhancement of task-relevant information requires flexible allocation of attentional resources, but how this is implemented in the brain remains unclear. We investigated how attentional mechanisms operate in situations where multiple stimuli are presented in the same location and at the same time. In two experiments, participants performed a challenging two-back task on different types of visual stimuli that were presented simultaneously and superimposed over each other. Using electroencephalography and multivariate decoding, we analysed the effect of attention on the neural responses to each individual stimulus. Whole-brain neural responses contained considerable information about both the attended and unattended stimuli, even though they were presented simultaneously and represented in overlapping receptive fields. As expected, attention increased the decodability of stimulus-related information contained in the neural responses, but this effect was evident earlier for stimuli that were presented at smaller sizes. Our results show that early neural responses to stimuli in fast-changing displays contain remarkable information about the sensory environment but are also modulated by attention in a manner dependent on perceptual characteristics of the relevant stimuli. Stimuli, code, and data for this study can be found at https://osf.io/7zhwp/.
Untangling featural and conceptual object representations
How are visual inputs transformed into conceptual representations by the human visual system? The contents of human perception, such as objects presented on a visual display, can reliably be decoded from voxel activation patterns in fMRI, and from evoked sensor activations in MEG and EEG. A prevailing question is the extent to which brain activation associated with object categories is due to statistical regularities of visual features within object categories. Here, we assessed the contribution of mid-level features to conceptual category decoding using EEG and a novel fast periodic decoding paradigm. Our study used a stimulus set consisting of intact objects from the animate (e.g., fish) and inanimate categories (e.g., chair) and scrambled versions of the same objects that were unrecognizable and preserved their visual features (Long et al., 2018). By presenting the images at different periodic rates, we biased processing to different levels of the visual hierarchy. We found that scrambled objects and their intact counterparts elicited similar patterns of activation, which could be used to decode the conceptual category (animate or inanimate), even for the unrecognizable scrambled objects. Animacy decoding for the scrambled objects, however, was only possible at the slowest periodic presentation rate. Animacy decoding for intact objects was faster, more robust, and could be achieved at faster presentation rates. Our results confirm that the mid-level visual features preserved in the scrambled objects contribute to animacy decoding, but also demonstrate that the dynamics vary markedly for intact versus scrambled objects. Our findings suggest a complex interplay between visual feature coding and categorical representations that is mediated by the visual system's capacity to use image features to resolve a recognisable object.
Mapping the dynamics of visual feature coding: Insights into perception and integration
The basic computations performed in the human early visual cortex are the foundation for visual perception. While we know a lot about these computations, a key missing piece is how the coding of visual features relates to our perception of the environment. To investigate visual feature coding, feature interactions, and their relationship to human perception, we examined neural responses and perceptual similarity judgements for a large set of visual stimuli that varied parametrically along four feature dimensions. We measured neural responses using electroencephalography (N = 16) to 256 grating stimuli that varied in orientation, spatial frequency, contrast, and colour. We then mapped the response profiles of the neural coding of each visual feature and their interactions, and related these to independently obtained behavioural judgements of stimulus similarity. The results confirmed fundamental principles of feature coding in the visual system, such that all four features were processed simultaneously but differed in their dynamics, and there was distinctive conjunction coding for different combinations of features in the neural responses. Importantly, modelling of the behaviour revealed that every stimulus feature contributed to perceptual judgements, despite the untargeted nature of the behavioural task. Further, the relationship between neural coding and behaviour was evident from initial processing stages, signifying that the fundamental features, not just their interactions, contribute to perception. This study highlights the importance of understanding how feature coding progresses through the visual hierarchy and the relationship between different stages of processing and perception.
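Relating neural coding to behavioural similarity judgements, as in the study above, is commonly done by correlating representational dissimilarity matrices (RDMs). A minimal sketch on synthetic data (all sizes, names, and the noise level are illustrative, not the authors' pipeline):

```python
# Illustrative representational-similarity sketch: correlate a "neural"
# dissimilarity matrix with behavioural dissimilarities. Synthetic data;
# all parameters are assumptions, not the study's values.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_stimuli, n_features = 16, 40

# Synthetic "neural" response pattern for each stimulus.
neural = rng.normal(size=(n_stimuli, n_features))

# Neural representational dissimilarity matrix: pairwise Euclidean distances.
diff = neural[:, None, :] - neural[None, :, :]
neural_rdm = np.sqrt((diff ** 2).sum(axis=-1))

# Synthetic behavioural dissimilarities: the neural structure plus noise.
upper = np.triu_indices(n_stimuli, k=1)
behav = neural_rdm[upper] + rng.normal(scale=0.5, size=upper[0].size)

# Spearman correlation between neural and behavioural dissimilarities
# (computed over the upper triangle to avoid duplicate pairs).
rho, p = spearmanr(neural_rdm[upper], behav)
print(round(rho, 2))
```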
Overlapping neural representations for the position of visible and imagined objects
Humans can covertly track the position of an object, even if the object is temporarily occluded. What are the neural mechanisms underlying our capacity to track moving objects when there is no physical stimulus for the brain to track? One possibility is that the brain ‘fills-in’ information about imagined objects using internally generated representations similar to those generated by feed-forward perceptual mechanisms. Alternatively, the brain might deploy a higher order mechanism, for example using an object tracking model that integrates visual signals and motion dynamics. In the present study, we used EEG and time-resolved multivariate pattern analyses to investigate the spatial processing of visible and imagined objects. Participants tracked an object that moved in discrete steps around fixation, occupying six consecutive locations. They were asked to imagine that the object continued on the same trajectory after it disappeared and move their attention to the corresponding positions. Time-resolved decoding of EEG data revealed that the location of the visible stimuli could be decoded shortly after image onset, consistent with early retinotopic visual processes. For processing of unseen/imagined positions, the patterns of neural activity resembled stimulus-driven mid-level visual processes, but were detected earlier than perceptual mechanisms, implicating an anticipatory and more variable tracking mechanism. Encoding models revealed that spatial representations were much weaker for imagined than visible stimuli. Monitoring the position of imagined objects thus utilises similar perceptual and attentional processes as monitoring objects that are actually present, but with different temporal dynamics. These results indicate that internally generated representations rely on top-down processes, and their timing is influenced by the predictability of the stimulus.
Decoding images in the mind’s eye: The temporal dynamics of visual imagery
Mental imagery is the ability to generate images in the mind in the absence of sensory input. Both perceptual visual processing and internally generated imagery engage large, overlapping networks of brain regions. However, it is unclear whether they are characterized by similar temporal dynamics. Recent magnetoencephalography work has shown that object category information was decodable from brain activity during mental imagery, but the timing was delayed relative to perception. The current study builds on these findings, using electroencephalography to investigate the dynamics of mental imagery. Sixteen participants viewed two images of the Sydney Harbour Bridge and two images of Santa Claus. On each trial, they viewed a sequence of the four images and were asked to imagine one of them, which was cued retroactively by its temporal location in the sequence. Time-resolved multivariate pattern analysis was used to decode the viewed and imagined stimuli. Although category and exemplar information was decodable for viewed stimuli, there were no informative patterns of activity during mental imagery. The current findings suggest stimulus complexity, task design and individual differences may influence the ability to successfully decode imagined images. We discuss the implications of these results in the context of prior findings of mental imagery.