Attention & Cognitive Control Lab (Yeung Lab)
- AFIT PhD Student
Decision Confidence in Human-Machine Teaming
Human-machine teaming is increasingly critical across modern decision-making contexts. However, in settings as diverse as political forecasting, school admissions, and even day-to-day GPS navigation, human operators have been shown to down-weight information provided by intelligent machines such as expert systems and algorithm-based predictions. The causes and prevalence of this algorithm aversion remain poorly understood: many explanations have been proposed, but few have empirical support. The core motivating assumption of my research is that people's trust in algorithms follows the principles of trust that underpin human social interaction. As such, insensitivity to the mechanisms of interpersonal trust will undermine the effectiveness of human-machine teaming.
My research aims to identify causes of algorithm aversion by leveraging insights from research on trust and influence in human social and group decision making. Specifically, emerging evidence indicates that effective communication of decision confidence is critical to the evolution of trust and the exercise of influence in group decisions.
EEG and fMRI studies will provide crucial insights into the mechanisms of trust and influence formation. EEG will pinpoint the locus of algorithm aversion in the decision making process, determining whether it primarily affects the attention paid to advice from human versus algorithmic sources, or rather affects the use of this advice in adapting behaviour.
fMRI studies will pinpoint the underlying neural mechanisms of algorithm aversion. Processing of the valence and impact of external information depends on a well-characterised network that includes regions in medial prefrontal cortex and basal ganglia. Evidence suggests that human versus non-social sources of information may be processed in separable regions of medial prefrontal cortex but are combined in a common decision making pathway.
This research will leverage these insights from human social decision making to test the hypothesis that algorithm aversion arises through an early separation in processing, but can be mitigated through the addition of confidence cues that lead human and algorithmic advice to be handled by common neural mechanisms.