Improving Explainable AI (XAI) Integration in High-Stakes Environments: Examining the Effect of Workload, Task Priority, and Training on XAI Use

Alami, Abdul Jawad, Systems Engineering - School of Engineering and Applied Science, University of Virginia
Riggs, Sara, EN-SIE, University of Virginia
The rapid evolution of Artificial Intelligence (AI) technologies presents both opportunities and challenges for decision-making in high-stakes environments. This dissertation explores the potential of Explainable AI (XAI)-based decision support systems in high-stakes applications such as healthcare and military operations. Through an observational case study and two controlled experiments, we investigate the impact of environmental factors on XAI use, the role of training in promoting effective human-AI teaming, and the challenges of integrating XAI systems into professional workflows.
Our research reveals that high workload and low task priority significantly impair task performance and the proper use of XAI systems. We demonstrate that training plays a crucial role in promoting appropriate use of XAI and reducing blind reliance on AI recommendations. Notably, we find that real-time XAI-based training can be as effective as traditional guided training in improving performance and calibrating trust in AI systems.
The dissertation provides critical insights into the challenges and opportunities of XAI system integration in high-stakes environments. We propose design considerations for XAI systems, including adaptivity to workload and task priority. Our findings contribute to the theoretical understanding of trust calibration in XAI systems and highlight the potential of XAI-based training methods. As AI systems become increasingly prevalent in critical decision-making processes, this research helps ensure effective human-AI collaboration that enhances safety, efficiency, and ethical decision-making.
PHD (Doctor of Philosophy)
Explainable AI (XAI), Human–AI collaboration, High-stakes environments, Automation bias, User-centered design, Human factors, Human–computer interaction
English
All rights reserved (no additional license for public reuse)
2024/08/01