Analyzing the Requirements of Trust in the Adoption of Artificial Intelligence in Military Operations

Saliba, Sami, School of Engineering and Applied Science, University of Virginia
Moore, Hunter, EN-SIE, University of Virginia
Foley, Rider, University of Virginia
AI is increasingly being used in life-critical control problems, where decisions must be made quickly and accurately under uncertain conditions. From autonomous vehicles to medical diagnostics and military applications, AI has the potential to improve efficiency and safety. However, a major challenge remains: trust. When AI operates as a black box, lacking transparency and explainability, users may be reluctant to rely on its recommendations, especially in high-stakes scenarios. My project addresses this issue in the context of troop movement and minefield traversal, where AI-enabled decision-making must be both effective and trusted.
To address this, I developed an AI‑enabled routing system that optimized both human and machine decision making for navigating hazardous terrain. The system accounted for the strengths and weaknesses of different mine detection methods, considering environmental factors like visibility, time of day, and terrain conditions. By integrating explainability into the AI’s decision‑making process, my approach ensured that operators could validate its recommendations, making the system more reliable and usable in military settings.
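The abstract does not specify how the routing system weighed detection methods against environmental conditions, so the following is only a minimal illustrative sketch of one way such a trade-off could be encoded: a grid search where each cell's traversal cost grows with the estimated mine risk and with the chance that the chosen detection method misses a mine under the current visibility. All names, values, and the shortest-path formulation are hypothetical assumptions for illustration, not the capstone implementation (which, per the keywords, involved reinforcement learning).

```python
import heapq

# Hypothetical detection reliabilities: probability that a given method
# flags a mine under a given visibility condition. Illustrative values only.
DETECTION_PROB = {
    ("visual", "day"): 0.85,
    ("visual", "night"): 0.40,
    ("thermal", "day"): 0.60,
    ("thermal", "night"): 0.80,
}

def cell_cost(mine_risk, method, visibility, penalty=25.0):
    """Traversal cost of one grid cell: unit distance plus the expected
    hazard of an undetected mine, scaled by a penalty weight."""
    p_detect = DETECTION_PROB[(method, visibility)]
    return 1.0 + penalty * mine_risk * (1.0 - p_detect)

def safest_route(risk_grid, start, goal, method, visibility):
    """Dijkstra search over a 2D grid of prior mine-risk estimates,
    returning the minimum expected-cost path from start to goal."""
    rows, cols = len(risk_grid), len(risk_grid[0])
    frontier = [(0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        cost, (r, c), path = heapq.heappop(frontier)
        if (r, c) == goal:
            return cost, path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                step = cell_cost(risk_grid[nr][nc], method, visibility)
                new_cost = cost + step
                if new_cost < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = new_cost
                    heapq.heappush(frontier, (new_cost, (nr, nc), path + [(nr, nc)]))
    return float("inf"), []

# Example: the same minefield is routed differently with a visual sensor
# at night than during the day, because detection reliability drops.
risk = [
    [0.0, 0.1, 0.9],
    [0.0, 0.8, 0.2],
    [0.0, 0.0, 0.0],
]
print(safest_route(risk, (0, 0), (2, 2), "visual", "day"))
print(safest_route(risk, (0, 0), (2, 2), "visual", "night"))
```

Because the environmental inputs enter the cost function explicitly, an operator can see why the planner avoided a given cell under given conditions, which is the kind of traceability the abstract describes as supporting operator validation.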
However, trust in AI is not just a technical challenge; it is also a social one. Soldiers, commanders, and policymakers needed to be confident that AI aligned with mission objectives and enhanced decision-making rather than replaced human judgment. Using the Social Construction of Technology (SCOT) framework, I analyzed how different groups perceived and shaped the adoption of AI in military settings. The factors influencing trust included system transparency, human oversight, and how well AI integrated into existing workflows.
For my STS research, I conducted a mixed‑methods analysis of the DEVCOM AI Trust Challenge, which evaluated AI‑driven military technologies. This included analyzing scoring criteria, expert feedback, and evaluation patterns to understand what design choices made AI more explainable and trustworthy. This helped identify patterns in how military stakeholders evaluated and prioritized trust in autonomous systems.
Findings showed that government judges prioritized explainability, oversight, and risk mitigation, while academia valued innovation and rigor. Industry emphasized practical implementation and scalability. These varying perspectives highlighted the need to balance innovation with transparency and oversight in AI system design.
Together, my capstone project and STS research offer a roadmap for creating AI-enabled systems that are technically robust and socially accepted. These insights extend beyond minefield navigation, informing the broader development of trusted AI for high-risk domains where human-AI collaboration must be both effective and accountable.
BS (Bachelor of Science)
Artificial Intelligence, Trust in AI, Reinforcement Learning
DEVCOM
English
All rights reserved (no additional license for public reuse)
2025/05/08