Trusted Artificial Intelligence in Life-Critical and High-Risk Environments; Framing Trust: A SCOT Analysis of Military AI and Public Perception in the U.S.

Author:
Palmer, Hannah, School of Engineering and Applied Science, University of Virginia
Advisors:
Moore, Hunter, EN-SIE, University of Virginia
Francisco, Pedro Augusto, EN-Engineering and Society, University of Virginia
Burkett, Matthew, EN-SIE, University of Virginia
Scherer, William, EN-SIE, University of Virginia
Abstract:

As artificial intelligence (AI) increasingly influences high-stakes environments like military operations, the question of trust has become more critical than ever. My Capstone project develops a systems engineering framework designed to build trust in AI-enabled systems. This research explores how transparency and risk quantification can improve the reliability of AI in life-critical decision-making. In parallel, my STS paper investigates how different social groups define trust in AI within the context of U.S. defense. Focusing on the Department of Defense's (DoD) Replicator Initiative, the paper examines the perspectives of these groups to demonstrate that trust in AI extends beyond technical reliability. Together, the two projects approach trust from technical and societal perspectives – one by creating measurable tools for trustworthy AI, and the other by analyzing how trust is socially constructed. The combined work emphasizes that AI development must account for both algorithmic performance and public perception.
My Capstone project provides a generalizable systems engineering framework designed to build trust in AI-enabled systems through the integration of explainable statistical metrics into traditionally black-box AI models. Two novel trust metrics are introduced: the Accuracy Avoidance Ratio (AAR) and Scanned Cell Confidence (SCC). The framework is built on an agent-based simulation that models realistic minefields with weather and terrain factors, compares human and AI image classifiers, uses reinforcement learning to guide UAV scanning, and dynamically routes ground vehicles and troops with an adaptive A* algorithm driven by real-time confidence data. By quantifying risk at each step, the framework transforms a fast but opaque system into a transparent, auditable tool that helps build operator trust in autonomous minefield navigation.
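To make the routing step concrete, the sketch below shows one way a confidence-weighted A* search could work in Python: each grid cell carries a scan-confidence value, and cells the UAV has scanned with low confidence incur a higher traversal cost, so the planner prefers well-scanned, low-risk routes. The function name astar_confidence, the risk_weight parameter, and the specific cost form are illustrative assumptions made for this summary, not the Capstone's actual implementation.

import heapq

def astar_confidence(grid_conf, start, goal, risk_weight=5.0):
    """Route across a grid of scan-confidence values in [0, 1].

    A confidence of 1.0 means the cell is confidently clear; low-confidence
    cells are penalized so the path favors well-scanned terrain. The cost
    form below is an illustrative assumption, not the thesis's metric.
    """
    rows, cols = len(grid_conf), len(grid_conf[0])

    def heuristic(a, b):
        # Manhattan distance is admissible for 4-connected moves costing >= 1.
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    frontier = [(heuristic(start, goal), 0.0, start, [start])]
    best_g = {start: 0.0}

    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                # Base move cost plus a penalty for entering low-confidence cells.
                ng = g + 1.0 + risk_weight * (1.0 - grid_conf[nr][nc])
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(
                        frontier,
                        (ng + heuristic((nr, nc), goal), ng, (nr, nc), path + [(nr, nc)]),
                    )
    return None, float("inf")

# Example: a small field where scanning left high confidence along the edges.
confidence = [
    [0.9, 0.8, 0.2, 0.9],
    [0.9, 0.3, 0.1, 0.8],
    [0.9, 0.9, 0.7, 0.8],
    [0.9, 0.9, 0.9, 0.9],
]
path, cost = astar_confidence(confidence, (0, 0), (3, 3))
print(path, round(cost, 2))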
Four baseline methods were evaluated: AI-only and human-only detectors, each tested with single-path and multi-path scan strategies. The integration of reinforcement learning (RL) proved especially effective. The RL agent achieved the best (lowest) mines-per-step rate at 17.4%, compared with 21.63% for the worst-performing baseline, AI multi-path. While the RL agent did not have the lowest average number of mine encounters, it demonstrated the lowest variance (0.30), suggesting more consistent performance. It also outperformed all baselines on the trust metrics, with the lowest AAR (41.37%) and the highest SCC (39.6%), indicating that it both avoided low-confidence areas and prioritized valuable scan targets. However, this came at a cost: the RL approach took roughly three times longer than the fastest baseline (AI single-path), reinforcing the tradeoff between speed and trust.
In parallel with the Capstone, my STS paper investigates the question: What are the narratives surrounding the use of AI in U.S. defense, and how do they define trust? This question matters because public narratives have historically shaped the trajectory of military technologies in the U.S. In a democracy where reliability and accountability are paramount, trust in AI becomes not merely a technical goal but a societal requirement. Using the Social Construction of Technology (SCOT) framework, the paper analyzes how different social groups – specifically military leaders and the general public – interpret AI's role in defense. To narrow the scope of the analysis, the paper focuses on the DoD's Replicator Initiative and draws on a range of public statements.
The paper finds that military professionals generally define trust in AI through efficiency and strategic advantage. In contrast, the public emphasizes transparency and raises concerns about the dehumanization of warfare. While both groups support human oversight, their definitions of trust diverge: leaders prioritize operational success, while the public demands ethical constraints. The paper concludes that trust in military AI is socially constructed and shaped by these divergent viewpoints. Greater transparency and ethical clarity may help bridge this divide and ensure that AI development aligns with both defense goals and public values.

Degree:
BS (Bachelor of Science)
Keywords:
military, artificial intelligence, trust in AI, SCOT
Notes:

School of Engineering and Applied Science

Bachelor of Science in Systems Engineering

Technical Advisors: Hunter Moore, Matthew Burkett, William Scherer

STS Advisor: Pedro Francisco

Technical Team Members: Justin Abel, Stephen Durham, Andrew Evans, Sami Saliba

Language:
English
Rights:
All rights reserved (no additional license for public reuse)
Issued Date:
2025/05/08