Trusted Artificial Intelligence in Life-Critical and High-Risk Environments; Destabilized Trust: AI Misinformation, Military Integration, and Actor-Network Theory

Author:
Durham, Stephen, School of Engineering and Applied Science, University of Virginia
Advisors:
Moore, Hunter, EN-SIE, University of Virginia
Burkett, Matthew, EN-SIE, University of Virginia
Scherer, William, EN-SIE, University of Virginia
Laugelli, Benjamin, EN-Engineering and Society, University of Virginia
Abstract:

My technical work and STS research are connected primarily through the technology of artificial intelligence, specifically focusing on trust in these complex systems. Artificial intelligence is increasingly used in autonomous decision-making and is integrated into existing systems—making it central to both my technical project and STS research paper. In my technical project, my team and I constructed a network of human and non-human actors to build a resilient system that uses AI in a trustworthy manner for life-critical scenarios. To learn how successful technology networks form, my STS research examines the actor-network that OpenAI developed to achieve the goal of trusted artificial intelligence. Although the two projects approach AI differently, both follow the theme of trust when AI is embedded in complex systems.
My technical work focuses on ensuring trust in artificial intelligence within complex systems, particularly as AI becomes more integrated into high-risk environments. My capstone team developed a generalizable systems engineering framework for building trust in autonomous systems, demonstrated through the case of minefield traversal. By integrating explainable statistical models into reinforcement learning (RL), our approach evaluates subsystem accuracy and uncertainty in real time, enhancing overall reliability. Mine detection relies on two independent, imperfect predictors—an AI model and a human evaluator—each affected differently by environmental conditions. Statistical methods quantify prediction reliability, while RL optimizes decisions under uncertainty. Embedding explainable statistics into RL ensures interpretable outcomes, robust risk-based monitoring, and adaptability to changing operational parameters. We tested this framework using agent-based simulations, where AI and human systems collaboratively navigated uncertain minefields. Results showed improved decision transparency, AI adaptability, and real-time risk management. Designed for generalizability, the framework offers a scalable method for deploying reliable autonomous systems across safety-critical domains.
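The abstract does not spell out the fusion method, so the following is only an illustrative sketch of the idea described above: two independent, imperfect predictors (an AI model and a human evaluator) each report a mine-presence probability, the two estimates are combined under a conditional-independence assumption, and a simple risk threshold drives the traversal decision. Function names, the prior, and the threshold are hypothetical, not taken from the capstone framework.

```python
def fuse_detections(p_ai: float, p_human: float, prior: float = 0.05) -> float:
    """Combine two independent mine-presence probability estimates
    via naive-Bayes fusion against a shared prior (illustrative only)."""
    def likelihood_ratio(p: float) -> float:
        # How much this predictor's estimate shifts the odds relative to the prior.
        return (p / (1 - p)) / (prior / (1 - prior))

    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio(p_ai) * likelihood_ratio(p_human)
    return posterior_odds / (1 + posterior_odds)

def traverse_decision(p_mine: float, risk_threshold: float = 0.10) -> str:
    """Risk-based rule: proceed only when the fused mine risk is acceptable."""
    return "proceed" if p_mine < risk_threshold else "reroute"

# Two moderately uncertain positive reports reinforce each other:
fused = fuse_detections(p_ai=0.5, p_human=0.5)
decision = traverse_decision(fused)
```

In a full framework such as the one the abstract describes, the fixed prior and threshold would instead be learned or adapted by the reinforcement-learning policy as environmental conditions change.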
My STS research also explores trust in AI, but from a socio-technical lens. Using Actor-Network Theory (ANT), I analyze how trust in AI can be undermined by misaligned relationships between the network builder and actors within the network. I claim that the misuse of AI during the 2024 U.S. election resulted from breakdowns in these relationships. OpenAI, acting as the network builder, unintentionally enrolled social actors with conflicting interests and conceptual actors shaped by flawed assumptions—such as believing users would not weaponize the technology. These misalignments ultimately contradicted OpenAI’s mission to benefit humanity and contributed to public mistrust. My research uses this case to highlight the importance of anticipating both social and technical risks when designing AI systems.
Working on both projects simultaneously created value for each. The technical project deepened my understanding of AI’s capabilities and how it can be responsibly integrated into complex systems. This perspective informed my STS research, helping me grasp the technical roles within the actor-network. Meanwhile, the STS work emphasized that social factors are just as critical as technical functionality in determining a system’s success. It reinforced my commitment to becoming an inclusive engineer who actively considers potential failure points. In summary, these projects allowed me to analyze AI from both technical and socio-technical perspectives, with each enhancing the other.

Degree:
BS (Bachelor of Science)
Keywords:
Systems Engineering, Trust in Artificial Intelligence, Trust, Artificial Intelligence
Notes:

School of Engineering and Applied Science

Bachelor of Science in Systems Engineering

Technical Advisors: Hunter Moore, Matthew Burkett, William Scherer

STS Advisor: Benjamin Laugelli

Technical Team Members: Sami Saliba, Justin Abel, Andrew Evans, Hannah Palmer

Language:
English
Rights:
All rights reserved (no additional license for public reuse)
Issued Date:
2025/05/09