Trusted Artificial Intelligence in Life-Critical and High-Risk Environments; The Regulatory and Ethical History of Landmine Usage and Parallels in Autonomous Control
Abel, Justin, School of Engineering and Applied Science, University of Virginia
Moore, Hunter, EN-SIE, University of Virginia
Seabrook, Bryn, EN-Engineering and Society, University of Virginia
The themes in this Thesis Portfolio, both technical and social, are predicated on the rapid advancement of autonomous and artificially intelligent systems in recent years. The technical Capstone Project, “Trusted Artificial Intelligence in Life-Critical and High-Risk Environments,” explores methods for designing and operating intelligent, autonomous systems under uncertain conditions in military scenarios. Specifically, the technical research focused on designing machine learning and prediction models to optimally route troops through a minefield, maximizing safety and efficiency. The sociotechnical STS Research Paper, “The Regulatory and Ethical History of Landmine Usage and Parallels in Autonomous Control,” focuses on the ethics of historical anti-personnel (AP) landmine usage. In particular, it analyzes parallels between the ethics and regulations of future autonomous military systems and the value systems and restrictions that govern AP landmines today. The legacy of landmine warfare is a topic of interest because it raises the morality of delegating life-and-death decisions to non-human systems, a question that applies to both the technical and social problems explored in this thesis. In the technical thesis, an intelligent reinforcement learning (RL) agent is trained to maximize a reward function and, given various environmental conditions, to recommend minefield traversal actions to troops and machinery. This is ethically challenging because artificial intelligence algorithms programmed to mathematically maximize objective functions may fail to capture human conditions and considerations, putting civilians and soldiers at risk. The same is true of AP landmines, one of the focuses of the sociotechnical analysis, which detonate indiscriminately in response to whatever physical stimulus triggers the device. Effectively, landmines are mechanically programmed in the same way that algorithms and autonomous weapons systems are mathematically programmed.
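As a concrete illustration of how narrow such an objective function can be, consider a minimal, hypothetical reward function for a grid-world minefield. The weights and structure below are illustrative assumptions, not values or code from the Capstone Project:

```python
# Hypothetical reward weights; illustrative assumptions only, not values
# from the Capstone Project.
MINE_PENALTY = -100.0   # large cost for detonating a mine
STEP_PENALTY = -1.0     # small per-step cost, encouraging efficient routes
GOAL_REWARD = 50.0      # payoff for reaching the far side of the field


def reward(hit_mine: bool, reached_goal: bool) -> float:
    """Reward for one traversal step in a simulated grid-world minefield."""
    r = STEP_PENALTY
    if hit_mine:
        r += MINE_PENALTY
    if reached_goal:
        r += GOAL_REWARD
    return r
```

Anything a designer leaves out of such a function, such as the presence of civilians or a soldier's situational judgment, is simply invisible to the optimizer, which is precisely the ethical risk described above.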
The Capstone Project in this thesis portfolio is titled “Trusted Artificial Intelligence in Life-Critical and High-Risk Environments.” With the growing integration of artificial intelligence into autonomous decision-making, ensuring trust in these complex systems is crucial, particularly in life-critical applications where failures can be catastrophic. Existing AI-driven autonomous technology often operates under high uncertainty due to its black-box nature, demanding greater accountability, reliability, and transparency for mission success. The technical paper proposes a generalizable systems engineering framework for building trust in autonomous systems, demonstrated in the context of minefield traversal, a life-critical control problem. By integrating explainable statistical models into reinforcement learning, this approach evaluates subsystem accuracy and uncertainty in real time, significantly enhancing reliability. Mine detection is supported by two independent, imperfect predictors: an AI model and a human evaluator, each affected differently by varying environmental conditions. Statistical methods quantify prediction reliability, while RL optimizes decisions under uncertainty. Embedding explainable statistics into RL decision-making ensures interpretable outcomes, robust risk-based monitoring, and adaptability to changing operational parameters. This approach was tested through an agent-based simulation in which AI and human detection systems collaboratively navigated uncertain minefields. Results indicate improved decision transparency, AI adaptability, and real-time risk management. Explicitly designed for generalizability, the framework offers a scalable method for establishing reliable autonomous systems across safety-critical domains. Future work will refine trust metrics and explore applications in broader contexts.
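The two-predictor fusion step can be sketched briefly. Assuming, purely for illustration, that each predictor's true-positive and false-positive rates are known and that the two reports are conditionally independent given the ground truth, a naive Bayes update yields the posterior mine probability on which a routing decision could be based. All rates below are hypothetical placeholders, not figures from the study:

```python
# A minimal, hypothetical sketch of fusing two independent, imperfect
# detectors (an AI model and a human evaluator) into one posterior mine
# probability via a naive Bayes update. All rates are illustrative.

def fuse_detections(prior: float,
                    ai_alarm: bool, ai_tpr: float, ai_fpr: float,
                    human_alarm: bool, human_tpr: float, human_fpr: float) -> float:
    """Posterior P(mine) given two conditionally independent detector reports."""

    def likelihood(alarm: bool, tpr: float, fpr: float) -> tuple[float, float]:
        # Returns (P(report | mine), P(report | no mine)) for this detector.
        if alarm:
            return tpr, fpr
        return 1.0 - tpr, 1.0 - fpr

    l_mine, l_clear = 1.0, 1.0
    for alarm, tpr, fpr in ((ai_alarm, ai_tpr, ai_fpr),
                            (human_alarm, human_tpr, human_fpr)):
        lm, lc = likelihood(alarm, tpr, fpr)
        l_mine *= lm
        l_clear *= lc

    evidence = prior * l_mine + (1.0 - prior) * l_clear
    return prior * l_mine / evidence


# Example: both detectors alarm on a cell with a 10% prior mine rate.
p = fuse_detections(prior=0.10,
                    ai_alarm=True, ai_tpr=0.90, ai_fpr=0.15,
                    human_alarm=True, human_tpr=0.75, human_fpr=0.10)
print(f"Posterior mine probability: {p:.3f}")  # roughly 0.833
```

The framework described in the paper goes further by tracking how environmental conditions shift each predictor's reliability in real time, which a static sketch like this omits.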
The STS Research Paper completed in this thesis portfolio is titled “The Regulatory and Ethical History of Landmine Usage and Parallels in Autonomous Control.” As the development of autonomous military systems accelerates, understanding the regulatory and ethical frameworks governing their use is imperative. The paper draws lessons from the historical regulations and moral considerations surrounding anti-personnel landmines to inform contemporary discourse on autonomous weapons. It answers the question: what lessons about the future of regulatory and moral mandates surrounding autonomous military systems can be learned from the historical regulations and ethics addressing anti-personnel landmines? These technologies are linked by their capacity to make "decisions" based on either mechanical or algorithmic programming aimed at maximizing objective functions. Utilizing Actor-Network Theory, both anti-personnel landmines and autonomous military systems are conceptualized as non-human actors operating within networks of regulatory and ethical considerations. By analyzing the evolution of international laws, such as the Ottawa Treaty, alongside the ethical debates that emerged in response to the humanitarian impact of anti-personnel landmines, the sociotechnical analysis identifies key themes relevant to the regulation of autonomous military technologies. The dynamics of past anti-personnel landmine regulatory frameworks provide an analogous structure for predicting the trajectory of autonomous weapon regulations, offering insights into potential social and military implications. This work contributes to science and engineering by highlighting the intricate relationship between technologies, regulations, and ethics, while demonstrating how historical contexts can inform the future governance of emerging systems.
The outcome of this research is twofold. First, it yields a better understanding of how artificial intelligence algorithms can be conscientiously built into large-scale systems. Second, it shows how regulatory and ethical frameworks from the past, specifically those addressing anti-personnel landmines, can be applied to autonomous, intelligent technology in military scenarios. Both technically and socially, this is a critical problem for the global community to understand and solve; otherwise, as artificial intelligence and automation continue to advance, lives and societies could be put at risk by minimally supervised algorithms. The technical Capstone Project and the sociotechnical STS Research Paper have each shaped the other's approach and findings, resulting in more integrated perspectives and a more detailed understanding. Working on the technical design of a machine learning algorithm for a life-threatening mine traversal context has been extremely valuable in understanding the risks of building and deploying autonomous military systems. Human biases are inevitable, and assumptions must be made. This has provided valuable context for the history of landmine design and the current state of autonomous military system design. Without exploring and synthesizing the regulatory, ethical, and design histories of lethal military technologies, it is far more challenging to perform conscientious social analysis or to build ethically acceptable autonomous control systems for life-critical scenarios.
BS (Bachelor of Science)
Machine Learning, Landmines, Autonomous Systems, Simulation, Actor-Network Theory
School of Engineering and Applied Science
Bachelor of Science in Systems Engineering
Technical Advisor: Hunter Moore
STS Advisor: Bryn Seabrook
Technical Team Members: Stephen Durham, Andrew Evans, Hannah Palmer, Sami Saliba
English
All rights reserved (no additional license for public reuse)
2025/05/07