Trusted Artificial Intelligence in Life-Critical and High-Risk Environments; Comparing Trust in Generative AI in College Education to Other Initially Untrusted Technologies

Author:
Evans, Andrew, School of Engineering and Applied Science, University of Virginia
Advisors:
Forelle, MC, Engineering and Society, University of Virginia
Scherer, William, EN-SIE, University of Virginia
Burkett, Matthew, EN-SIE, University of Virginia
Abstract:

Artificial intelligence (AI) is a field of study in which humans attempt to build technologies that can make decisions much as humans do and take on part of a human's workload. The field is growing rapidly and offers many possible applications that could improve everyday life. The technology is already used in fields such as medicine, banking, and sports, among many others. My capstone project and STS research project focus on two more industries where AI is abundant and has vast potential: military applications and educational settings. These fields are connected by AI's potential to make decisions faster than humans can and to deliver the answers users seek faster than human subject matter experts can. My capstone project focuses on using AI decision making to aid human subject matter experts in deciding what route troops and their equipment should take to pass safely and quickly through a given minefield. My STS research project focuses on the use of generative AI technologies in college education and their path toward a day when they can be used effectively without violating the rules set out by universities and institutions. By researching these two topics, I hope to provide more understanding of how trust is built in non-human systems and to offer an explanation of how AI will be used in the future for the betterment of society.
With the growing integration of AI into autonomous decision-making, ensuring trust in these complex systems is crucial, particularly in life-critical applications where failures can be catastrophic. Existing AI-driven autonomous technology often operates under high uncertainty due to its black-box nature, demanding greater accountability, reliability, and transparency for mission success. My capstone project proposes a generalizable systems engineering framework for building trust in autonomous systems, demonstrated in the context of minefield traversal, a life-critical control problem. By integrating explainable statistical models into reinforcement learning (RL), this approach evaluates subsystem accuracy and uncertainty in real time, significantly enhancing reliability. Mine detection is supported by two independent, imperfect predictors: an AI model and a human evaluator, each affected differently by varying environmental conditions. Statistical methods quantify prediction reliability, while RL optimizes decisions under uncertainty. Embedding explainable statistics into RL decision-making ensures interpretable outcomes, robust risk-based monitoring, and adaptability to changing operational parameters. This approach was tested through an agent-based simulation in which AI and human detection systems collaboratively navigated uncertain minefields. Results indicate improved decision transparency, AI adaptability, and real-time risk management. Explicitly designed for generalizability, this framework presents a scalable method to establish reliable autonomous systems across various safety-critical domains. Future work will refine trust metrics and explore applications in a broader context.
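The idea of combining two independent, imperfect predictors can be illustrated with a minimal sketch. This is not the capstone's actual model: it assumes the AI and human probability estimates are conditionally independent given the true state of a cell, and the `fuse_detections` helper, its prior, and all numbers are hypothetical, chosen only to show how a naive-Bayes fusion of two detectors might work.

```python
def fuse_detections(p_ai: float, p_human: float, prior: float = 0.1) -> float:
    """Combine two independent probability estimates that a grid cell
    contains a mine, assuming conditional independence given the truth.

    p_ai, p_human: each detector's posterior estimate P(mine | its own data)
    prior: assumed base rate of mines in the field (illustrative value)
    """
    # Prior odds of a mine being present
    odds_prior = prior / (1 - prior)
    # Likelihood ratio implied by each detector relative to the prior
    lr_ai = (p_ai / (1 - p_ai)) / odds_prior
    lr_human = (p_human / (1 - p_human)) / odds_prior
    # Posterior odds = prior odds * product of likelihood ratios
    odds_post = odds_prior * lr_ai * lr_human
    return odds_post / (1 + odds_post)
```

A fused probability like this could then feed the RL agent's risk-based route decisions; for instance, two confident detections (0.9 each against a 0.1 prior) reinforce one another and yield a fused estimate well above either alone.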
My STS research paper focuses on the relationship between generative AI in college education and similar technologies that, once introduced into society, fundamentally changed how some aspects of society function. Specifically, I am interested in why generative AI technologies are untrusted in today's college educational settings, and whether there is a path forward to make them a successful part of college education. My research draws on reputable journal articles and other forms of media to gain an understanding of how generative AI tools have been used and viewed since the release of the most groundbreaking tool, ChatGPT, in November 2022. After understanding the impacts these technologies have had on college education, I will use the same categories of sources to examine how closely the path of generative AI technologies follows that of other technologies that were untrusted by society upon their initial release. With this research I will analyze the technologies through the lens of moral panic and how this phenomenon shapes how technologies are utilized in our society.
Working on these projects simultaneously gave me greater insight into how vast a field artificial intelligence is and how many ways it could change how our society operates. AI-based tools can give humans the opportunity to work on complex issues that are beyond the scope of current AI technologies, while leaving mundane, repetitive tasks for AI to complete. Working on both projects also made me realize how few people knew about the vast possibilities of AI technologies before the release of ChatGPT and similar large language models. AI technologies have been working in the background of our lives for many years, yet only once they became publicly accessible did many people realize how complex and useful they can be for everyday tasks. With greater awareness, the public may come to see how important these technologies have become to our society, and greater interest in them could lead to a better understanding of how to make them trusted and useful.

Degree:
BS (Bachelor of Science)
Keywords:
Education, AI, ChatGPT
Notes:

School of Engineering and Applied Science

Bachelor of Science in Systems Engineering

Technical Advisor: Hunter Moore

STS Advisor: MC Forelle

Technical Team Members: Sami Saliba, Justin Abel, Stephen Durham, Hannah Palmer

Language:
English
Rights:
All rights reserved (no additional license for public reuse)
Issued Date:
2025/05/09