Navigating Ethical AI Adoption in Predictive Maintenance for SMEs: Lessons from the COMPAS Case; Implementation of Retrieval-Augmented Generation Using OpenAI Models for Enhanced Performance and Efficiency

Author:
Acharya, Ayush, School of Engineering and Applied Science, University of Virginia
Advisors:
Neeley, Kathryn, EN-Engineering and Society, University of Virginia
Vrugtman, Rosanne, EN-Comp Science Dept, University of Virginia
Abstract:

Sociotechnical Synthesis
(Executive Summary)
Ethics and Efficiency: Examining AI Adoption in Predictive Systems
“We are not only responsible for what we do, but also for what we fail to do” - Molière

The transformative potential of artificial intelligence (AI) and machine learning (ML) captivates me with its ability to reshape industries, improve efficiencies, and solve complex problems. Yet, alongside this fascination, I feel a profound sense of responsibility and fear about the consequences of developing systems that may unintentionally harm individuals or society. This duality—excitement for the opportunities AI offers and caution for its ethical implications—has motivated my thesis portfolio. My work combines a technical project on Retrieval-Augmented Generation (RAG) systems with a socio-technical research study on ethical AI adoption in small and medium-sized enterprises (SMEs). Together, these projects reflect my desire to explore how engineering practices can balance machine learning innovation with accountability and ethical responsibility.
The technical portion of my thesis involved building a Retrieval-Augmented Generation (RAG) system, motivated by my internship with a company where I aimed to build a chatbot capable of providing relevant, company-specific information. This full-stack application, built with Flask on the backend and React on the frontend, and hosted on AWS, integrates external document retrieval into AI-generated responses. By leveraging document embeddings and retrieval mechanisms, the system improves the contextual relevance of OpenAI GPT model outputs while eliminating the need for costly fine-tuning. While the application successfully achieved its goal of enabling dynamic, domain-specific interactions, tasks such as formal benchmarking and multi-cloud integration remain. Nonetheless, my project highlights how scalable and efficient solutions can enhance AI usability in organizational contexts.
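The retrieve-then-generate loop described above can be illustrated with a minimal sketch. The document names OpenAI embeddings and GPT models but does not specify the retrieval code, so the toy bag-of-words embedding and the function names below are illustrative assumptions, not the thesis implementation; in the real system an embedding model and a chat-completion call would replace the stand-ins.

```python
# Minimal, self-contained sketch of a RAG retrieve-then-generate flow.
# ASSUMPTION: the thesis system uses OpenAI embeddings and GPT completions;
# here a toy bag-of-words "embedding" keeps the example runnable offline.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model (e.g. an OpenAI embedding API).
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank the document store by similarity to the query; return the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Prepend retrieved passages as context; in a real RAG system this
    # prompt would be sent to a generative model (e.g. a GPT chat model).
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    kb = [
        "Refund requests must be filed within 30 days of purchase.",
        "The cafeteria is open from 8am to 3pm on weekdays.",
        "Support tickets are answered within one business day.",
    ]
    print(build_prompt("refund deadline in days", kb))
```

Because the retrieved passages are injected into the prompt at query time, the generative model can answer from company documents it was never fine-tuned on, which is the cost advantage the summary describes.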
In my STS research, I examined the ethical challenges SMEs face when implementing AI for predictive maintenance, using the COMPAS recidivism-prediction algorithm as a cautionary case study. COMPAS was a fitting case study because it is an AI system that was itself developed by an SME. COMPAS’s machine-learning-based algorithm revealed significant racial biases, showing how an AI system that lacks a focus on fairness can amplify existing social inequalities. The key finding of my STS research was the distinction between accountability and responsibility. Accountability implies a reactive approach, where blame is assigned only after harm has occurred. Responsibility, in contrast, emphasizes proactive, preventative measures to avoid harm before it arises. This distinction underscores the need for SMEs to foster a culture of responsibility by embedding fairness and transparency into AI systems from the outset. Guided by Frank W. Geels’s Multi-Level Perspective (MLP) framework, my research analyzed systemic resistance to accountability across three levels: niche (developers), regime (industry standards), and landscape (broader socio-economic forces). This analysis revealed how ethical failures, like those seen in COMPAS, emerge from gaps at all levels, and it proposed actionable strategies for SMEs to mitigate these risks.
By synthesizing these projects, I gained a deeper appreciation for how technical, organizational, and cultural elements intersect in engineering practice. The RAG system highlights the importance of designing technical solutions that are adaptable and responsive to user and organizational needs, while my STS research emphasizes the societal consequences of neglecting ethics in AI deployment. Using the Multi-Level Perspective, I came to understand how existing gaps in accountability at the niche, regime, and landscape levels can lead to systemic failures like those seen in COMPAS. Applying the MLP framework to my technical project shows that integrating accountability and fairness into AI systems requires addressing not just technical functionality but also the broader social and organizational structures within which those systems operate. This socio-technical perspective reinforces the importance of designing engineering systems that are not only technically innovative but also aligned with ethical standards at every level (niche, regime, landscape) of socio-technical interaction.

Degree:
BS (Bachelor of Science)
Keywords:
retrieval augmented generation, responsibility, accountability, machine learning, artificial intelligence
Notes:

School of Engineering and Applied Science

Bachelor of Science in Computer Science

Technical Advisor: Rosanne Vrugtman

STS Advisor: Kathryn A. Neeley

Technical Team Members: Ayush Acharya

Language:
English
Rights:
All rights reserved (no additional license for public reuse)
Issued Date:
2024/12/17