Modular Cloud-Based Operating Systems for Networking Hardware: Reducing Footprint and Enhancing Efficiency; How Do Modern AI Systems Amplify Colonial Historical Distortions, and What Interventions Can Address the Resulting Epistemic Injustices?
Haid, Matthew, School of Engineering and Applied Science, University of Virginia
Davis, William, University of Virginia
Vrugtman, Rosanne, EN-Comp Science Dept, University of Virginia
My STS research paper, titled “How Do Modern AI Systems Amplify Colonial Historical Distortions, and What Interventions Can Address the Resulting Epistemic Injustices?”, did not directly relate to the technical portion of my thesis, “Modular Cloud-Based Operating Systems for Networking Hardware: Reducing Footprint and Enhancing Efficiency.” However, both projects were deeply important to me in different ways.

My motivation to investigate colonial historical distortions through an STS lens stemmed from a transformative study abroad experience in Kenya during a J-term, where I took the course Swahili Cultures: Then and Now. During this program, I engaged with UVA anthropologists who spoke candidly about how westernization was eroding cultural pride and devaluing liberal arts education, particularly in formerly colonized countries. One powerful moment that stayed with me was a conversation about a student in Kenya who believed that his country’s history lacked value and wasn’t worth studying, because it was seen as “primitive” compared to the grand narratives of Western civilization. This moment illustrated the long shadow of colonial epistemologies. While touring Fort Jesus in Mombasa, I encountered these distortions firsthand. Our guide spoke passionately about the Swahili laborers and techniques that shaped the fort’s construction, yet none of these contributions were formally recognized in the official narrative. This experience deeply impacted me. It raised a pressing question: how is history being told, and who decides what counts as valid knowledge?

As a Computer Science major, I couldn’t ignore the role that AI now plays as a global producer and disseminator of knowledge. This intersection led me to explore how AI systems, particularly Large Language Models (LLMs), contribute to the spread and amplification of colonial historical distortions.
I wanted to examine not just the outcomes but the structural causes behind these epistemic injustices, and to propose meaningful interventions. By contrast, my technical project was motivated by my summer internship at Cisco. I was proud of my contributions and fascinated by the broader mission of building more efficient and scalable systems for the networking world. I saw my thesis as an opportunity to reflect on that work, learn from it, and solidify my understanding in a way that would carry forward into my professional career.
The technical portion of my thesis focused on improving the efficiency, flexibility, and scalability of Cisco’s networking hardware by transitioning from a monolithic to a modular, cloud-based operating system. By developing a FUSE-based translation layer called chasfuse and integrating it with Cisco’s CEO process manager, the system resolved significant dependency issues, optimized resource utilization, and reduced software bloat. The modular design also improved security by minimizing attack surfaces and enabled seamless updates and maintenance without disrupting service. This architecture supports the growing need for high-availability, low-footprint networking solutions and delivers clear operational benefits for both developers and end users.
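The internals of chasfuse are Cisco-specific and not detailed here, but the core idea behind a FUSE-based translation layer can be sketched simply: intercept filesystem lookups and remap legacy paths from the monolithic image onto per-module package locations, so unmodified processes keep resolving their dependencies. The sketch below is a rough illustration only (the class name, path prefixes, and mapping rules are hypothetical, and it is written in Python rather than as an actual FUSE callback implementation):

```python
# Hypothetical sketch of a FUSE-style translation layer. Legacy paths from
# a monolithic OS image are remapped to per-module package directories, so
# existing processes continue to find their dependencies unchanged.

class TranslationLayer:
    """Maps legacy monolithic paths to modular package paths."""

    # Illustrative prefix mapping only; real chasfuse rules are not public.
    PREFIX_MAP = {
        "/opt/legacy/lib": "/packages/core/lib",
        "/opt/legacy/bin": "/packages/core/bin",
    }

    def translate(self, path: str) -> str:
        # Rewrite the longest matching legacy prefix, if any.
        for old, new in self.PREFIX_MAP.items():
            if path == old or path.startswith(old + "/"):
                return new + path[len(old):]
        return path  # paths outside the map pass through unchanged

    def read(self, path: str) -> bytes:
        # In a real FUSE layer, this logic would back the read() callback.
        with open(self.translate(path), "rb") as f:
            return f.read()
```

In an actual FUSE filesystem, `translate` would be applied inside each callback (`getattr`, `open`, `read`, and so on), which is what lets the modular packages replace the monolithic layout without disrupting running software.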
My STS research argues that AI-generated historical narratives amplify colonial distortions by reinforcing dominant epistemologies, neglecting historical complexity, and marginalizing non-Western perspectives. Using frameworks such as epistemic injustice and Technological Mediation Theory, I demonstrated that AI systems like ChatGPT, Gemini, and LLaMA do not merely reflect neutral information but actively shape how historical knowledge is constructed and consumed. Case studies, including Fort Jesus in Kenya and Timur’s economic policies in Samarkand, illustrate how AI can erase or distort historical perspectives. The paper underscores the urgent need for transparent epistemic practices in AI development: models must communicate uncertainty clearly, provide reliable citations, and acknowledge multiple perspectives to prevent the perpetuation of historical erasure. Importantly, while AI can be designed to reduce bias, it cannot be fully objective. Human oversight, transparency, and active engagement are essential to ensuring that AI-generated knowledge does not contribute to further epistemic injustice.
My thesis journey revealed how seemingly unrelated domains, like networking hardware and colonial history, are shaped by broader sociotechnical systems. In designing modular operating systems, we’re not just solving technical problems—we’re also shaping the organizational efficiency and long-term accessibility of vital infrastructure. Similarly, in designing AI systems, we’re not just building tools—we’re deciding whose knowledge gets preserved and whose gets erased. STS perspectives help us see that technology is never neutral. Every design decision carries cultural weight and historical baggage. Understanding this allows engineers to anticipate unintended consequences and be more inclusive in their work. It reminds us that technology and society co-construct one another—and that our responsibility goes beyond code and hardware to include the lives, values, and histories that technology affects. By bringing STS into technical spaces, we create more reflective, responsive, and just engineering practices.
BS (Bachelor of Science)
Artificial Intelligence, Cloud Computing, Computer Networks, Historical Distortions
School of Engineering and Applied Science
Bachelor of Science in Computer Science
Technical Advisor: Rosanne Vrugtman
STS Advisor: William Davis
English
2025/05/08