Machine Learning: Virginia Automobile Accident Severity Predictive Modeling & Forecasting; Analysis of AI Marketing Rhetoric: Understanding Public Perception & Policy Implications
Seiden, Joshua, School of Engineering and Applied Science, University of Virginia
Neeley, Kathryn, EN-Engineering and Society, University of Virginia
Vrugtman, Rosanne, EN-Comp Science Dept, University of Virginia
From Practice to Perception: Deep Learning Applications and Sociotechnical Insights into AI Adoption
Between 2022 and 2023, the number of notable machine learning models released surged by 148%, highlighting the field’s exponential growth. This momentum, marked by the release of ChatGPT in November 2022, brought AI to mainstream attention and raised critical questions about its societal impact. Motivated by this wave of innovation, I have focused part of my undergraduate computer science studies on the technical and societal dimensions of AI adoption. My capstone project and STS research explore these themes from complementary angles: the design and application of deep learning models in practical contexts, and the societal narratives shaping AI perception. For my capstone project, my team developed models to classify car crash severity, identify contributing factors, and predict trends, offering insights for real-world applications. My STS research examined how AI marketing rhetoric, particularly for Large Language Models, influences public understanding and governance by shaping mental models. Together, these projects offer a view into both AI’s technical capabilities and the narratives that shape its adoption. STS frameworks are integral to engineering practice because they position technical work within its broader cultural and organizational context, ensuring innovations are both effective and ethically responsible.
The technical portion of my thesis focused on developing deep learning models to classify car crash severity, identify contributing factors, and predict crash trends. Using extensive data on crash-related conditions from the Virginia Department of Transportation (VDOT), the models offered actionable insights that could inform the department's decisions. In addition to corroborating existing safety strategies, the models revealed high-risk conditions that warrant further investigation and investment, such as the influence of road design elements on crash severity. The results also have practical applications in identifying safer travel times, aiding both transportation planners and drivers. This work demonstrates the potential of machine learning to address complex problems in public safety, paving the way for more data-driven infrastructure management.
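To make the modeling workflow concrete, the sketch below shows one way a tabular crash-severity classifier of this kind could be set up. It is illustrative only: the file name, feature columns, severity labels, and network size are hypothetical stand-ins, and scikit-learn's MLPClassifier is used here as a small, self-contained proxy for the deeper models the capstone team actually trained on the VDOT data.

    # Minimal sketch of a crash-severity classifier on tabular crash records.
    # The CSV path, column names, and model size are hypothetical; the actual
    # VDOT schema and capstone architecture may differ.
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import OneHotEncoder, StandardScaler
    from sklearn.compose import ColumnTransformer
    from sklearn.pipeline import Pipeline
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import classification_report

    # Hypothetical crash records: one row per crash, with a severity label
    # (e.g., property damage only / injury / fatal).
    df = pd.read_csv("vdot_crashes.csv")
    categorical = ["weather", "light_condition", "road_surface", "intersection_type"]
    numeric = ["speed_limit", "vehicle_count", "hour_of_day"]
    X = df[categorical + numeric]
    y = df["severity"]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42
    )

    # Encode categorical crash conditions, scale numeric features, and fit a
    # small feed-forward network as a stand-in for the deep learning models.
    model = Pipeline([
        ("prep", ColumnTransformer([
            ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
            ("num", StandardScaler(), numeric),
        ])),
        ("clf", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=42)),
    ])
    model.fit(X_train, y_train)

    # Per-class precision and recall show which severity levels the model
    # identifies well and where high-risk conditions merit closer inspection.
    print(classification_report(y_test, model.predict(X_test)))

In practice, the same pipeline structure extends to the other capstone tasks: swapping the target column for a time-indexed crash count supports trend forecasting, and inspecting the fitted model's sensitivity to individual features points toward the contributing factors discussed above.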
In my STS research, I explored how marketing rhetoric surrounding Large Language Models (LLMs) influences mental models, the frameworks through which individuals conceptualize and interact with AI systems. By analyzing corporate analogies and narratives, my research found that these rhetorical strategies often emphasize accessibility and utility while downplaying critical issues, limitations, and societal implications. For example, LLMs are frequently compared to human assistants, a framing that can obscure the technical realities of their operation and overstate their capabilities. This skewed perception can misinform user expectations and policymaking. My research highlights the importance of crafting more balanced narratives that transparently address the strengths and weaknesses of AI. It aims to help the public and decision makers understand how mental models form, fostering the careful evaluation necessary for responsible adoption and governance of these technologies.
Adopting a sociotechnical perspective has profoundly shaped how I approach engineering challenges, emphasizing the interplay between technical innovation and societal context. Through STS frameworks, I learned to critically evaluate not just what technology does but how it fits into broader systems of human goal-oriented activity, organizational structure, and cultural norms. For example, the technical success of deep learning models for crash prediction is only meaningful if they account for real-world usability, such as how transportation agencies or drivers interpret and act on insights. Similarly, understanding how mental models shape public perceptions of AI highlights the ethical responsibility of engineers to communicate their work transparently and avoid perpetuating misunderstandings. Ultimately, integrating STS perspectives into engineering practice fosters a more thoughtful approach to technical capability development, ensuring innovations are not only technically robust but also ethically and socially responsible.
BS (Bachelor of Science)
Machine Learning, ML, Artificial Intelligence, AI, AI Marketing Rhetoric, Applied Machine Learning
School of Engineering and Applied Science
Bachelor of Science in Computer Science
Technical Advisor: Rosanne Vrugtman
STS Advisor: Kathryn Neeley
Technical Team Members: Saarthak Gupta, Agi Luong
English
All rights reserved (no additional license for public reuse)
2024/12/13