Fostering AI Literacy: Designing an Interactive Tool for Responsible Use; The Role of Generative AI in Reshaping Learning: Institutional Adoption, Student Use, and Long-Term Consequences
Kappukattil, Leya, School of Engineering and Applied Science, University of Virginia
Rider, Karina, EN-Engineering and Society, University of Virginia
Morrison, Briana, EN-Comp Science Dept, University of Virginia
Vrugtman, Rosanne, EN-Comp Science Dept, University of Virginia
Both the STS research paper and the capstone project paper examine the expanding role of generative AI in higher education, but from distinct perspectives. The STS paper offers a critical analysis of the broader social and educational consequences of student AI use, while the capstone project presents a design proposal aimed at fostering responsible AI engagement among students. Together, the two works pair an analysis of AI's impact with a practical intervention designed to address the challenges that analysis identifies.
The STS paper argues that while generative AI tools such as ChatGPT can increase productivity and streamline learning, they also pose significant risks to students’ development of critical thinking and independent learning skills. Using Actor-Network Theory (ANT) as a theoretical framework, the paper explores how students’ relationships with AI tools are reshaping traditional learning processes. Rather than simply saving time, these tools are assuming core cognitive tasks—like drafting, brainstorming, and problem-solving—thus altering the student’s role in academic work. From the perspective of ANT, this shift constitutes a delegation of intellectual labor to AI, leading to a redistribution of agency away from the student and toward the technology itself.
The paper also examines how institutions are responding to this shift. Some universities have entered formal partnerships with AI companies and are integrating these tools into their infrastructures; others are revising academic policies and honor codes to address AI-generated content. Many of these efforts, however, appear reactive and insufficient: few institutions currently provide structured education on how to use AI ethically and strategically. In the absence of clear guidance, students often turn to AI as a shortcut, contributing to a phenomenon known as cognitive offloading, in which essential learning opportunities are outsourced to automated systems, potentially undermining long-term skill development.
This issue is further compounded by evolving workforce expectations. Employers increasingly value graduates who can collaborate with AI systems while maintaining strong critical thinking and ethical reasoning skills. Despite this, many universities are not adequately preparing students to navigate this dual demand. Students may leave school with experience using AI tools to complete tasks, but without the deeper understanding required to apply these tools thoughtfully in real-world contexts.
Addressing this gap is the focus of the accompanying computer science capstone proposal, which outlines the design for an interactive AI literacy module tailored for students. The proposed module aims to teach responsible, effective engagement with generative AI through a combination of scenario-based learning, real-world case studies, and adaptive quizzes. The design draws on principles of effective pedagogy and digital literacy to encourage students not only to acquire technical proficiency but also to reflect critically on when and how AI should be used in academic and professional settings.
Although the module has not yet been developed, the proposal envisions it as a tool that empowers students to make informed decisions about technology use. Rather than relying solely on warnings or bans, the module would create space for guided reflection, equipping students with the skills and judgment needed to navigate complex sociotechnical environments. In this way, the project offers a design-based response to the concerns raised in the STS research.
The connection between the two papers lies in the shared argument that universities must take a more intentional approach to integrating AI into education. The STS paper makes the case that AI tools are not neutral; they shape the way students think, learn, and engage with academic work. The capstone proposal responds with a concrete solution: a learning module designed to help students understand and navigate these shifts. Both papers emphasize that fostering AI literacy is essential—not just for technical fluency, but for preserving the core goals of higher education.
The conclusion of the STS paper stresses that the conversation around AI must go beyond tool access. It must also examine the underlying reasons students are turning to AI in the first place. Many are overwhelmed by heavy workloads, competitive pressures, and a constant demand for output. In such an environment, AI is not merely a convenience—it becomes a survival tool. Therefore, any effort to regulate or guide AI use must also confront the root causes of student stress. This could involve rethinking assessment models, prioritizing quality over quantity, and allowing for greater flexibility without sacrificing academic rigor.
Ultimately, both the STS research and the capstone project call for a shift in how higher education engages with AI. Rather than treating AI solely as a threat or shortcut, both works argue that it should be viewed as a powerful tool that demands thoughtful integration. Through a combination of policy reform and educational design, institutions have the opportunity to support students in using AI to enhance—not replace—their intellectual growth.
BS (Bachelor of Science)
Artificial Intelligence, Generative AI, Skill Development, Higher Education
School of Engineering and Applied Science
Bachelor of Science in Computer Science
Technical Advisors: Briana Morrison, Rosanne Vrugtman
STS Advisor: Karina Rider
English
2025/05/03