Academic Integrity in Crisis: A Systematic Analysis of Questionable Research Practices; Artificial Intelligence Adoption at the University of Virginia: Identification and Analysis of Stakeholders in the Debate
Downer, David, School of Engineering and Applied Science, University of Virginia
Forelle, MC, EN-Engineering and Society, University of Virginia
Bolton, Matthew, EN-SIE, University of Virginia
Scherer, Bill, EN-SIE, University of Virginia
There is a strong connection between my technical work and my STS research: both engage with the debate over Artificial Intelligence's (AI) place in academia. My technical research examines the effect AI has had on the scholarly publication space, while my STS research examines the adoption of AI at the University of Virginia. In my technical work, we found a much deeper issue within scholarly publishing that goes far beyond the influence of AI: the system itself is set up to encourage Questionable Research Practices (QRPs), of which AI misuse is only one. The core incentives to engage in QRPs are promotion and tenure, professional recognition, and funding. I also performed a quantitative analysis of scholarly publication output from 2010 to 2024 using data from Scopus, and found no statistically significant change in output after the introduction of generative AI, marked by the release of ChatGPT in November 2022. This again points to the deeper issues plaguing the scholarly publication industry and indicates that AI is not a strong driver of QRPs. In my STS research, I used Actor-Network Theory to analyze the adoption of AI at the University of Virginia and how it has changed the interconnected network of human and non-human actors within the university community. In the end, I found that AI is driving a wedge between students and faculty, altering their relationship and, most importantly, how knowledge is disseminated to students within the university community.

Scientific misconduct has emerged as a growing risk to the academic knowledge base. Questionable research practices such as falsified peer review, predatory conferences, and citation gaming in journal publications have become more prevalent since the release of ChatGPT in November 2022.
As researchers face intense pressure to publish quickly amidst the demand for scholarly findings and literature, the underlying structure of the publishing and research system creates opportunities for misconduct. The publish-or-perish culture within academia incentivizes scholars, institutions, and journals to engage in questionable behavior, threatening scientific integrity and public welfare. This project first synthesizes and classifies the scale of scholarly misconduct in the digital age through a comprehensive taxonomy of questionable research practices; a quantitative analysis then examines the effect of artificial intelligence on scholarly publication output. Through a literature review and conversations with library science experts, types of scientific misconduct are classified in a hierarchical taxonomy, categorized by perpetrator and type of misconduct, and the taxonomy and its scope are validated through subject matter expert review. In my STS research paper, I examined the effort to implement AI at the University of Virginia through primary source documents and two reports authored by the University's Task Force on AI. I also examine the effects of cognitive offloading, the use of external tools to reduce cognitive load when completing a task. I found that it is essential to use AI within certain boundaries to avoid excessive cognitive offloading, which can hamper long-term memory and recall. My examination of AI adoption at UVA showed that university policies are failing to keep pace with a rapidly changing environment, creating a cloud of uncertainty around the technology: students are unsure how to use it properly within the bounds of the Honor Code, breeding mistrust between faculty and students.
Additionally, I examine how AI has become students' first source of information for answers to their questions. This is a major issue because the information provided by AI can contradict the information and answers provided by faculty. I suggest that the University investigate how to align the output of AI systems with the learning goals of each class, to provide a clearer and more cohesive learning experience. Ultimately, working on both projects simultaneously was a valuable learning experience: I was able to take lessons learned on one and apply them to the other. For example, in the scoping phase of my technical project, I learned a great deal about how to identify Questionable Research Practices and, most importantly, which publishers were strict in preventing them and which enabled them. I applied this knowledge to filter out papers from my sociotechnical project's literature review that appeared high quality on the surface but were not, greatly increasing the quality of that project. In the end, I have learned a great deal that I will carry with me throughout my professional career.
BS (Bachelor of Science)
Artificial Intelligence, Academic Integrity, Artificial Intelligence Adoption, Questionable Research Practices
School of Engineering and Applied Science
Bachelor of Science in Systems Engineering
Technical Advisor: Bill Scherer, Matthew Bolton
STS Advisor: MC Forelle
Technical Team Members: Sean Ferguson, Riley Tomek, Anna Fisher
English
All rights reserved (no additional license for public reuse)
2025/05/09