Teaching Machines to Teach Us: An Ethical and Responsible Pedagogical Application of Generative Artificial Intelligence; Psychosocial Interactions Between Large Language Models and Their Users
Douglas, Ian, School of Engineering and Applied Science, University of Virginia
Vrugtman, Rosanne, EN-Comp Science Dept, University of Virginia
Wayland, Kent, EN-Engineering and Society, University of Virginia
Artificial Intelligence (AI) is a field of technology that evokes the idea of computers bestowed with sentience, and as a result it is often enshrouded in a science fiction-inspired mysticism. Although modern computers are far from possessing human intelligence, AI is at the forefront of current technological discussion. Generative artificial intelligence has come a long way: trained machine learning models can now produce human-like text and images and even replicate human speech, and now more than ever people express concern over the impact these freely accessible tools can have on both discourse and education. Scholars have already begun to study the potential of text-based AI agents to be used for misinformation, manipulation, and persuasion. Experts have also raised concerns over the impact generative AI tools can have on the academic sphere, both in the classroom for students and in the potential deskilling of scholars. With the advent of the AI revolution, it is essential to understand how we as users can interact with generative AI models responsibly and beneficially in order to prevent potential abuses of the technology, mitigate its harms, and maximize its benefits.
The technical report contained within this thesis portfolio, “Teaching Machines to Teach Us: An Ethical and Responsible Pedagogical Application of Generative Artificial Intelligence”, explores a possible means by which generative AI could be integrated into academic coursework for the benefit of students. Current AI-related issues in academics stem from the capacity of modern Large Language Models (LLMs) such as OpenAI’s ChatGPT to supplant typical student research and work by synthesizing direct answers to assigned questions or generating full project essays, a substitution expected to have a significant impact on students’ abilities later in their lives. The technical project outlines how an LLM could feasibly be designed to work within an academic context without posing the same issues as existing models. The research and design process involved evaluating text-model architectures such as the Transformer, training processes such as Prompt-Based Learning, and additional capabilities such as Retrieval-Augmented Generation to improve output accuracy. The anticipated next step is realizing the design as a fully-fledged Large Language Model, which may prove a useful resource for students across a number of demographics.
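The paragraph above names Retrieval-Augmented Generation as the mechanism intended to ground and improve output accuracy. The following is a minimal, illustrative sketch of that retrieval-and-prompting step only, assuming a hypothetical course-material corpus (COURSE_NOTES), a toy bag-of-words similarity measure in place of a trained encoder, and a simple prompt-assembly function; none of these names or implementation details are taken from the technical report itself.

# A minimal sketch of a retrieval-augmented generation (RAG) step.
# Assumptions: a tiny hypothetical corpus of course notes and a toy
# bag-of-words similarity; a real system would use a trained encoder
# and pass the assembled prompt to an LLM backend.
from collections import Counter
from math import sqrt

COURSE_NOTES = [
    "Transformers use self-attention to weigh relationships between tokens.",
    "Prompt-based learning adapts a pre-trained model with task-specific prompts.",
    "Retrieval-augmented generation grounds model output in retrieved documents.",
]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' for illustration only."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    q = embed(query)
    return sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Prepend retrieved context so the model answers from course material."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    print(build_prompt("How does self-attention work?", COURSE_NOTES))

In this sketch, the prompt returned by build_prompt would be handed to whatever generative model the educational tool ultimately uses, so that answers are anchored to instructor-provided material rather than to the model's unconstrained output.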
The STS research project contained within this thesis portfolio, “Psychosocial Interactions Between Large Language Models and Their Users”, is a literature review that surveys current scientific understanding of how Large Language Models come to exhibit different behaviors and how human beings respond to LLMs, both in general and in relation to these behaviors. At present, there are mounting concerns about the sources of certain LLM behaviors, such as sycophancy and other manipulative tendencies, and the opacity of how training data and reinforcement learning shape an LLM’s tendency to display these behaviors has led to extensive experimentation and study. The way people respond to LLMs and these behaviors has also become a concern of late because of the capacity of these models to propagate misinformation and manipulate users (albeit unintentionally on the part of the model itself). This literature review surveyed a corpus of texts drawn from several repositories of academic work and classified each text by whether its findings had implications for the sources of LLM behaviors toward users and whether they had implications for human responsiveness to LLM output. The result was a varied set of currently speculated sources of LLM behaviors and of psychological tendencies in human prompters. In some cases, LLM behaviors, such as the display of social cues, were attributed to those cues being prominent in the corpus of training data used in the pre-training process, while others, such as sycophancy, were speculated to be an unintended result of the reinforcement-learning process. The conclusion of this research project was that while extensive study has already been conducted on these topics, there are clear gaps in knowledge that must be addressed, especially considering that in most cases the attribution of the aforementioned LLM behaviors was merely speculative.
Both projects were fruitful and achieved what I had set out to accomplish. The value of this work is apparent: generative artificial intelligence is a topic of present public interest, and these projects are another step toward a greater understanding of the burgeoning field and its potential abuses and benefits. It was disheartening that the resources at my disposal did not allow me to realize the educational large language model in the time available, but if the LLM detailed in the technical report is eventually developed, the costs of the training process may be offset by the use of a pre-trained model, as described in the report.
BS (Bachelor of Science)
GenAI, Artificial Intelligence, Generative Artificial Intelligence, Language Models, Generative Language Models
School of Engineering and Applied Science
Bachelor of Science in Computer Science
Technical Advisor: Rosanne Vrugtman
STS Advisor: Kent Wayland
Technical Team Members: Ian Douglas
English
2025/05/08