Assessing the Viability of Universal Basic Income to Address Job Displacement Caused by Artificial General Intelligence; A Comparative Study of Learning Paradigms in Large Language Models via Intrinsic Dimension

Author:
Janapati, Saahith, School of Engineering and Applied Science, University of Virginia
Advisor:
Janapati, Saahith, University of Virginia
Abstract:

STS Abstract
We are living through what is perhaps the most important and consequential technological revolution in human history: the development of Artificial General Intelligence (AGI) systems. Chatbot systems such as OpenAI’s ChatGPT, Google DeepMind’s Gemini, and Anthropic’s Claude have captured the world’s imagination and attention by displaying impressive capabilities and practical utility across a wide variety of domains, including education, software engineering (Achiam et al., 2023), law, and medicine (Saab et al., 2024). These AI research laboratories are now engaged in a race to develop ever more capable systems that can process multiple sensory modalities, take actions in digital and physically embodied forms, make long-term plans, and engage in complex reasoning.

AI scientists believe that these advanced capabilities could come to fruition within the upcoming decade, and possibly even sooner (Burns et al., 2023). What does the advent of such technologies imply for the global workforce? If advanced AI systems can perform the tasks that jobs require more cheaply, quickly, and effectively than human workers can, how will humans earn a living?

This research paper aims to answer this question by assessing the need for, and viability of, Universal Basic Income (UBI) as a response to the job displacement that may result from the deployment of generally capable AI systems. For this analysis, Bruno Latour’s Actor-Network Theory (2017) and the framework of Technological Determinism (Héder, 2021) are employed to understand the increasing role that AI systems will play in socioeconomic systems.

Technical Abstract
The performance of Large Language Models (LLMs) on natural language tasks can be improved through both supervised fine-tuning (SFT) and in-context learning (ICL), which operate via distinct mechanisms. Supervised fine-tuning updates the model's weights by minimizing a loss on training data, whereas in-context learning leverages task demonstrations embedded in the prompt without changing the model's parameters. This study investigates the effects of these learning paradigms on the hidden representations of LLMs using Intrinsic Dimension (ID). We use ID to estimate the number of degrees of freedom of the representations extracted from LLMs as they perform specific natural language tasks. We first explore how the ID of LLM representations evolves during SFT and how it varies with the number of demonstrations provided in ICL. We then compare the IDs induced by SFT and ICL and find that ICL consistently induces a higher ID than SFT, suggesting that representations generated during ICL lie on higher-dimensional manifolds in the embedding space.
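
The following is a minimal sketch of the kind of pipeline the technical abstract describes: extracting hidden representations from an LLM as it processes task inputs, then estimating their Intrinsic Dimension. The abstract does not name a specific ID estimator, so the TwoNN method (Facco et al., 2017), a common choice for this kind of analysis, is assumed here purely for illustration; the model ("gpt2"), layer index, and prompts are likewise placeholders rather than the study's actual experimental setup.

    import numpy as np
    import torch
    from transformers import AutoModel, AutoTokenizer
    from sklearn.neighbors import NearestNeighbors

    def twonn_id(X: np.ndarray, discard_fraction: float = 0.1) -> float:
        """Estimate Intrinsic Dimension with the TwoNN estimator.

        For each point, take the ratio mu = r2 / r1 of its second- and
        first-nearest-neighbor distances; under the TwoNN model the CDF of
        mu is F(mu) = 1 - mu^(-d), so d is the slope of -log(1 - F(mu))
        versus log(mu) through the origin. Assumes no duplicate points.
        """
        nn = NearestNeighbors(n_neighbors=3).fit(X)
        dists, _ = nn.kneighbors(X)          # column 0 is each point itself
        mu = np.sort(dists[:, 2] / dists[:, 1])  # r2 / r1 for every point
        n = len(mu)
        # Drop the largest ratios, which are noisy under finite sampling.
        keep = int(n * (1.0 - discard_fraction))
        mu, F = mu[:keep], np.arange(1, keep + 1) / n
        x, y = np.log(mu), -np.log(1.0 - F)
        # Least-squares slope of a line constrained through the origin.
        return float(np.dot(x, y) / np.dot(x, x))

    # Hypothetical usage: last-token hidden states from one middle layer.
    tok = AutoTokenizer.from_pretrained("gpt2")  # placeholder model
    model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)
    model.eval()

    # Placeholder task inputs standing in for a real dataset's prompts.
    prompts = [f"Review: sample text {i}. Sentiment:" for i in range(200)]
    reps = []
    with torch.no_grad():
        for p in prompts:
            out = model(**tok(p, return_tensors="pt"))
            # hidden_states[layer] has shape (1, seq_len, hidden_size);
            # keep the final token's vector as the task representation.
            reps.append(out.hidden_states[6][0, -1].numpy())

    print("Estimated ID:", twonn_id(np.stack(reps)))

Under this setup, comparing SFT to ICL amounts to running the same extraction once on a fine-tuned checkpoint and once on the base model with demonstrations prepended to each prompt, then comparing the resulting ID estimates layer by layer.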

Degree:
BS (Bachelor of Science)
Language:
English
Issued Date:
2024/12/17