On-Device Intelligent Meeting Tool: A Privacy-Centric AI-Powered Assistant; The Facebook-Cambridge Analytica Scandal: Dangers of Data in Democracy

Author:
Pamal, Akash, School of Engineering and Applied Science, University of Virginia (ORCID: orcid.org/0000-0003-1807-1627)
Advisors:
Seabrook, Bryn, EN-Engineering and Society, University of Virginia
Vrugtman, Rosanne, EN-Comp Science Dept, University of Virginia
Morrison, Briana, EN-Comp Science Dept, University of Virginia
Abstract:

Introduction
Individuals and companies today voice concerns about privacy when using AI-powered tools, and some companies restrict all access to such tools on company networks (Mok, 2023). However, the power of these programs to accelerate workflows in certain domains, particularly computer science, is well documented (Naqbi et al., 2024). To provide this functionality without risking data loss, there needs to be a tool that prioritizes privacy without sacrificing utility. In my technical work, I designed and implemented such a tool, leveraging existing open-source technologies to create a desktop application that provides the power of these AI-driven programs without sending any information to other companies. All processing is done on-device, and no sensitive information is collected. Because both technical and social factors produced the current landscape, in which easy-to-use privacy-centric tools are scarce, it is necessary to understand the factors that contribute to a project's success or failure, including technical, social, economic, and regulatory ones. I draw on the STS framework of actor-network theory to investigate how a lack of available alternatives fostered a complacent culture around privacy (Cressman, 2009). That complacency led to minimal regulatory oversight and a series of privacy failures culminating in the 2018 Facebook-Cambridge Analytica scandal, in which data from 87 million Facebook users was sold and used for political targeting (Hu, 2020). If companies only restrict access to existing technology without providing acceptable alternatives, they risk creating a culture of dissent and reduced productivity among their employees. Because the challenge of providing privacy-centric AI-powered tools is socio-technical in nature, addressing it successfully requires attending to both its technical and social aspects.
In what follows, I present two related research projects: a technical project developing an on-device AI-powered meeting notes tool for business, and an STS project examining the institutional privacy failures behind the Facebook-Cambridge Analytica scandal.

Capstone Project Summary
While there are many AI-powered meeting tools for Zoom, Google Meet, and other platforms, none offer complete privacy through on-device processing together with an easy-to-use interface. I propose an AI assistant powered by Ollama, packaged in a user-friendly application, capable of providing critical insights during business meetings. I use the Electron framework to develop an application frontend that runs natively on Windows, macOS, and Linux. The core business logic runs primarily in JavaScript and Python and interfaces with Large Language Models (LLMs) served by Ollama. I anticipate that user testing will show the app to be intuitive and easy to use. While testing the application in a corporate environment is likely outside the scope of this project, feedback from an academic environment should still reflect the application's effectiveness and intuitiveness. In the future, I will continue to improve the app as faster and smaller LLMs become available through Ollama. Additionally, I may seek privacy and security certifications so the app can be marketed to a corporate audience.
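The on-device design described above can be illustrated with a short sketch of how a Python backend might query a locally running Ollama server: the transcript never leaves the machine, because the request goes to localhost rather than a cloud API. This is an illustrative example only, not the app's actual source; the model name ("llama3") and prompt wording are assumptions.

```python
import json
import urllib.request

# Ollama's default local endpoint; no data is sent off-device.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_summary_request(transcript: str, model: str = "llama3") -> dict:
    """Build the JSON payload asking a local LLM to summarize a meeting transcript."""
    return {
        "model": model,
        "prompt": (
            "Summarize the key decisions and action items "
            "from this meeting transcript:\n\n" + transcript
        ),
        "stream": False,  # ask for the full response in a single JSON object
    }


def summarize(transcript: str, model: str = "llama3") -> str:
    """Send the transcript to the local Ollama server and return its summary."""
    payload = json.dumps(build_summary_request(transcript, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Running `summarize(...)` requires an Ollama server with the chosen model pulled locally; the key design point is that the only network hop is to 127.0.0.1.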

STS Research Paper Summary
I explore the events surrounding the sale of personal data from 87 million Facebook users, including the parties involved, the causes, and the aftermath. What role did consumers, Facebook, Cambridge Analytica (CA), and the government play in the series of events culminating in the sale of data from millions of users? How do these actors, and the interactions between them, form a network that can be analyzed through the lens of actor-network theory? I expect to identify the contributing factors behind these events, focusing on the tensions between actors. For example, the public may demand increased regulation of social media companies, while the companies themselves oppose such regulation; meanwhile, those same companies rely on the public's use of their platforms to generate profit. By clearly identifying these contributing factors, I hope to extend my findings to current topics. For example, tools like ChatGPT collect user data by default. Additionally, the large language models powering these tools may be trained on copyrighted material, sparking concerns about ownership of their outputs. Many of the relevant actors in these modern situations are the same actors involved in historical events like the Facebook-Cambridge Analytica scandal. This project contributes to the STS understanding of privacy in the age of AI and big data through the lens of actor-network theory. By studying these historical events, society is better equipped to navigate similar situations today.

Concluding Reflection
Each project on its own deepened my technical knowledge and refined my STS skills, but conducting them together provided benefits beyond either alone. Working on both projects simultaneously gave me a better understanding of the privacy-related technology in modern LLM tools while I explored the STS aspects of privacy in technology through a case study. This allowed me to apply my STS understanding to the system I was building. For example, throughout my STS research I found substantial evidence supporting preventing companies from collecting user data in the first place; simultaneously, I was writing an app that performs all processing on-device so that companies never receive the data. Working on the related technical and STS projects helped me gain a well-rounded understanding of the landscape of data privacy in modern software, which will be important as I join the workforce and contribute to the next generation of software.

Degree:
BS (Bachelor of Science)
Keywords:
Data privacy, Facebook, Cambridge Analytica, Llama
Notes:

School of Engineering and Applied Science

Bachelor of Science in Computer Science

Technical Advisor: Rosanne Vrugtman

STS Advisor: Bryn Seabrook

Language:
English
Rights:
All rights reserved (no additional license for public reuse)
Issued Date:
2025/05/04