How has China’s use of artificial intelligence during the COVID-19 pandemic influenced its expansion of digital authoritarianism?

Author:
Tran, Kyle, School of Engineering and Applied Science, University of Virginia
Advisors:
Carrigan, Coleen, EN-Engineering and Society, University of Virginia
Vrugtman, Rosanne, EN-Comp Science Dept, University of Virginia
Forelle, MC, EN-Engineering and Society, University of Virginia
Abstract:

Both my technical capstone and STS research project center on China’s evolving censorship system, commonly known as the Great Firewall. While they approach the topic from different angles, they complement each other in meaningful ways. My STS project focuses on a case study of how China’s censorship system, driven by artificial intelligence (AI), evolved during the COVID-19 pandemic. In contrast, my technical project proposes the design of a text classification system that emulates China’s AI-based censorship methods. The goal is to reverse engineer censorship filters to reveal their limitations and explore how resistance emerges. While the STS paper emphasizes the sociopolitical impacts of AI-censorship, the technical project focuses on its mechanisms and vulnerabilities, allowing me to understand not just how censorship is imposed, but also how it is navigated and resisted.

My technical project explores how AI models are used to simulate censorship systems and how these same systems can be reverse engineered. Specifically, it examines the rise of Sensitive Word Culture—creative linguistic strategies used by Chinese citizens to evade digital censorship. The project involves designing a text classification system to simulate censorship filters using natural language processing (NLP), with the ultimate goal of understanding how users can bypass such filters. The pipeline includes four major stages: data collection and preprocessing, model experimentation, evaluation, and conclusions. The proposed machine learning models are Multinomial Naive Bayes (MNB) and Convolutional Neural Networks (CNN), with the intent of training them on an open-source dataset containing around 200,000 uncensored and 15,000 censored posts from Chinese social media. A key objective of the project is to improve upon substitution-based censorship evasion methods, incorporating internet slang and phonetic alternatives such as "ZF" for zhèngfǔ (政府, government). After training, reverse engineering will be applied to identify terms likely to trigger censorship and those that might evade it. Ultimately, the trained models will be tested against posts from Weibo, a popular Chinese social media platform. Evaluation metrics include accuracy, F1-score, and precision. Through this work, the project aims to provide insight into the limitations of automated censorship and support resistance through technical understanding.
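The classification stage described above can be sketched in miniature. The following is a minimal, self-contained illustration (not the project's actual implementation) of a Multinomial Naive Bayes censorship filter with a substitution table that maps evasive spellings such as "ZF" back to their canonical forms before classification; the substitution entries, tokenizer, and toy training posts are all hypothetical stand-ins for the real Weibo corpus.

```python
import math
from collections import Counter

# Hypothetical substitution table: maps slang/phonetic evasions back to
# canonical sensitive terms (e.g. "ZF" for zhengfu, "government").
SUBSTITUTIONS = {"zf": "zhengfu"}

def tokenize(text):
    # Toy whitespace tokenizer; a real pipeline would need Chinese word
    # segmentation and a far richer normalization step.
    return [SUBSTITUTIONS.get(tok.lower(), tok.lower()) for tok in text.split()]

class MultinomialNB:
    """Minimal multinomial Naive Bayes with Laplace (add-one) smoothing."""

    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.priors = {c: math.log(labels.count(c) / len(labels))
                       for c in self.classes}
        self.counts = {c: Counter() for c in self.classes}
        for doc, label in zip(docs, labels):
            self.counts[label].update(tokenize(doc))
        self.vocab = {t for c in self.counts.values() for t in c}
        self.totals = {c: sum(self.counts[c].values()) for c in self.classes}

    def predict(self, doc):
        def log_posterior(c):
            s = self.priors[c]
            for tok in tokenize(doc):
                # Smoothed per-class token likelihood.
                s += math.log((self.counts[c][tok] + 1) /
                              (self.totals[c] + len(self.vocab)))
            return s
        return max(self.classes, key=log_posterior)

# Toy training data standing in for the censored/uncensored Weibo posts.
docs = ["zhengfu failed response", "nice food photo",
        "ZF hides truth", "cute cat video"]
labels = ["censored", "uncensored", "censored", "uncensored"]

clf = MultinomialNB()
clf.fit(docs, labels)
print(clf.predict("the ZF response"))  # "ZF" is normalized to "zhengfu"
```

Because the substitution table is applied before counting, the filter treats "ZF" and "zhengfu" as the same token, which is the core idea behind improving on substitution-based evasion; probing which substitutions still flip the prediction is one way to reverse engineer the filter's blind spots.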

My STS research project investigates the broader societal implications of AI-driven censorship during the COVID-19 pandemic in China. Using the Social Construction of Technology (SCOT) framework, I examine how the Chinese Communist Party (CCP) shaped the use of AI-censorship to serve its evolving political goals and protect its reputation. I apply the SCOT concept of interpretive flexibility to show how the function of AI-censorship changed over time—from suppressing early whistleblowers and domestic unrest at the beginning of the pandemic to later reframing China’s role in the pandemic through strategic information control and mask diplomacy, a foreign policy strategy China used to deflect criticism of its initial response to COVID-19. I argue that AI enabled China’s censorship to shift from reactive to proactive, automating the suppression of dissent and reinforcing digital authoritarianism. Moreover, the normalization of AI-censorship during the pandemic sets a dangerous precedent, offering a model for other authoritarian regimes looking to control information with minimal human oversight. My paper ultimately shows how technological tools, shaped by political motives, can deepen state control and suppress civil liberties under the guise of crisis management.

Working on both projects allowed me to explore China’s censorship system from both technical and sociopolitical perspectives, offering a more holistic understanding than I would have gained from either project alone. The technical work gave me insight into the architecture and functioning of censorship algorithms, as well as the creativity of citizens who subvert them through language. It also led me to recognize the significance of resistance cultures and their adaptability, which proved valuable as I investigated the ethical dilemmas of AI-censorship during COVID-19 in my STS research. My STS project, in turn, framed my technical work within a larger ethical and geopolitical context, reminding me that technology is never neutral—it reflects the values and intentions of its creators. Studying AI-censorship as both a social and technical system revealed the interconnectedness of design decisions, political power, and individual agency. This dual perspective not only strengthened both projects but also encouraged critical reflection on how technology can be used for both control and empowerment.

Degree:
BS (Bachelor of Science)
Keywords:
AI-censorship, China, COVID-19, digital authoritarianism
Notes:

School of Engineering and Applied Science

Bachelor of Science in Computer Science

Technical Advisor: Rosanne Vrugtman

STS Advisors: MC Forelle, Coleen Carrigan

Technical Team Members: Kyle Tran

Language:
English
Issued Date:
2025/05/08