If You Give a Mouse a Vulnerability: How the History of Malware Informs the Future of DeepFakes; The Age of Recommendation Systems: Examining Social Risks of Algorithmically Tailored Content
Schaefer, Kelly, School of Engineering and Applied Science, University of Virginia
Seabrook, Bryn, EN-Engineering and Society, University of Virginia
Evans, David, EN-Comp Science Dept, University of Virginia
An ethical computer scientist considers more than the most efficient pattern of ones and zeros. To facilitate broad exposure to sociotechnical issues in computer science, the two projects in this portfolio explore risk mitigation procedures for distinct applications of machine learning systems. The Capstone project surveys the current state of DeepFake generation, detection, and mitigation technology, using the precedent of malware production and detection to inform predictions about the future of DeepFake technology. Technical exploration of DeepFakes was motivated by the contemporary relevance of, and personal interest in, the topic. The STS research paper delves into user identification and mitigation of social risk in algorithmic recommendation. Given the prevalence of recommender systems in industry, reflection on this topic is pertinent. Research into the impacts of recommender algorithms provides insight into ethical development considerations that will be carried forward into future engineering work.
On Wednesday, March 16th, 2022, a group of hackers broadcast a fake video of Ukrainian President Volodymyr Zelenskyy instructing soldiers to lay down their weapons and surrender to Russian forces (Allyn, 2022). This application of deep-learning-based forgery to political manipulation exemplifies the threat posed by such manipulated content, known as DeepFakes, which has raised concerns since deep-learning-based face-swaps became popularized in 2017 (Aubé, 2017). The quality of the Zelenskyy DeepFake was not state of the art: it contained visual and auditory artifacts that allowed viewers to easily identify the video as fake, most prominently the inaccurate accent of the audio. However, more sophisticated DeepFakes are not easily distinguishable from authentic content. To engage in academic discourse on the contemporary threat of DeepFakes, the technical paper overviews current DeepFake generation and detection methods, elucidates countermeasures, and summarizes the current performance of generation and detection systems. With this technical background established, the paper draws parallels to the race between malware generation and malware detection to inform predictions about the future trajectory of DeepFake generation and mitigation.
In the contemporary digital environment, many software applications tailor content and recommendations to the traits of an individual user. This is motivated by a goal of convenience; it allows users to frictionlessly interact with relevant, familiar media. However, it is vital to ethically evaluate an environment where software, liable to bias and unanticipated behavior, mediates user interaction with the wider world. It is also critical to consider the aggregate effect of narrow, tailored media on user identity and perception of the world. The STS paper examines how users and institutions identify and mitigate the psychosocial impacts of recommender systems. This exploration is conducted through the framework of Risk Society, first proposed by German sociologist Ulrich Beck. The contention in risk analysis between the priorities of large technology corporations and the values of everyday users informs dialogue on user awareness, algorithmic transparency, and representation. The paper considers themes of machine learning bias, polarization, filter bubbles, and equity to underscore the need for active consideration of design ethics and social effects in software development. This research contributes a user-centric perspective on the way recommender systems interact with societal structures to shape user behavior and beliefs.
While the technologies explored in the Capstone project and the STS research paper are distinct, overlap in their sociotechnical context yields insight into how forces such as user psychology, DeepFake media, personalized content, and digital information infrastructure interact in emerging social threats. Society has seen increasing polarization in recent years (Pew Research Center, 2017). Confirmation bias predisposes individuals to trust information that aligns with their existing viewpoints. Some identified attack surfaces of DeepFake media, such as disinformation and political manipulation, are amplified by the pretext of polarized confirmation bias; the danger of DeepFakes lies in their believability. Personalized algorithms have the potential to amplify or reduce the effects of confirmation bias on user media consumption. While technical mitigation measures have been proposed to reduce the power and spread of DeepFake content, addressing the societal pretexts that foster an environment for DeepFake harm requires a broader scope. Designing recommender systems that address the social risk of polarization is one step toward mitigating an environment in which individuals do not experience shared truth. Additionally, the accountability that social media companies adopt for the content distributed on their platforms is relevant both to addressing DeepFake content and to reducing the social risks of recommender system technologies.
Allyn, B. (2022, March 16). A Deepfake Video Showing Volodymyr Zelenskyy Surrendering Worries Experts. NPR. https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia
Aubé, T. (2017, February 13). AI, DeepFakes, and the End of Truth. Medium. https://medium.com/swlh/ai-and-the-end-of-truth-9a42675de18
Pew Research Center. (2017, October 5). The Partisan Divide on Political Values Grows Even Wider [Report]. Pew Research Center – U.S. Politics & Policy. https://www.pewresearch.org/politics/2017/10/05/the-partisan-divide-on-political-values-grows-even-wider/
BS (Bachelor of Science)
Recommender Systems, Societal Bias, Risk Society, DeepFakes, Machine Learning, Polarization
School of Engineering and Applied Science
Bachelor of Science in Computer Science
Technical Advisor: David Evans
STS Advisor: Bryn Seabrook
All rights reserved (no additional license for public reuse)