Deepfake Detection Algorithm Evaluation: Deepfakes and Their Potential and Current Threats to the Political and Legal System

Author:
Spana, Carl, School of Engineering and Applied Science, University of Virginia
Advisor:
Bloomfield, Aaron, Computer Science, University of Virginia
Abstract:

My technical project and STS research paper both concern the artificial intelligence technology known as Deepfakes. The technical project involves independently testing the Deepfake detection algorithms proposed in Facebook's detection competition on videos generated from single images. My STS research project highlights some of the latest advances in Deepfake generation and detection, examines how this technology is currently being used and how it might be used in the future, relates it to ethical frameworks, and proposes solutions from both a technical and a legal perspective. The STS research paper explores the topic at a higher level than the technical project, but based on my research I believe that single-image-to-video translation is the most probable form of Deepfake generation going forward, and therefore detecting it is key to stopping the dissemination of false information.
The STS research project is a high-level overview of the technology and how it interacts with society. I begin with an in-depth explanation of how Deepfakes are generated using images taken from the internet and how they can be detected. I also discuss specific detection rates and the ways data can be manipulated to thwart detectors. I then discuss the current and potential political ramifications of the technology, such as spreading false information, manipulating election outcomes, and preventing women from accessing the political sphere. Next, I discuss the various challenges Deepfakes pose to the legal system due to clashes with the First Amendment and political satire. Finally, I highlight two potential solutions: one that tackles the problem from a legal perspective and helps victims more than current laws do, and another that could aid in the detection of Deepfakes as AI improves through the use of explainable AI. The research demonstrates the importance of understanding how black-box technologies like neural networks can have negative impacts on society and emphasizes the inherent danger of pushing a technology beyond its intended limits. It also underscores the importance of considering all stakeholders involved, which is what allowed me to shape a legal solution that would give victims restitution and protect their dignity while still allowing the technology to be used in consensual ways.
The technical report is important because relatively little work has been done on Deepfake detection compared with generation (by some estimates, up to 100 times more work on generation). Further, many of the reports I analyzed in my research were conducted on small numbers of hand-selected samples, which may inflate their reported detection rates relative to real-world videos. Facebook's challenge did a good job in this regard, but more targeted testing should be conducted going forward. Single-face generation is by far the most likely form of random attack due to constraints on image availability and quality online. By generating Deepfakes in this style, we can get a better sense of how effective the best detectors are against this particular form of Deepfake, which, depending on the results, could shed light on the detectors' strengths and weaknesses. If possible, I would also like to use one of the newer forms of explainable AI developed by IBM, but I recognize that adapting the algorithms to work within this new framework may be beyond my technical knowledge.
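To make the planned evaluation concrete, the sketch below shows one way the detection-rate comparison might be set up. It is only a minimal illustration under my own assumptions: the detector wrapper, directory names, and decision threshold are placeholders I introduced, not part of the challenge submissions or the report itself.

    from pathlib import Path
    from typing import Callable, Sequence

    def evaluate_detector(
        detector: Callable[[Path], float],   # returns an estimated probability the video is fake
        fake_videos: Sequence[Path],         # Deepfakes generated from single images
        real_videos: Sequence[Path],         # genuine reference videos
        threshold: float = 0.5,              # assumed decision cutoff
    ) -> dict:
        """Tally simple detection-rate metrics for a single detector."""
        flagged_fakes = sum(detector(v) >= threshold for v in fake_videos)
        flagged_reals = sum(detector(v) >= threshold for v in real_videos)
        return {
            "detection_rate": flagged_fakes / len(fake_videos),        # fakes correctly caught
            "false_positive_rate": flagged_reals / len(real_videos),   # real videos wrongly flagged
        }

    if __name__ == "__main__":
        # Placeholder detector: in practice this would wrap the inference
        # pipeline of whichever challenge submission is being tested.
        dummy_detector = lambda video_path: 0.5
        fakes = sorted(Path("data/single_image_fakes").glob("*.mp4"))   # assumed directory layout
        reals = sorted(Path("data/real_videos").glob("*.mp4"))          # assumed directory layout
        if fakes and reals:
            print(evaluate_detector(dummy_detector, fakes, reals))

Reporting both the detection rate on generated fakes and the false-positive rate on untouched videos would make it easier to compare detectors fairly, since a detector that simply flags everything would otherwise look strong.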
It was important to conduct the STS research prior to working on my technical project due to the topic's complex nature and potential ethical concerns. First, I had to better understand the intricacies of how the technology works and dispel some misinformation surrounding it. For example, a Deepfake can be generated using either a conventional neural network or a GAN (Generative Adversarial Network). Much of the literature discusses the topic as if most algorithms use GANs, but through my research I discovered that only a very small percentage do. Another consideration was the ethical implications of the research project. I will have to make sure that any Deepfakes generated for the technical project are created using images of myself or another consenting adult and are deleted once the project is finished. There were also potential legal considerations, as I was initially unsure whether the use of this technology was permitted in the first place. Overall, the complex nature of the topic meant that I wanted to research it fully before beginning the technical project, both to avoid potential ethical and legal conflicts and to carry out the technical project to the best of my ability.
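As a rough illustration of the distinction above, the sketch below contrasts the encoder/decoder networks used by most face-swap pipelines with the extra adversarial discriminator that defines a GAN. The layer shapes are arbitrary assumptions chosen for illustration, not taken from any particular Deepfake implementation.

    import torch.nn as nn

    # Most face-swap pipelines use an autoencoder-style setup: a shared encoder
    # learns a general face representation, and a per-identity decoder
    # reconstructs that representation as the target person's face.
    encoder = nn.Sequential(
        nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(),
        nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(),
    )
    decoder_target = nn.Sequential(
        nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
    )

    # A GAN adds a discriminator that the generator is trained to fool;
    # only a minority of Deepfake generators are actually trained this way.
    discriminator = nn.Sequential(
        nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.LeakyReLU(0.2),
        nn.Flatten(), nn.LazyLinear(1), nn.Sigmoid(),
    )

The practical consequence is that detection cues tuned to GAN artifacts will not necessarily transfer to the autoencoder-based fakes that make up most of what circulates online.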

Degree:
BS (Bachelor of Science)
Keywords:
Deepfake, Artificial Intelligence, Law
Notes:

School of Engineering and Applied Science
Bachelor of Science in Computer Science
Technical Advisor: Roseanne Vrugtman
STS Advisor: Aaron Bloomfield
Technical Team Members: Carl Spana

Language:
English
Rights:
All rights reserved (no additional license for public reuse)
Issued Date:
2021/12/17