A Meta-Study on Deepfake Detection Methods; The Influence of Deepfake Videos Intended for Entertainment on American Politics
Bilenkin, Sadie, School of Engineering and Applied Science, University of Virginia
Wayland, Kent, Engineering and Society, University of Virginia
Morrison, Briana, EN-Comp Science Dept, University of Virginia
Deepfake technology is often met by the public and the media with a sense of fear. Since deepfake technology first appeared, journalists have tended to portray it in a negative light, cautioning readers about the threats it poses to society: spreading misinformation, provoking shock or anger in groups, and undermining trust in videos as evidence. Though these predictions have not entirely come true, the capabilities deepfake technology has demonstrated so far raise the question of what power deepfake videos hold over the public and how that power can be controlled. In recent years, deepfake technology has become more advanced and more widely available, making it harder for deepfake detection methods to keep up with current deepfakes and more likely that internet users will encounter deepfake videos that may or may not be identified as fake. While some companies have written policies attempting to limit the ability of deepfake videos to cause harm by deceiving viewers, these policies tend to have aspirations beyond the capabilities of current detection technology. Additionally, the impact of deepfake videos that do not intend to deceive viewers, such as those labeled as fake or considered obviously fake, should not be neglected. For my technical project, I studied the current state of deepfake detection technology. For my STS research paper, I analyzed the impact of deepfake videos intended for entertainment on American politics.
In my technical project, I studied the deepfake detection methods that exist today and how, given their strengths and weaknesses, they could serve the goals of policies attempting to control deepfake videos posted online. Current detection methods fall into two categories: active methods, which rely on access to, and the ability to modify, the original photo or video, and passive methods, which must work from the potentially fake photo or video alone. I studied several specific methods within each category, assessing their capabilities and limitations to determine how existing detection methods can best be used to reliably detect deepfake videos in the real world. Each passive method had a weakness to some type or quality of deepfake video; a few, for example, were less effective on compressed videos. This limits their usefulness in the real world, where they would face a wide range of videos. The active methods came with drawbacks of their own, chiefly that both of the active methods I studied required the original image or video to be watermarked before a deepfake of it could be detected. I concluded that the best use of existing detection technology is for a social media platform that wants a watermark-based active method to watermark photos and videos as they are posted, and to fall back on a combination of passive detection methods when no watermarked original is available.
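To make the active/passive distinction concrete, the following is a minimal Python sketch of a watermark-based active check. The fragile least-significant-bit scheme, the shared key, and both function names are illustrative assumptions for this record, not the watermarking schemes examined in the study:

    import numpy as np

    def embed_watermark(image: np.ndarray, key: int = 42) -> np.ndarray:
        # Write a keyed pseudorandom bit pattern into each pixel's lowest
        # bit (assumes an 8-bit grayscale or RGB array). Any pipeline that
        # regenerates pixels, as deepfake synthesis does, destroys this
        # fragile pattern.
        rng = np.random.default_rng(key)
        bits = rng.integers(0, 2, size=image.shape, dtype=image.dtype)
        return (image & np.uint8(0xFE)) | bits

    def watermark_intact(image: np.ndarray, key: int = 42,
                         threshold: float = 0.99) -> bool:
        # Active check: it needs the shared key, i.e. cooperation from
        # whoever posted the original. A low match rate between the stored
        # bits and the expected pattern suggests the frame was manipulated.
        rng = np.random.default_rng(key)
        bits = rng.integers(0, 2, size=image.shape, dtype=image.dtype)
        return np.mean((image & 1) == bits) >= threshold

A passive method, by contrast, would have to score the suspect video with no key and no original to compare against, which is why the conclusion above recommends combining several passive methods when no watermarked original exists.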
My STS research paper explored the impact of deepfake videos meant for entertainment on American politics. I collected and analyzed the top 100 comments on a popular deepfake video that was made with the intention of being entertaining; it depicted President Joe Biden playing a drum and former President Donald Trump dancing to the song “Ievan Polkka.” Most of the 100 comments were jokes, praise for the video, or a combination of the two. For each comment, I assessed how plausibly it could be read as making a political statement, and I found that 60 of the 100 comments had no apparent political interpretation. The remaining 40 ranged from jokes that could possibly carry a political reading to jokes specifically about Biden or Trump as political figures. From this I concluded that entertainment deepfakes featuring political figures prompt some viewers to respond with political opinions. This is not unique to deepfakes, since comments on any internet post featuring a political figure would likely include some political opinions about that figure, but what sets deepfake videos apart is their ability to obscure that anything political is being conveyed at all. Consciously or not, the creators of deepfake videos can pass off their political opinions as entertainment or as experiments with the technology.
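For readers curious how such a coding pass might be tallied, a minimal sketch follows. The label names and example rows are hypothetical placeholders, not the actual coding scheme or data from the paper:

    from collections import Counter

    # Each comment is hand-labeled twice: once for content type and once
    # for how plausibly it can be read as political.
    labeled_comments = [
        {"text": "...", "type": "joke",   "political": "none"},
        {"text": "...", "type": "praise", "political": "possible"},
        {"text": "...", "type": "joke",   "political": "explicit"},
    ]

    type_counts = Counter(c["type"] for c in labeled_comments)
    political_counts = Counter(c["political"] for c in labeled_comments)

    # On the paper's data, political_counts["none"] comes out to 60 of 100.
    print(type_counts)
    print(political_counts)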
Through my technical project, I gained a better understanding of the current state of deepfake detection technology. I would have liked a more definitive answer to how existing detection technology can best fulfill the goals of the policies on deepfake videos, but no method or combination of methods is currently reliable enough for real-world scenarios. Watermarking could have potential if reliable ways to watermark original images and videos can be established. My STS research gave me some insight into how viewers react to deepfake videos containing political figures, though I would have liked to find more examples of videos, particularly ones presented as entertainment that more clearly carried the political biases of their creators. Future research could include such videos, as well as a different type of comment analysis that categorizes tone in addition to political content.
BS (Bachelor of Science)
Deepfake, Deepfake detection
School of Engineering and Applied Science
Bachelor of Science in Computer Science
Technical Advisor: Briana Morrison
STS Advisor: Kent Wayland
English
All rights reserved (no additional license for public reuse)
2024/05/09