Self-Correcting Ping Pong Ball Launcher; The Impact of Deepfakes on Misinformation and Society

Chang, Angus, School of Engineering and Applied Science, University of Virginia
Baritaud, Catherine, EN-Engineering and Society, University of Virginia
Powell, Harry, EN-Elec & Comp Engr Dept, University of Virginia

The field of artificial intelligence (AI) and machine learning (ML) is rapidly advancing, prompting growing concern about its potential applications. On the technological side, the report explores self-correction methods for a ping-pong-ball-launching robot. Because this kind of development can be extended to autonomous weaponry, it was important to see how far the technology can go in order to understand its subsequent effects on society. The purpose of the STS research paper was to further define the effects of malicious AI usage. This research examined deepfake videos, which are also a product of AI, and how they cause misinformation to propagate. Both autonomous weaponry and deepfake videos are made possible by advances in AI, and both raise serious questions about whether they can be used ethically at all. The alarming factor is how accessible they may become in the near future.
By taking on the development of a self-targeting and self-correcting ping pong ball launcher, the technical research shows that this technology can be assembled relatively easily, even by undergraduate students, albeit at a rudimentary level. The system's core self-correction relies on the Kalman filter, an estimation algorithm that is freely available to study online. This adds to the credibility of claims that AI technology is becoming too accessible.
The technical team designed the system from start to finish, including the launching mechanisms and the communication between the software feedback system and the accompanying microcontroller. The resulting system is able to target a specific area and successfully land a shot. On the small chance that it misses, the self-correction system kicks in and adjusts the initial aiming trajectory in order to land a successful shot on the next attempt. Although construction constraints imposed some limitations on the system, the main AI component was still developed enough to be labeled self-correcting.
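The correction loop described above can be sketched with a one-dimensional Kalman filter. This is an illustrative reconstruction, not the team's actual code: it assumes a constant aim offset (in degrees) and treats each launch as a noisy measurement of the miss distance, with all names and parameter values hypothetical.

```python
def kalman_update(estimate, variance, measurement, meas_variance):
    """One Kalman filter update step for a static 1D state (aim offset)."""
    gain = variance / (variance + meas_variance)       # Kalman gain
    new_estimate = estimate + gain * (measurement - estimate)
    new_variance = (1.0 - gain) * variance             # uncertainty shrinks
    return new_estimate, new_variance

# Start with no knowledge of the aim offset and high uncertainty.
estimate, variance = 0.0, 10.0
misses = [2.1, 1.8, 2.4, 1.9]   # hypothetical measured miss offsets (degrees)
for z in misses:
    estimate, variance = kalman_update(estimate, variance, z, meas_variance=0.5)
    # After each shot, the launcher would subtract `estimate` from its aim.
```

After a few shots the estimate converges toward the true offset while the variance shrinks, which is why the launcher can land a successful shot on a subsequent attempt.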
Using a different example to examine broader societal impacts, the STS research paper looks at how deepfake videos cause misinformation to spread and why that spread is so harmful. The paper draws on real-world examples of deepfake videos being used maliciously, including newspaper reports and firsthand testimony. Another heavily used source type was peer-reviewed research into the effects of misinformation, including studies on politically motivated misinformation and on how deepfakes specifically can amplify the dangers of misinformation and erode trust in media institutions.
Finally, the STS research paper combines these pieces of evidence and lays out how deepfakes will affect different levels of society, from corporations to politicians to regular individuals. If allowed to develop unchecked and become more accessible, deepfake technology will negatively impact everyone. It is not an isolated threat limited to certain groups; the research has made clear that deepfake videos are advancing faster than the measures available to detect or prevent them.
Although it may not seem like an urgent threat, the rapid development of AI technology means it will soon be capable of serious disruption, if it is not already. The STS research paper has shown the potential impacts of one type of AI, and the technical research has shown that this kind of technology is accessible to and easily developed by anyone with some engineering experience. Taken together, the accessibility and capability of AI demonstrate a need for preventative measures to catch up.

BS (Bachelor of Science)
Deepfake, Misinformation, Self-correction, Artificial Intelligence, Social Construction of Technology

School of Engineering and Applied Science
Bachelor of Science in Computer Engineering
Technical Advisor: Harry Powell
STS Advisor: Catherine Baritaud
Technical Team Members: Kai Barzdukas, Angus Chang, David Chen, Jacob Coughlin

All rights reserved (no additional license for public reuse)
Issued Date: