Violence-Detecting Machine Learning System: An Analysis of Bias and Privacy Concerns in Violence-Detecting Surveillance Systems
Damenova, Diana, School of Engineering and Applied Science, University of Virginia
Jacques, Richard, EN-Engineering and Society, University of Virginia
Morrison, Briana, EN-Comp Science Dept, University of Virginia
Introduction:
In the summer of 2020, my friends and I entered an artificial intelligence hackathon whose challenge was to create a project with some form of net positive social impact. My team created a machine learning image-classification algorithm that detects violence in an image with 62% accuracy. The algorithm would be linked to law enforcement dash and body cameras to automatically detect violence, notify a third party to send additional help, and automatically back up the footage to the cloud, in the hopes of making law enforcement interactions safer and holding all parties accountable. This project is the focus of my capstone technical report. My STS research paper evaluates how effective this project would be at solving the very complicated social problem of insufficient law enforcement accountability and safety. The two projects are therefore closely related: understanding the hackathon project's real efficacy requires understanding both its social and societal impacts and its technical nuances and validity.
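The detect, notify, and back-up flow can be summarized in a short sketch. The code below is a minimal illustration only: the function names (classify_frame, back_up_to_cloud, notify_third_party) and the 0.5 decision threshold are hypothetical placeholders, not the hackathon project's actual interfaces.

import numpy as np

VIOLENCE_THRESHOLD = 0.5  # assumed decision threshold, not from the report


def classify_frame(frame: np.ndarray) -> float:
    """Placeholder for the trained classifier: returns the estimated
    probability that the frame depicts violence."""
    raise NotImplementedError("plug in the trained model here")


def back_up_to_cloud(frame: np.ndarray) -> None:
    """Placeholder: persist the flagged footage to cloud storage."""


def notify_third_party(score: float) -> None:
    """Placeholder: alert an external party to send additional help."""


def handle_frame(frame: np.ndarray) -> None:
    """Run detection on one frame and trigger the alert path if needed."""
    score = classify_frame(frame)
    if score >= VIOLENCE_THRESHOLD:
        back_up_to_cloud(frame)    # preserve the footage immediately
        notify_third_party(score)  # request extra help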
Project Summaries:
My capstone technical report covered the aforementioned hackathon project. During the hackathon, my group trained Google's MobileNet V2 image-classification machine learning model to detect violence in frames of video footage. The algorithm ran locally on a smartphone, with an Arduino sending the smartphone one frame of live footage out of every 30 captured. The goal was to design a system that could detect a violent altercation in real time, immediately store the footage in the cloud, and send the footage to a third party for investigation. The result was detection with 62% accuracy, and the project won first place and the grand prize in the hackathon. If the project were continued, it would be worthwhile to introduce multiple camera feeds, fully implement the system's cloud technologies, and test the algorithm on actual dash camera footage, whose lower quality may decrease the algorithm's accuracy.
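To illustrate, a MobileNet V2 classifier of this kind could be assembled with TensorFlow's Keras API roughly as follows. This is a sketch under stated assumptions: the 224x224 input size, the frozen ImageNet-pretrained backbone, and the single sigmoid output head are common transfer-learning defaults, not details taken from the report.

import tensorflow as tf

# MobileNetV2 backbone pretrained on ImageNet, with its original
# classification head removed (include_top=False).
backbone = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,
    weights="imagenet",
)
backbone.trainable = False  # freeze pretrained weights for transfer learning

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(frame is violent)
])

model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=["accuracy"],
)

# Training would then use frames labeled violent / non-violent, e.g.:
# model.fit(train_frames, train_labels, validation_data=..., epochs=...)

Freezing the pretrained backbone keeps the model small and fast enough to plausibly run locally on a smartphone, consistent with the on-device design described above.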
My STS research paper explores the social implications of implementing such a solution on a national scale. The central research question is: to what extent do the positive social impacts of implementing a wide-scale violence detection surveillance system outweigh the negative ones? The negative implications are evaluated through the two lenses of bias in machine learning algorithms and the privacy concerns of increased public surveillance. The positive implications are evaluated through the lens of the accountability and scalability benefits of this sort of technology. The paper includes a literature review of work evaluating similar technologies and proposed solutions to their social problems, as well as an analysis of case studies and research on the artificial intelligence surveillance systems already deployed in public spheres such as London and Beijing. Ultimately, the paper concludes that the negative implications outweigh the positive, primarily because the project introduces new problems of algorithmic bias and heightened privacy concerns. The stakes of law enforcement interactions are very high: their outcomes can affect people's physical well-being, criminal records, and careers. Any algorithmic bias or sociotechnical failure, no matter how statistically small, can seriously and detrimentally impact someone's life. This is unacceptable, and thus this type of technology should not be deployed to solve this problem.
Conclusion:
Engineers often attempt to build technical solutions to extremely complicated social problems. To have any positive impact, such solutions must be designed and executed with considerable social awareness and consideration; without them, technological solutions risk doing more harm than good.
While it is an interesting and worthwhile endeavor to understand the technical nuances of such a solution, it is also necessary to understand its social efficacy. In this case, the proposed solution was found to have many limitations that its benefits do not outweigh. From bias to privacy concerns, these deep limitations are not worth accepting when the stakes of law enforcement altercations are so high. Through this process, I have learned that an especially important skill for an engineer is knowing which kinds of problems they can and cannot solve. Law enforcement accountability, in storing footage and in general, is an extraordinarily complicated issue that no single piece of technology can solve. Engineers need to internalize the limitations of technological systems more often, because existing technology can frequently do more unintended harm than good.