The Effects of Clipping on Differentially Private Machine Learning; Technological Politics Within the Facial Recognition Technologies Used by Law Enforcement That Shape the Dynamics of Power and Privilege in Our Society

Author:
Patel, Kush, School of Engineering and Applied Science, University of Virginia
Advisors:
Laugelli, Benjamin, University of Virginia
Evans, David, EN-Comp Science Dept, University of Virginia
Abstract:

The technical piece of my research focuses on securing machine learning algorithms. As machine learning becomes increasingly prevalent in consumer-facing applications, methods must be in place to ensure that personal and sensitive information is not exposed to hackers and other malicious parties. The social piece of my research focuses on the technological politics at play within the facial recognition technologies used by law enforcement, which shape the dynamics of power and privilege in our society. My technical and STS projects both address aspects of the same socio-technical problem: preserving privacy in machine learning.
Specifically, my technical work focused on evaluating different clipping thresholds that can be used to adjust noise and control exploding gradients during the training of a machine learning algorithm. In my experiment, I examined ranges of both fixed and dynamic clipping thresholds to validate the design of my solution, where dynamic clipping is defined as a training strategy in which the clipping threshold is changed at every epoch. The experiment analyzes test accuracy and attack success rate to support my claim for the adoption of my solution.
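To make the clipping step concrete, the sketch below illustrates differentially private gradient clipping in plain NumPy: each example's gradient is clipped to an L2 threshold before averaging, Gaussian noise scaled to that threshold is added, and the threshold is either held fixed or varied per epoch. The per-example gradients, noise multiplier, and decay schedule here are illustrative assumptions for exposition, not the values or method from my experiment.

import numpy as np

def clip_and_noise(per_example_grads, clip_threshold, noise_multiplier, rng):
    """Clip each example's gradient to L2 norm <= clip_threshold,
    average, then add Gaussian noise calibrated to the threshold
    (DP-SGD style; parameters here are illustrative)."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down only if its norm exceeds the threshold.
        clipped.append(g * min(1.0, clip_threshold / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise standard deviation is proportional to the clipping threshold,
    # divided by the batch size because we noise the averaged gradient.
    noise = rng.normal(0.0,
                       noise_multiplier * clip_threshold / len(per_example_grads),
                       size=mean_grad.shape)
    return mean_grad + noise

rng = np.random.default_rng(0)
grads = rng.normal(size=(32, 10))  # hypothetical batch of per-example gradients

# Fixed clipping: one threshold reused for every epoch.
fixed_update = clip_and_noise(grads, clip_threshold=1.0, noise_multiplier=1.1, rng=rng)

# Dynamic clipping: the threshold changes at each epoch
# (a simple decay schedule, assumed here purely for illustration).
for epoch in range(5):
    threshold = 1.0 * (0.9 ** epoch)
    update = clip_and_noise(grads, threshold, noise_multiplier=1.1, rng=rng)

Because the added noise scales with the clipping threshold, the choice of threshold directly trades off gradient fidelity against noise magnitude, which is why the experiment compares test accuracy and attack success across threshold settings.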
As mentioned earlier, the social piece of my research focuses on the technological politics at play within the facial recognition technologies used by law enforcement, which shape the dynamics of power and privilege in our society. I used the concept of technological politics to study the design architecture, engineering team composition, and federal regulation of the facial recognition systems used by law enforcement and government agencies, in order to examine how these artifacts shape dynamics of power and privilege that empower certain groups while marginalizing and excluding others. The analysis of design architecture makes clear that heavy reliance on the dataset used to train the underlying machine learning algorithm is a key entryway for unintentional bias in facial recognition systems. Furthermore, non-diverse development teams, coupled with loose federal regulation of the technologies used in law enforcement, contribute extensively to shaping these dynamics of power and privilege. With this awareness in mind, the general reader will be more cognizant of this pressing societal issue.
I believe that working on the social and technical pieces of my capstone simultaneously greatly improved my understanding of the topics relevant to my research and industry. While the two projects address the same underlying problem, they approach it in inherently different ways. While working on the technical portion, I was able to think about how the insights I drew could be applied to solve the problem in practice. Conversely, when considering the social influences on data privacy, I found myself ruminating on how the problem might be solved technically.

Degree:
BS (Bachelor of Science)
Keywords:
technological politics, machine learning, clipping, facial recognition, data privacy
Notes:

School of Engineering and Applied Science
Bachelor of Science in Computer Science
Technical Advisor: David Evans
STS Advisor: Benjamin Laugelli
Technical Team Members: Bargav Jayaraman

Language:
English
Issued Date:
2021/05/12