An Examination of Machine Learning Used in Facial Recognition Technology; Algorithmic Injustice: The Social Costs of Facial Recognition Technology in Policing
Plummer, Jacob, School of Engineering and Applied Science, University of Virginia
Vrugtman, Rosanne, EN-Comp Science Dept, University of Virginia
Morrison, Briana, EN-Comp Science Dept, University of Virginia
Murray, Sean, EN-Engineering and Society, University of Virginia
Facial recognition technology (FRT) has become embedded in both private and public life, shaping how we secure our devices, how individuals are identified, and how law enforcement operates. The rapid adoption of these technologies has exposed systemic flaws, both in the design of the systems themselves and in the institutions that use them. My capstone project, which focuses on mitigating racial bias in FRT through machine learning, complements my STS research into how the unregulated deployment of FRT has exacerbated racial disparities in policing. Together, these projects highlight how the technology computer scientists build can have real-world ethical implications, and that technical improvements alone cannot resolve the disparities caused by FRT; reforms to policy, training, and oversight are also required.
My capstone project addresses racial bias in FRT systems, particularly the misidentification of darker-skinned individuals caused by biased training datasets. I conducted a meta-study of two recent machine learning approaches to reducing this bias. The first method, feature disentanglement, uses machine learning to separate demographic attributes such as race, gender, and age from identity features during model training. Models trained with this method showed a 17-31% reduction in bias when predicting these attributes in a controlled setting. The second method, synthetic data augmentation, uses generative AI to supplement underrepresented groups in biased training datasets with high-quality synthetic images. This technique improved recognition accuracy for dark-skinned individuals by 0.86%. While both solutions show promise in laboratory environments, real-world deployment remains a challenge due to factors such as inconsistent lighting and low-quality input from surveillance cameras. Until these issues are addressed, we must acknowledge the shortcomings of FRT and understand how the technology reinforces systemic inequities. Even a “fair” system can still perpetuate harm if used without proper policy and safeguards.
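To make the first approach concrete, the sketch below shows one common way feature disentanglement is realized: adversarial debiasing with a gradient-reversal layer, written in PyTorch. This is a minimal illustration under stated assumptions, not the specific architectures from the papers surveyed in the meta-study; the model name, layer sizes, and the toy encoder are all hypothetical.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Identity on the forward pass; flips and scales gradients on the
    # backward pass, so the encoder learns to defeat the demographic head.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DisentangledFaceModel(nn.Module):
    # Hypothetical model: a real system would use a face-recognition CNN
    # backbone rather than this small fully connected encoder.
    def __init__(self, feat_dim=128, n_identities=1000, n_demographics=4, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 112 * 112, 256), nn.ReLU(),
            nn.Linear(256, feat_dim),
        )
        self.identity_head = nn.Linear(feat_dim, n_identities)
        # Adversary tries to recover demographics from the embedding.
        self.demographic_head = nn.Linear(feat_dim, n_demographics)

    def forward(self, x):
        z = self.encoder(x)
        id_logits = self.identity_head(z)
        demo_logits = self.demographic_head(GradReverse.apply(z, self.lambd))
        return id_logits, demo_logits

# One training step on a dummy batch of 112x112 RGB face crops.
model = DisentangledFaceModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
ce = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 112, 112)
identity_labels = torch.randint(0, 1000, (8,))
demographic_labels = torch.randint(0, 4, (8,))

id_logits, demo_logits = model(images)
# The encoder is rewarded for identity accuracy but, via the reversed
# gradient, penalized whenever demographics remain predictable from z.
loss = ce(id_logits, identity_labels) + ce(demo_logits, demographic_labels)
opt.zero_grad()
loss.backward()
opt.step()

Synthetic data augmentation, by contrast, changes the data rather than the loss: AI-generated images of underrepresented groups are added to the training set before a standard model is trained.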
My STS research explores how FRT’s adoption in law enforcement has influenced racial bias through a historical analysis of facial recognition. This analysis focuses on three main problems. First, studies have shown false-positive rates for black individuals to be anywhere from 10 to 100 times higher than for white individuals, which has led to the wrongful arrests of several black citizens in recent years. These studies drew pushback from large corporations such as Amazon, which were keen to push their FRT products to market. Second, federal agencies have used FRT for years without even basic safeguards such as mandatory staff training or bias audits, and only 15 U.S. states have laws governing the use of FRT in law enforcement. Third, even where law enforcement has such safeguards, such as treating a positive FRT identification as an investigative lead rather than probable cause for an arrest, officers tend to circumvent protocol. Analyzing these problems through a historical lens highlighted how rapidly this technology has been adopted and how little the government has done in response.
When viewed together, the capstone and STS projects underscore the need for a dual-track solution. Continued research into eliminating technical bias and improving accuracy on low-quality inputs will resolve many of the technology’s problems, though a perfect system is virtually impossible. Because of this, legislation that prevents wrongful arrests and ensures the responsible use of FRT will allow this technology to serve as a powerful tool for good in law enforcement rather than an instrument of inequity. By combining ongoing technical advancements with clear legal standards and accountability, we can better ensure that facial recognition technology supports justice and public trust rather than undermining civil rights.
BS (Bachelor of Science)
Facial Recognition, Machine Learning, Racial Bias
School of Engineering and Applied Science
Bachelor of Science in Computer Science
Technical Advisor: Rosanne Vrugtman, Briana Morrison
STS Advisor: Sean Murray
Technical Team Members: Jacob Plummer
English
2025/05/10