Implementing Fairness Constraints For Bias Mitigation in FRT; Consequences of Biased FRT on Disadvantaged Communities

Author:
Scarlatescu, Matthew, School of Engineering and Applied Science, University of Virginia
Advisor:
Wylie, Caitlin, EN-Engineering and Society, University of Virginia
Abstract:

The broader issue that both my technical and STS research papers address is that racial bias exists in facial recognition technology (FRT) and that this bias can have serious negative consequences for marginalized groups. Documented examples include wrongful arrests based on faulty facial recognition matches and racist image labeling by social media algorithms. My technical research proposal relates to this problem by aiming to mitigate bias in existing models through technical means, while my STS research seeks to understand the causes of biased model creation, to analyze real-world examples of bias in FRT and their consequences, and to determine what can be done to prevent biased models from being created in the first place.

My technical research investigated the problem of FRT being less accurate at identifying the faces of people of color and other minorities than the faces of Caucasian people, and asked what technical methods can help mitigate this racial bias. My technical proposal uses adversarial training, which involves training machine learning models on data points specifically chosen to challenge a model's biases, in the hopes of closing the gap in model accuracy across racial demographics (Yang et al., 2023). Although my technical research is only a proposal, existing research shows that adversarial training is an effective strategy for debiasing machine learning models and increasing model robustness, so the strategies I discuss in my technical paper would likely help mitigate bias in FRT models.

My STS research investigated the social causes of racial bias in FRT models and their real-world impacts, such as cases of wrongful arrest, and proposed solutions that could prevent biased FRT models from being created in the first place. My paper analyzes specific cases: training datasets composed of images scraped from the internet without user consent, the wrongful arrest of Robert Williams (Hill, 2020), Amazon's Rekognition, which was found to have significant racial bias while being used by law enforcement agencies (Snow, 2018), and Detroit's Project Green Light, a surveillance program that mainly policed Black people (Detroit, 2019). In my analysis of these cases, I apply Algorithmic Accountability, an ethical framework that places responsibility for the ethical creation and use of AI on both the developers who build these systems and the customers, such as law enforcement agencies, who deploy them in real-world scenarios (Horneber & Laumer, 2023). I also advocate for federal regulation of FRT usage and mandated, rigorous testing for model bias. Through this case analysis, supported by scholarly sources, my STS research paper proposes solutions to help mitigate bias in FRT and its consequences.

Moving forward, researchers could build on my technical proposal by implementing the suggested solutions and testing models before and after implementation, and my STS research provides a solid foundation for understanding what causes racial bias in FRT. Both projects provide valuable insight into this issue. While they take different approaches to the problem of racial bias in FRT, since my technical research focuses on mitigating bias in existing models through technical means while my STS research aims to prevent biased models from being created through social reform, they share the same goal of creating fair FRT that is used equitably.
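To make the adversarial training idea mentioned above concrete, below is a minimal sketch of one common form of adversarial debiasing, in which an auxiliary adversary tries to predict a demographic group label from a face embedding and the encoder is trained to defeat it through gradient reversal. This is an illustration only, not the implementation proposed in the thesis; the architecture, class names, dimensions, and loss weighting are all hypothetical assumptions.

# Minimal sketch of adversarial debiasing with a gradient-reversal layer (PyTorch).
# All architecture choices and dimensions here are hypothetical illustrations.

import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; flips and scales the gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DebiasedFaceModel(nn.Module):
    def __init__(self, embed_dim=128, num_identities=1000, num_groups=4, lambd=1.0):
        super().__init__()
        # The encoder stands in for any face-embedding backbone (assumed 3x112x112 inputs).
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, embed_dim), nn.ReLU())
        self.identity_head = nn.Linear(embed_dim, num_identities)  # main recognition task
        self.adversary = nn.Linear(embed_dim, num_groups)          # tries to predict demographic group
        self.lambd = lambd

    def forward(self, images):
        z = self.encoder(images)
        identity_logits = self.identity_head(z)
        # The adversary sees the embedding through the gradient-reversal layer, so
        # training pushes the encoder to remove group information from the embedding.
        group_logits = self.adversary(GradientReversal.apply(z, self.lambd))
        return identity_logits, group_logits

def training_step(model, images, identity_labels, group_labels, optimizer):
    """One optimization step: recognize identities while penalizing recoverable group information."""
    ce = nn.CrossEntropyLoss()
    identity_logits, group_logits = model(images)
    loss = ce(identity_logits, identity_labels) + ce(group_logits, group_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In this sketch, the single combined loss works because the gradient-reversal layer already negates the adversary's gradient with respect to the encoder; whether such an approach narrows accuracy gaps across demographic groups would still need to be verified empirically, as the abstract notes.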

Degree:
BS (Bachelor of Science)
Keywords:
bias, facial recognition, AI
Language:
English
Rights:
All rights reserved (no additional license for public reuse)
Issued Date:
2025/05/06