Analyzing Biases in Visual Recognition Models

Author:
Ranjit, Jaspreet, Computer Science - School of Engineering and Applied Science, University of Virginia
Advisor:
Ordóñez, Vicente, Computer Science, University of Virginia
Abstract:

With the rise of deep learning models that require exceedingly large amounts of data, there is a need to examine the biases reflected in the applications of these models. For example, a visual recognition model can learn image representations of cooking that are closer to its representations of women than of men, reinforcing the negative gender stereotype of women as homemakers. This thesis explores and analyzes these biases across state-of-the-art visual recognition models. Deep learning models rely on large amounts of annotated data for training. Such data is difficult to collect and is often aggregated from human annotators or scraped from the Internet; as a result, large, publicly available datasets can reflect societal biases, and the annotations provided by human labelers carry their individual biases. These biases can propagate into a model during training and potentially be amplified in its predictions. With rising concerns of discrimination and bias in deep learning, it is imperative to investigate the fairness and equity of these systems for all users.

Current bias identification pipelines target the explicit predictions of a model, often overlooking the implicit feature representations that contribute to biased predictions. The goal of this research is to investigate and compare gender biases across visual recognition models by quantifying bias relationships at the feature representation level. This is accomplished by exploring metrics that capture the spatial relationships among classes in the feature representations of a deep neural network, and by investigating factors that contribute to biases with respect to classes of images that co-occur with different genders. This work demonstrates that the source of this bias can be better understood by comparing trends in feature representations for a group of classes across visual recognition models with different training objectives. The work presented in this thesis serves as an exploratory step toward a bias identification pipeline that examines gender bias relationships beyond the explicit predictions made by a model, and it can be extended to other societal biases, such as racial and religious biases. With the release of many deep learning models trained on millions of images, we hope the work presented in this thesis provides more transparency into how these models represent gender and encode bias at the feature level.
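As a minimal illustration of measuring bias at the feature level, and not the exact metric developed in the thesis, one simple approach is to compare cosine similarities between the mean feature vector of an object class and the mean feature vectors of images depicting each gender. The sketch below assumes precomputed feature matrices; the function names and the random placeholder features are hypothetical and stand in for features extracted from an actual model.

import numpy as np

def class_centroid(features: np.ndarray) -> np.ndarray:
    # Mean feature vector for a class; features has shape (n_images, d).
    return features.mean(axis=0)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine of the angle between two feature vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def gender_bias_score(class_feats, male_feats, female_feats) -> float:
    # Positive score: the class centroid sits closer to the female centroid
    # than to the male centroid in feature space; negative: the reverse.
    c = class_centroid(class_feats)
    return (cosine_similarity(c, class_centroid(female_feats))
            - cosine_similarity(c, class_centroid(male_feats)))

# Illustrative usage with random placeholder features; in practice these
# would come from the penultimate layer of each visual recognition model,
# so the same class can be scored and compared across models.
rng = np.random.default_rng(0)
cooking_feats = rng.normal(size=(100, 512))
male_feats = rng.normal(size=(100, 512))
female_feats = rng.normal(size=(100, 512))
print(gender_bias_score(cooking_feats, male_feats, female_feats))

Recomputing such a score for the same set of classes under models with different training objectives is one way to compare how each model's feature space encodes gender, which is the kind of cross-model trend analysis the abstract describes.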

Degree:
MS (Master of Science)
Keywords:
Gender Bias, Foundation Models, Visual Recognition Models
Language:
English
Issued Date:
2021/12/12