Removing Gender Bias in Word Embeddings: Literature Review; Perceptions of Demographic Bias in the Natural Language Processing Academic Community

Author:
Wilson, Julian, School of Engineering and Applied Science, University of Virginia
Advisors:
Wayland, Kent, EN-Engineering and Society, University of Virginia
Graham, Daniel, EN-Comp Science Dept, University of Virginia
Vrugtman, Rosanne, EN-Comp Science Dept, University of Virginia
Abstract:

Natural language processing (NLP) provides a set of powerful tools to make lives easier and better, but NLP applications are susceptible to demographic bias: they can learn to behave differently based on characteristics such as gender, race, or age. These biases are harmful in several ways. Biased applications can propagate negative stereotypes about social groups, such as suggesting that a doctor is probably male. They can also provide different resources to different groups, such as failing to recommend women’s resumes. Additionally, biased tools can make technology less accessible to some, such as incorrectly flagging African Americans’ tweets as hate speech. My sociotechnical and technical projects summarize and analyze how researchers are addressing demographic bias in NLP, in order to demonstrate the current limitations of the technology and predict how it will evolve. I hope my work will help increase recognition of this issue and encourage appropriately careful use of NLP so that it benefits everyone equally.
My sociotechnical report explores how the academic natural language processing community perceives and reacts to demographic bias, and how these interactions have developed over time. I found that recognition of the problem of bias within the academic community has developed only recently. I collected all published papers related to NLP and selected those that mentioned bias in their titles. I observed a steady increase in both the number of papers and the proportion of all NLP papers that specifically address bias: in 2016, just 8 papers (0.19%) focused on bias, while in 2021, 118 papers (1.72%) did. These papers take a diverse set of approaches, but there has been criticism, particularly of how most researchers frame bias and justify their work; more recent papers and workshops have responded to these criticisms. The scope of research is currently limited: most papers study only English and focus on either gender or race bias. I concluded that the NLP academic community is growing more aware of the importance of demographic bias but still has significant work to do in addressing it.
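As a minimal sketch of this counting approach, the Python snippet below tallies, for each year, how many paper titles mention bias and what share of all collected papers they represent. The (year, title) input format and the simple keyword test are illustrative assumptions, not my exact collection pipeline.

    from collections import Counter

    def bias_paper_share(papers):
        # papers: iterable of (year, title) pairs, e.g. gathered from an
        # anthology of NLP publications (the data source is assumed here).
        totals = Counter()       # all papers per year
        bias_counts = Counter()  # papers per year whose title mentions bias
        for year, title in papers:
            totals[year] += 1
            if "bias" in title.lower():
                bias_counts[year] += 1
        # Map each year to (count, share of that year's papers).
        return {year: (bias_counts[year], bias_counts[year] / totals[year])
                for year in sorted(totals)}

    # For example, 8 bias papers out of roughly 4,200 NLP papers in 2016 and
    # 118 out of roughly 6,900 in 2021 would reproduce the 0.19% and 1.72%
    # shares reported above.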
My technical report summarizes the state of research on removing gender bias from word embeddings. I wanted to evaluate how effective existing approaches are at actually counteracting bias. I separated debiasing methods into two broad categories: removing gender information from existing embeddings and retraining models to learn gender-neutral embeddings. The first category is less disruptive to applications that already use the existing embeddings but is more restricted; Bolukbasi et al.’s hard debias and Wang et al.’s double-hard debias fall into this category. The second category is more expensive in time and resources but allows greater freedom; it includes Zhao et al.’s GN-GloVe model. Kumar et al. developed the RAN-GloVe model, which had the lowest Gender-based Illicit Proximity Estimate and one of the smallest gender biases on the WinoBias dataset. From my analysis, I found that comparing the performance of different models is difficult. I therefore suggest creating standard tasks and metrics for gender bias so that different approaches can be easily compared.
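To make the first category concrete, the sketch below shows the core "neutralize" step of Bolukbasi et al.’s hard debias, which removes the component of a word vector lying along a learned gender direction. It is a simplified illustration: the gender direction here comes from a single definitional pair, whereas Bolukbasi et al. use PCA over several pairs, and the embedding lookup is hypothetical.

    import numpy as np

    def neutralize(word_vec, gender_direction):
        # Project out the component of word_vec along the gender direction,
        # then re-normalize (the "neutralize" step of hard debias).
        g = gender_direction / np.linalg.norm(gender_direction)
        debiased = word_vec - np.dot(word_vec, g) * g
        return debiased / np.linalg.norm(debiased)

    # Hypothetical usage with a pre-trained embedding table `embeddings`:
    # gender_direction = embeddings["he"] - embeddings["she"]
    # embeddings["doctor"] = neutralize(embeddings["doctor"], gender_direction)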
I am satisfied with the results of my work this year, although both my sociotechnical and technical projects turned out to be significantly more limited in scope than I originally proposed in my prospectus; as I did more research, I realized that my original topics were too broad for this thesis. Going forward, there is considerable room for further work. My sociotechnical project analyzes how academic researchers interact with the problem of bias, but other social groups also have significant influence; future topics of interest may include how bias is framed in journalism or efforts to legislate machine learning technology. My technical report focuses on efforts to remove one specific form of demographic bias (gender bias) in a small subset of NLP technology (word embeddings); other research might examine attempts to reduce other kinds of bias in data sets or model selection.

Degree:
BS (Bachelor of Science)
Keywords:
Natural language processing, Bias
Notes:

School of Engineering and Applied Science
Bachelor of Science in Computer Science
Technical Advisor: Daniel Graham, Rosanne Vrugtman
STS Advisor: Kent Wayland

Language:
English
Rights:
All rights reserved (no additional license for public reuse)
Issued Date:
2022/05/13