Examining Algorithmic Fairness in Loan Classification Models; The Rise of AI in Credit Risk Assessment for Loan Approval: Machines Are Not as Objective as They Seem
Ganapolsky, Sophie, School of Engineering and Applied Science, University of Virginia
Nekipelov, Denis, AS-Economics (ECON), University of Virginia
Laugelli, Benjamin, EN-Engineering and Society, University of Virginia
My technical report and STS research paper are tightly connected: both focus on racial bias in machine learning models for credit risk assessment in the context of loan approval. This form of disparate treatment existed long before the automation of the loan approval process and remains a topic of great controversy. With the rise of artificial intelligence systems in recent years, it is crucial to consider how the transfer of decision-making from humans to machines will affect those who have been historically disadvantaged in these decisions. My STS research paper explores this topic from a pure research perspective, examining the technology through the lens of Technological Politics. My technical report, by contrast, is a product of applied research, involving the development of several machine learning models and the application of bias mitigation techniques for loan approval.
In the technical report, I develop and provide an overview of machine learning models for mortgage loan approval using the extensive 2017 Home Mortgage Disclosure Act (HMDA) dataset. I then apply algorithmic fairness techniques, such as equalized odds postprocessing (sketched below), to the models and discuss how they affect both racial bias and overall performance. The goal of this project is to provide insight into how machine learning algorithms absorb the racial bias in their training data into decision-making, and to review and suggest methods for mitigating that bias, so that those who develop and deploy this technology can avoid widening economic inequality among racial groups.
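For readers unfamiliar with equalized odds postprocessing, the sketch below illustrates the general technique using fairlearn's ThresholdOptimizer: group-specific decision thresholds are chosen so that true- and false-positive rates match across racial groups. This is a minimal illustration on synthetic data, not the thesis's actual HMDA pipeline; the features, labels, and group variable are hypothetical stand-ins.

```python
# Minimal sketch of equalized odds postprocessing with fairlearn.
# The synthetic data below stands in for preprocessed HMDA features;
# it is illustrative only, not the pipeline used in the thesis.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from fairlearn.postprocessing import ThresholdOptimizer
from fairlearn.metrics import MetricFrame, true_positive_rate, false_positive_rate

# Synthetic loan data: X = applicant features, y = approve/deny label.
X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
race = np.random.default_rng(0).choice(["Black", "White"], size=5000)

X_tr, X_te, y_tr, y_te, race_tr, race_te = train_test_split(
    X, y, race, test_size=0.3, random_state=0
)

# Baseline classifier trained on (possibly biased) historical labels.
base = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Equalized odds postprocessing: choose group-specific thresholds so that
# true- and false-positive rates are equalized across racial groups.
postproc = ThresholdOptimizer(
    estimator=base,
    constraints="equalized_odds",
    prefit=True,
    predict_method="predict_proba",
)
postproc.fit(X_tr, y_tr, sensitive_features=race_tr)
y_pred = postproc.predict(X_te, sensitive_features=race_te, random_state=0)

# Inspect error rates by group after postprocessing.
frame = MetricFrame(
    metrics={"TPR": true_positive_rate, "FPR": false_positive_rate},
    y_true=y_te,
    y_pred=y_pred,
    sensitive_features=race_te,
)
print(frame.by_group)
```

Comparing the per-group TPR and FPR before and after postprocessing makes the tradeoff discussed in the report concrete: equalizing error rates across groups typically costs some overall accuracy.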
The STS research paper pursues a similar goal: giving the engineers who build credit risk assessment algorithms for loan approval an awareness that this technology is inherently political. Using the Technological Politics framework, I argue that these algorithms both intentionally and unintentionally perpetuate racial discrimination and economic inequality, as they reflect and shape historical power relationships between Black and White Americans. To support this claim, I demonstrate that such machine learning models reproduce the bias in their training data, and I discuss how lending institutions' design decision to use uninterpretable models creates a lack of transparency that can further skew power relations between the two racial groups.
Because the topics of the STS research paper and technical report are so similar, I gained a deep understanding of the societal implications these algorithms may have, as well as of the difficulty of building algorithmically fair models for this task given significantly biased data and the tradeoffs introduced by bias mitigation techniques. Many of the sources I encountered while gathering evidence for my STS research paper also aided my technical report: they deepened my knowledge of the classification algorithms commonly used for this task, introduced me to the concept of algorithmic fairness, and helped me find relevant datasets and software packages. Completing both projects has prepared me well for a future career in data science, developing both my technical skills and my ability to apply ethics to engineering.
BS (Bachelor of Science)
Algorithmic Fairness, Machine Learning, Artificial Intelligence, Classification, Algorithmic Bias
School of Engineering and Applied Science
Bachelor of Science in Computer Science
Technical Advisor: Denis Nekipelov
STS Advisor: Benjamin Laugelli
Technical Team Members: N/A
English
All rights reserved (no additional license for public reuse)
2025/05/09