Optimization of Sustainable Energy Systems through Data Engineering; Biased Artificial Intelligence in the Policing Industry and the Impacts on Minority Communities

Thomas, McKayla, School of Engineering and Applied Science, University of Virginia
Morrison, Briana, Computer Science, University of Virginia
Foley, Rider, Engineering and Society, University of Virginia

Artificial intelligence is becoming a vital aspect of computing. AI systems are implemented in many critical industries, such as policing, government, and education, where their benefits center on reducing the time required for routine activities and lessening the workload on professionals. However, AI systems are known to exhibit significant bias against minority communities, often due to the overrepresentation or underrepresentation of marginalized groups in data. This issue arises because AI is fed data that reflects societal or personal discrimination. There are no strict laws preventing companies from deploying biased AI systems, so prejudiced AI could further exacerbate the existing struggles of minorities. I witnessed data bias firsthand during my summer internship; once I recognized the trends of bias in my own dataset, I was able to apply countermeasures to create a more inclusive dataset.

During the summer, I interned at a sustainable energy company whose goals centered on optimizing the purchasing and transportation of energy resources throughout the United States. I was tasked with designing and implementing a renewable energy dataset. My technical report describes the research I conducted and the development process of that dataset throughout my internship, and my STS research paper analyzes the effect of biased AI systems in the policing industry on minority communities.

For my internship tasks, I used annual Energy Information Administration (EIA) filings to gather the information needed to develop the Django models; each model corresponded to the fields in the filings. To create the Django models properly, I had to read through numerous spreadsheets. This was also my first time integrating an entire dataset into an existing code repository, which led me to research the proper way to design and organize datasets. During that research, I noticed how significantly bias can alter data. My course of action was to have people from many different backgrounds review my work. In addition, I requested an in-depth testing process for the application, which included extensive and inclusive beta testing. If a certain group struggled with aspects of the application, I altered it to incorporate their perspectives. The development of the application and dataset gave me the idea to write my STS research paper on the effects of bias on underrepresented groups.
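To illustrate the kind of data modeling described above, the sketch below represents a single record from an EIA-style filing as a data structure. The field names here are hypothetical examples of what such a filing might contain, and a plain Python dataclass stands in for the Django model used in the actual project:

```python
from dataclasses import dataclass

# Hypothetical fields of the sort an annual EIA filing might contain;
# in the actual project these would be columns on a Django model.
@dataclass
class PlantRecord:
    plant_name: str
    state: str
    fuel_type: str      # e.g. "solar", "wind", "natural gas"
    capacity_mw: float  # nameplate capacity in megawatts

    def is_renewable(self) -> bool:
        # Simple illustrative classification by fuel type.
        return self.fuel_type in {"solar", "wind", "hydro", "geothermal"}

record = PlantRecord("Example Plant", "VA", "solar", 120.0)
print(record.is_renewable())  # True
```

Structuring each filing field as an explicit, typed attribute like this is also what makes bias review practical: reviewers can check which categories (states, fuel types, plant sizes) are over- or underrepresented in the dataset.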

My STS research topic is a deep dive into the complex effects of bias within the policing industry. The two artificial intelligence technologies my research paper focuses on are facial recognition and predictive policing. Facial recognition software identifies human faces in images or videos; it can determine whether the faces in separate images belong to the same individual or search for a face in large databases. Because minority groups are underrepresented in its data, the software often fails to recognize minority features properly, which can lead to innocent people being wrongly detained or accused of crimes they did not commit. Predictive policing uses datasets to predict whether a person is likely to commit a crime; once flagged, the person is added to a watchlist. This can lead to over-policing of minority communities. When bias is factored in, these technologies can permanently alter an individual's life. Because AI is implemented in such a crucial area, it should not operate under discriminatory practices.

Through the development of my STS research topic, I realized which area of AI implementation I wanted to focus on: the policing industry. Once I narrowed my topic, I set goals for the paper: to identify common uses of artificial intelligence in policing, to understand the faults of bias within the technology, and to communicate my findings clearly. At the beginning of the process, I struggled to find the information the paper required; there were plenty of journals, articles, and papers about AI, but few specifically about the policing industry. Using the resources available at UVA, however, I was able to find what I needed. I concluded that biased AI is extremely harmful to minority groups, causing over-policing and wrongful detainment, and that it will only deepen the divide between minority communities and the police. At the end of the process, I felt I had accomplished my predefined goals.

BS (Bachelor of Science)
Artificial Intelligence

School of Engineering and Applied Science
Bachelor of Science in Computer Science
Technical Advisor: Briana Morrison
STS Advisor: Rider Foley
Technical Team Members: McKayla Thomas

Issued Date: