Risk Classification of Stereotactic Body Radiation Therapy Applied to Thoracic Cancers: An Analysis of Systemic Bias Introduced by Machine Learning Applications
Fishbein, John, School of Engineering and Applied Science, University of Virginia
Ferguson, Sean, EN-Engineering and Society, University of Virginia
Weaver, Alfred, EN-Computer Science, University of Virginia
Over the past several decades, the power of machine learning has become widely recognized. The versatility and vast capability of this family of technologies have been extensively researched and improved upon, but the scientific community has only recently begun to consider the social issues that can arise. As an aspiring machine learning professional, I sought to apply this advanced technology in a new way in an attempt to solve a known problem in the field of radiation oncology. At the same time, analyzing this technology from a sociotechnical perspective, I aim to present the issues of bias that can be introduced in machine learning applications and to discuss how professionals can become better equipped to handle and prevent these issues moving forward.
In modern radiation oncology, many treatments consist of exposing a patient to high levels of radiation, and radiation exposure can produce a range of negative side effects. With medical imaging technology, we can determine the exact amount of radiation delivered to each coordinate of the patient's body. These data have the potential to provide insight into the post-treatment complications that can ensue. In my technical capstone project, I aimed to use machine learning techniques on these data to estimate the risk of a medical complication for a given treatment plan. The technical capstone that follows details the many steps taken to attack this problem and to produce a model capable of making predictions from these data. Although the accuracy of this model must be significantly improved, we can conclude that the data are indicative of the complications that are observed. As such, we are optimistic that once a larger dataset is collected, a useful model can be built.
Machine learning has become a very popular approach to many of today's problems. This is in part due to the incredible potential of this family of technologies, but it is also a function of their wide accessibility. In recent years, researchers have demonstrated that many modern applications of machine learning, including facial recognition, Google Search/Ads, and many others, contain different types of bias in their decisions. The STS thesis that follows presents many distinct instances of this same issue. In the context of other professional work done in the field, I go on to analyze this problem more deeply. Furthermore, drawing on recent breakthroughs in the field of AI explainability, I discuss a potential step forward in preparing machine learning professionals to identify and prevent these issues.
Both the technical capstone and the STS thesis that follow highlight the power of machine learning in modern society. Given the wide assortment of societal issues that can arise when this technology is employed at scale, engineers and other professionals in the field must be aware of these issues so that they can operate successfully in our technical society.
I would like to thank my technical supervisor Dr. Wijesooriya, my capstone advisor Dr. Weaver, and my STS advisor Professor Ferguson for all of the guidance they have given me through this process.
BS (Bachelor of Science)
Machine Learning Explainability, Algorithmic Bias
School of Engineering and Applied Science
Bachelor of Science in Computer Science
Technical Advisor: Alfred Weaver
STS Advisor: Sean Ferguson