Synthesis of Software Development and Database Systems; Explainability in AI for the Purpose of Contestability

Author:
Taylor, John, School of Engineering and Applied Science, University of Virginia
Advisors:
Graham, Daniel, EN-Comp Science Dept, University of Virginia
Wayland, Kent, EN-Engineering and Society, University of Virginia
Abstract:

A common problem with many Artificial Intelligence systems is a lack of transparency about their internal models, which makes the results of these systems hard to explain. This lack of explainability is a serious issue because these systems' decisions can have significant impacts on real people. These systems are not infallible, and it is important that measures are in place for humans to step in and correct mistakes made by AI. These systems cannot be fairly contested if they cannot be explained. My STS thesis examines how humans view AI decisions and what it means for these systems to be explainable and contestable. My technical paper is a review of the computer science curriculum at the University of Virginia School of Engineering and how it can be improved, focusing on the lack of real-world development experience in the first few years of the curriculum. It is only loosely coupled with the STS paper, in that improvements to UVA's curriculum could better prepare students to address problems such as the explainability of AI.
For my STS thesis, I sought to determine what it means for AI to be explainable and contestable, as well as the current strategies for addressing the problem of explainability in AI. I examined surveys of AI experts about what is needed for systems to be contestable and found that explainability is an important prerequisite for contestability. In my review of the literature, I found that explainability depends on who the system is being explained to. I found multiple models that can be used in AI systems to increase explainability, but there is always a tradeoff between the power of a model and its explainability. I also found research on how to simplify models without losing functionality, as well as tools that estimate the weight a model gives to each of its inputs in order to provide more explainability.
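As an illustrative sketch (not code from the thesis itself), scikit-learn's permutation_importance is one such model-agnostic attribution tool: it shuffles each input feature and measures how much the model's accuracy degrades, giving a rough estimate of how heavily the model leans on that input. The dataset and model below are assumptions chosen only for the demonstration.

    # Hypothetical example of an input-attribution tool; dataset and
    # model are placeholders, not those studied in the thesis.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature and record the drop in test accuracy; larger
    # drops suggest inputs the model weights more heavily.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    ranked = sorted(zip(X.columns, result.importances_mean),
                    key=lambda pair: -pair[1])
    for name, score in ranked[:5]:
        print(f"{name}: mean accuracy drop {score:.3f}")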
For the technical capstone, I reviewed all of the classes I took for my CS degree, examining the topics covered and the types of assignments completed in each. I found that students can reach their fifth or sixth semester without encountering any large-scale development projects that resemble the kind of work they will do after leaving college. These projects are necessary for students to gain confidence in their ability to take what they learn in class and apply it in a useful way. I offered some potential changes to the curriculum, such as recommending that Advanced Software Development be taken earlier in students' time at UVA and that Database Systems potentially be made a required class. These changes would give students development experience earlier in their time at UVA.
I believe that my STS thesis succeeds in explaining the importance of explainability in AI. It shows why contestability matters and why explainability is a prerequisite for contestability. It does not offer any new techniques for addressing the problem of explainability, but it provides an overview of the field of explainable AI and gives developers important perspectives to consider when deciding how to construct AI systems that will have significant impacts on individuals. My technical paper provides insight into a gap in the CS curriculum, and some of the changes I recommended are similar to those made in the department's new curriculum.

Degree:
BS (Bachelor of Science)
Keywords:
Thesis Portfolio
Notes:

School of Engineering and Applied Science
Bachelor of Science in Computer Science
Technical Advisor: Daniel Graham
STS Advisor: Kent Wayland

Language:
English
Rights:
All rights reserved (no additional license for public reuse)
Issued Date:
2022/05/13