Capstone/Technical Report; STS Research Paper

Author:
Chen, Shenghui, School of Engineering and Applied Science, University of Virginia
Advisors:
Feng, Lu, EN-Comp Science Dept, University of Virginia
Odumosu, Toluwalogo, EN-Engineering and Society, University of Virginia
Jacques, Richard, EN-Engineering and Society, University of Virginia
Abstract:

In recent years, dramatic success in machine learning has led to a torrent of Artificial Intelligence (AI) applications. Continued advances promise to produce autonomous systems that perceive, learn, decide, and act on their own. Notable examples include autonomous driving, drones, and medical assistive technologies. However, while these applications show great potential for improving quality of life and convenience, one also has to recognize the risks they bring. In particular, the deep learning modules commonly used in such autonomous systems are widely believed to be powerful yet inherently complicated and uninterpretable, making them black-box models. In this thesis portfolio, I conduct research on the technical front to improve the transparency of robotic planning systems through the generation of contrastive explanations, published at the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). From a Science, Technology, and Society (STS) perspective, I analyze the status quo of black-box models in safety-critical autonomous systems, their possible causes, and their impact on different stakeholders, demonstrating my argument through two case studies.

In this work, the technical paper delves into the domain of Explainable AI (XAI), aiming to improve user understanding of, and trust in, autonomous robotic planning systems by introducing new concepts and algorithms for generating contrastive explanations. The STS research provides the broader context for the technical work, outlining the problems posed by black-box models in autonomous systems and potential solutions to the safety issues they raise.
The technical portion of my thesis first recognizes that providing explanations of chosen robotic actions can increase the transparency of robotic planning and improve users' trust. A survey of the social science literature further shows that the best explanations are contrastive, explaining not just why one action is taken, but why it is taken instead of another. Drawing on these insights, we formalize the notion of contrastive explanations for robotic planning policies based on Markov decision processes (MDPs). We present methods for the automated generation of contrastive explanations governed by three key factors: selectiveness, constrictiveness, and responsibility. The results of a user study with 100 participants on the Amazon Mechanical Turk platform show that our generated contrastive explanations help to increase users' understanding and trust of robotic planning policies while reducing their cognitive burden.
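To make the idea of a contrastive explanation concrete, below is a minimal sketch of how one might be generated for an MDP policy. This is illustrative only, not the algorithm from the paper (which additionally accounts for selectiveness, constrictiveness, and responsibility); the toy MDP, its state and action names, and the contrastive_explanation helper are all hypothetical. The sketch contrasts the policy's chosen action (the "fact") with a user-suggested alternative (the "foil") by comparing their expected discounted returns.

```python
# Illustrative sketch (not the paper's algorithm): value iteration on a toy
# MDP, then a contrastive explanation comparing the policy's chosen action
# against a user-suggested alternative ("foil"). All names are hypothetical.

GAMMA = 0.9  # discount factor

# Toy MDP: transitions[s][a] = list of (next_state, probability);
# rewards[s][a] = immediate reward for taking action a in state s.
transitions = {
    "hall":   {"go_left":  [("office", 1.0)],
               "go_right": [("lab", 0.8), ("hall", 0.2)]},
    "office": {"stay": [("office", 1.0)]},
    "lab":    {"stay": [("lab", 1.0)]},
}
rewards = {
    "hall":   {"go_left": 1.0, "go_right": 5.0},
    "office": {"stay": 0.0},
    "lab":    {"stay": 0.0},
}

def value_iteration(eps=1e-6):
    """Compute state values V and action values Q for the toy MDP."""
    V = {s: 0.0 for s in transitions}
    while True:
        Q = {s: {a: rewards[s][a] + GAMMA * sum(p * V[s2] for s2, p in succ)
                 for a, succ in acts.items()}
             for s, acts in transitions.items()}
        V_new = {s: max(Q[s].values()) for s in transitions}
        if max(abs(V_new[s] - V[s]) for s in V) < eps:
            return V_new, Q
        V = V_new

def contrastive_explanation(state, foil):
    """Explain why the policy's action is preferred over the foil action."""
    _, Q = value_iteration()
    fact = max(Q[state], key=Q[state].get)  # action the policy actually takes
    gap = Q[state][fact] - Q[state][foil]
    return (f"In state '{state}', the policy chooses '{fact}' rather than "
            f"'{foil}': expected discounted return {Q[state][fact]:.2f} "
            f"vs. {Q[state][foil]:.2f} (difference {gap:.2f}).")

print(contrastive_explanation("hall", "go_left"))
```

In this toy example, the sketch reports that 'go_right' is preferred over 'go_left' in state 'hall' because of its higher expected return; the thesis builds on this kind of fact-versus-foil comparison to produce explanations that are selective and easy for users to process.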

In my STS research, I first introduce the status quo of the safety issues underlying black-box models in current autonomous systems. I apply the STS framework of technological momentum, proposed by Thomas P. Hughes, arguing that we are still in the first phase of deploying autonomous systems and that our society retains the ability to steer their development in a more safety-focused, privacy-oriented direction. I then exemplify the problems of technically and socially black-box models through case studies of autonomous driving bugs and the Boeing 737 MAX crashes. Finally, I analyze the stakeholders involved and their corresponding responsibilities on this issue.

Degree:
BS (Bachelor of Science)
Keywords:
Explainable AI, Formal Methods, Autonomous Systems, Black Box Models
Notes:

School of Engineering and Applied Science
Bachelor of Science in Computer Science
Technical Advisor: Lu Feng
STS Advisors: Toluwalogo Odumosu, Richard Jacques
Technical Team Members: Shenghui Chen, Kayla Boggess, Lu Feng

Language:
English
Issued Date:
2020/11/29