Towards Probabilistic Reasoning for Autonomous Vehicles

Author:
McCampbell, Ryan (ORCID: orcid.org/0000-0003-0448-4076), Computer Science - School of Engineering and Applied Science, University of Virginia
Advisor:
Behl, Madhur, EN-Comp Science Dept, University of Virginia
Abstract:

Autonomous cyber-physical systems such as self-driving cars increasingly depend on AI-enabled methods for their perception, planning, and control tasks. Unfortunately, deep learning algorithms have proven unreliable in the presence of incomplete, imprecise, or contradictory data and adversarial attacks that exploit critical design flaws, leading to untrustworthy results. Managing uncertainty is possibly the most important step towards safe autonomous systems. Modeling an autonomous vehicle’s unfamiliarity with a given dynamic scenario enables appropriate subsequent decisions to be made under such uncertainty.

We propose to develop a framework to characterize and quantify the uncertainty in the perception stage of an autonomous vehicle’s computation loop. Using Bayesian learning, we can quantify the confidence the AV has in its scene understanding outputs. Using this framework, we can also detect when the autonomous vehicle is operating outside of its operational design domain (ODD). Mistakes by lower-level AI components can propagate up the decision-making process and lead to devastating results. In such modular autonomous systems, we can use probabilistic reasoning in low-level components and make safe and reliable high-level decisions given this uncertainty information.
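To illustrate the kind of uncertainty-gated decision making this points at, here is a minimal Python sketch, not the thesis framework itself: the PerceptionOutput structure, the plan function, and the 0.8 confidence threshold are invented for illustration.

```python
# A minimal sketch (not the thesis framework): a modular pipeline in which a
# high-level planner falls back to a conservative action whenever the
# perception stage reports low confidence.
from dataclasses import dataclass


@dataclass
class PerceptionOutput:
    scene_labels: list[str]   # e.g. per-region semantic classes
    confidence: float         # e.g. 1 - normalized predictive entropy


def plan(perception: PerceptionOutput, confidence_threshold: float = 0.8) -> str:
    """Trust the scene understanding only when perception confidence is high."""
    if perception.confidence < confidence_threshold:
        # Uncertainty is too high (possibly outside the ODD): degrade gracefully.
        return "slow_down_and_request_takeover"
    return "continue_nominal_driving"


print(plan(PerceptionOutput(scene_labels=["road", "car"], confidence=0.55)))
```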

In this thesis, we first motivate the use of Bayesian methods in autonomous vehicles and provide background on Bayesian neural networks in machine learning. We then explore two case studies. The first is a simplified look at the capabilities of Bayesian neural networks on the MNIST image classification dataset. The second addresses a problem more directly relevant to autonomous vehicles, semantic segmentation of driving scenes, and examines the benefits of Bayesian neural networks for this task. We find that Bayesian neural networks provide more reliable measures of confidence than standard softmax outputs and enable us to detect inputs that fall outside of our training domain.
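To make the confidence comparison concrete, the following is a minimal Monte Carlo dropout sketch in PyTorch, one common approximation to a Bayesian neural network; the architecture, the 20 stochastic passes, and the entropy threshold are illustrative assumptions, not the models used in the case studies.

```python
# A minimal MC-dropout sketch on an MNIST-style classifier: the spread of
# stochastic forward passes gives an uncertainty estimate beyond a single
# softmax output. Layer sizes and thresholds are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MCDropoutClassifier(nn.Module):
    """Plain CNN whose dropout layers stay active at test time (MC dropout)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Dropout2d(0.25),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Dropout2d(0.25),
        )
        self.fc = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 7 * 7, 128), nn.ReLU(),
            nn.Dropout(0.5), nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.fc(self.conv(x))


@torch.no_grad()
def predict_with_uncertainty(model, x, n_samples: int = 20):
    """Average softmax over stochastic passes; return mean probs and entropy."""
    model.train()  # keep dropout active so each pass draws a different mask
    probs = torch.stack(
        [F.softmax(model(x), dim=1) for _ in range(n_samples)]
    )                                  # (n_samples, batch, classes)
    mean_probs = probs.mean(dim=0)     # approximate predictive distribution
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=1)
    return mean_probs, entropy         # high entropy => low confidence / possibly OOD


# Example: flag inputs whose predictive entropy exceeds a chosen threshold.
model = MCDropoutClassifier()
images = torch.randn(8, 1, 28, 28)     # stand-in for an MNIST batch
mean_probs, entropy = predict_with_uncertainty(model, images)
out_of_domain = entropy > 1.5          # threshold is illustrative only
```

The same idea extends to semantic segmentation by computing per-pixel predictive entropy over the sampled class distributions.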

Degree:
MS (Master of Science)
Keywords:
Bayesian Neural Networks, Autonomous Vehicles, Convolutional Neural Networks, Semantic Segmentation, Machine Learning
Language:
English
Issued Date:
2020/07/25