Interpretable Monitoring for Self and Socially Aware Mobile Robot Planning

Author:
Peddi, Rahul, Systems Engineering - School of Engineering and Applied Science, University of Virginia
Bezzo, Nicola, Engineering Systems and Environment, University of Virginia

Autonomous mobile robots (AMRs) are rapidly being introduced into our world in transportation, delivery, medical service, agriculture, and household applications. Their ability to reduce the burden on humans has made them viable and increasingly popular sources of productivity, but with their growing presence in our society, assuring that they behave in socially acceptable and safe ways is critical to their widespread integration and success. In many real-world applications, however, these robotic systems are subject to uncertainties from various sources, such as dynamic actors like humans and other robots, external disturbances, and sensing/actuation faults. These uncertainties challenge motion planning because they can cause a robot to behave in strange and unnatural ways, deviating from its desired behavior, toward humans, and potentially into unsafe situations. Many of these uncertainties appear only during operation, and robots typically rely on reactive motion planners, which may not be agile enough to keep the system or its environment safe. More recently, robot planning has been achieved through learning-based methods, which may have proactive components; however, these approaches rely on black-box models, making it difficult to understand why and how robots make certain decisions, an understanding that is critical to gaining trust and fully integrating robots into our shared world.

This dissertation presents a set of proactive motion planning frameworks that promote social awareness and safety for autonomous mobile robots operating under uncertainties. The frameworks we develop monitor the future states of mobile robots and the nature of future interactions between robots and dynamic actors to improve and refine motion planning according to the given scenario, whether it is a social navigation case study in which the robot must safely navigate in the presence of multiple humans, or a setting in which uncertainties arise from other sources such as sensing/actuation faults.

First, we introduce a Hidden Markov Model (HMM)-based predictive model that extends traditional prediction methods by accounting for uncertainty in predictions to plan proactive motion for a robot in the presence of multiple humans. Explicit predictions, however, can be restrictive in environments with multiple actors. To address this challenge, we reduce the problem to a binary classification through the design of a Decision Tree (DT)-based interpretable monitor that predicts and explains future interactions with dynamic actors for proactive, non-interfering motion planning. We further extend this monitoring approach to adapt to and improve on the failure modes of a baseline reactive motion planner, while also re-integrating HMM principles to enable fast runtime updating, so that predictions and planning improve as operations continue. To handle highly dynamic and dense environments, we leverage the idea of attention within human-human interactions and design a deep neural network (DNN)-based approach that predicts which actors are most important to consider as constraints within the framework of a model predictive controller (MPC). Finally, we show that the approaches presented in this work extend beyond the social navigation case study through work on recovery from failures for a robot under decision uncertainties. The techniques presented in this dissertation are validated through extensive simulations and experimental case studies with real unmanned ground and aerial vehicles navigating in the presence of humans.
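The reduction of interaction prediction to an interpretable binary classification can be illustrated with a minimal sketch. The feature names (relative distance, closing speed, heading difference) and thresholds below are illustrative assumptions, not the dissertation's learned tree; they only show how a shallow decision-tree-style monitor yields both a label and a human-readable explanation for each robot-actor pair.

```python
# Hypothetical sketch of a DT-style interpretable monitor: classify whether
# a nearby actor will interfere with the robot's planned motion from simple
# relative-state features, and report which rule fired. All thresholds are
# illustrative assumptions chosen for this example.

def monitor(rel_distance, closing_speed, heading_diff):
    """Return (label, explanation): label 1 = predicted interaction."""
    if rel_distance > 5.0:            # actor far away: no interaction
        return 0, "rel_distance > 5.0 m: actor too far to interact"
    if closing_speed <= 0.0:          # robot and actor moving apart
        return 0, "closing_speed <= 0: actor moving away"
    if heading_diff < 0.5:            # nearly parallel paths
        return 0, "heading_diff < 0.5 rad: paths nearly parallel"
    return 1, "close, converging, crossing paths: predicted interaction"

# Example: a close, converging, path-crossing actor is flagged,
# and the explanation can be surfaced to a human supervisor.
label, why = monitor(rel_distance=2.0, closing_speed=0.8, heading_diff=1.2)
```

Each leaf of such a tree carries a symbolic condition on physically meaningful features, which is what makes the monitor's output explainable, in contrast to a black-box predictor.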

PHD (Doctor of Philosophy)
Human-aware Motion Planning, Social Navigation, Motion and Path Planning, Collision Avoidance, Intention Recognition, Interpretable Monitoring, Autonomous Mobile Robots
Sponsoring Agency:
Defense Advanced Research Projects Agency (DARPA)
National Science Foundation (NSF)
All rights reserved (no additional license for public reuse)
Issued Date: