MDP-Based Adaptive Motion Planning for Autonomous Robot Operations Under Degraded Conditions

Author:
Seaton, Phillip, Computer Engineering - School of Engineering and Applied Science, University of Virginia
Advisor:
Bezzo, Nicola, EN-Eng Sys and Environment, University of Virginia
Abstract:

Autonomous mobile robots (AMRs) such as ground and aerial vehicles may encounter internal failures and external disturbances when deployed in real-world scenarios, compromising the success of a mission. This thesis proposes an online learning method that adapts the motion planner so the robot can recover and continue an operation after a change in its dynamics. Our proposed framework builds on the Markov Decision Process (MDP) and leverages the residual, defined in this work as the difference between the predicted and the actual state, to update the transition probabilities online and, in turn, the optimal MDP policy. To keep the system safe during learning, we propose a chi-squared-based dynamic learning rate that is event-triggered when the robot approaches an unsafe region of the workspace. Our framework can also distinguish external disturbances from internal failures by tracking the robot's state in both a local and a fixed reference frame. We further propose a state-machine-based resetting procedure that returns to a previous MDP model once the problem disappears. This framework for resilient planning of impaired vehicles is validated in both simulations and experiments on unmanned ground vehicles (UGVs) in a cluttered environment. Finally, we show an extension of our framework to multitask cooperative missions in which tasks must be balanced according to the impaired dynamics of the robots in the network.
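The core loop described above, updating transition probabilities from the residual with a chi-squared event-triggered learning rate, can be sketched as follows. This is a minimal illustrative sketch, not the thesis's actual implementation: the class name, the boost factor, and the 95% threshold (3.84 for one degree of freedom) are assumptions chosen for demonstration.

```python
import numpy as np

class AdaptiveMDP:
    """Hypothetical sketch of online transition-probability adaptation."""

    def __init__(self, n_states, n_actions, base_rate=0.05):
        # Start from a uniform transition model P[s, a, s'].
        self.P = np.full((n_states, n_actions, n_states), 1.0 / n_states)
        self.base_rate = base_rate

    def learning_rate(self, residual, cov, threshold=3.84, boost=10.0):
        # Chi-squared statistic of the residual (predicted minus actual state).
        stat = residual @ np.linalg.inv(cov) @ residual
        # Event-triggered: boost the rate when the statistic exceeds the
        # chi-squared threshold, i.e. the current model no longer explains
        # the observed motion well.
        return self.base_rate * (boost if stat > threshold else 1.0)

    def update(self, s, a, s_next, rate):
        # Shift probability mass toward the observed successor state;
        # the convex combination keeps each row a valid distribution.
        target = np.zeros(self.P.shape[2])
        target[s_next] = 1.0
        self.P[s, a] = (1 - rate) * self.P[s, a] + rate * target
```

After repeated updates with a nonzero rate, the row `P[s, a]` concentrates on the successor state actually observed, which is what drives the recomputation of the optimal MDP policy in the framework.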

Degree:
MS (Master of Science)
Keywords:
Motion Planning, Fault-tolerant Planning, Markov Decision Process, Unmanned Ground Vehicles, Robotics
Sponsoring Agency:
DARPA, NSF
Language:
English
Rights:
All rights reserved (no additional license for public reuse)
Issued Date:
2021/12/13