Randomness-Based Behavior Monitoring for Resilient Autonomous Robot Operations
Bonczek, Paul, Electrical Engineering - School of Engineering and Applied Science, University of Virginia
Bezzo, Nicola, EN-Engr Sys & Environment, University of Virginia
Today’s autonomous systems, such as unmanned ground and aerial vehicles, are becoming integral to many aspects of our daily lives. With the help of technological advancements in sensing, actuation, communication, and computation, modern robots are capable of performing many civilian and military applications with minimal to no human supervision. However, with these increased capabilities comes an increased risk of security vulnerabilities to cyber attacks or system faults that induce undesired system behavior. Thus, it is crucial to provide improved detection and recovery measures to ensure proper performance and safety. The objective of an attacker is typically to implement stealthy attack sequences that manipulate the system of interest while remaining undetected. Although traditional attack detection techniques have been shown to be effective in mitigating attacks for resilient operation, they are susceptible to being fooled by intelligent attackers able to hide within the system noise profile. Furthermore, cyber attacks and/or faults that occur on a single vehicle within a multi-agent system can potentially compromise the performance of all vehicles, preventing the multi-agent system from accomplishing its operation. To counteract these undesirable scenarios, recovery frameworks for single- and multi-agent systems can ensure the safety of all agents while still resiliently accomplishing tasks.
This dissertation focuses on advancing state-of-the-art attack detection capabilities for autonomous single- and multi-robot systems to discover previously undetectable stealthy attacks on the sensor and communication infrastructure. To this end, we present novel runtime randomness-based detection techniques to identify falsified sensor measurements and information received over communication channels that intentionally hide within system noise profiles but leave traces of non-random, inconsistent behavior. Additionally, we introduce an approach that leverages these randomness-based techniques to covertly pass safety-critical information within a robotic swarm through hidden signatures, without the need to explicitly broadcast this information between robots. We also present two cooperative recovery frameworks for multi-robot systems that aid the re-localization of compromised robots suffering attacks or faults to critical positioning sensors in unknown or landmark-free environments. Lastly, we present a detection and recovery framework for individual autonomous systems that suffer from spoofed or faulty on-board controllers, which can drive the vehicle to undesired locations in the environment.
These techniques are validated through extensive simulations and proof-of-concept laboratory case studies with single and cooperative unmanned ground vehicles. Current and future work includes developing cooperative frameworks for teams of robots to defend safety-critical regions of the environment from malicious intruders.
PHD (Doctor of Philosophy)
Cyber-Physical System Security, Multi-robot Systems, Attack Detection and Recovery, Cooperative Recovery, Resilient Control, Autonomous Mobile Robots
National Science Foundation (NSF), Office of Naval Research (ONR), Defense Advanced Research Projects Agency (DARPA)
English
All rights reserved (no additional license for public reuse)
2022/12/11