Towards Safer Autonomous Systems: From Vulnerability Mitigation to Safety Assurance

Author:
Elnaggar, Mahmoud, Computer Engineering - School of Engineering and Applied Science, University of Virginia (ORCID: orcid.org/0009-0000-5709-4683)
Advisors:
Behl, Madhur, EN-Comp Science Dept, University of Virginia
Lin, Zongli, University of Virginia
Smith, James, EN-CEE, University of Virginia
Stan, Mircea, University of Virginia
Davidson, Jack, EN-Comp Science Dept, University of Virginia
Abstract:

Cyber-physical systems (CPS) have become an integral part of our daily lives, from self-driving cars and autonomous delivery drones to industrial control systems. However, the safety of these systems remains a significant challenge due to cyber and system-level vulnerabilities, unreliable wireless connectivity, and data-driven controllers. This dissertation proposes novel approaches, and leverages existing ones, to address the safety problem along three aspects.
The first aspect we address is the safety of vulnerable autonomous systems. Cyber security researchers have developed a myriad of techniques to protect against cyber attacks. With few exceptions, these techniques add runtime overhead to the system, which not only increases the time to complete a given task but, more importantly, may also put the system in unsafe conditions when deployed on dynamic autonomous systems. We propose an adaptive algorithm that relies on model predictive control (MPC) to keep the system safe without taking unnecessarily conservative actions. We also consider system-level attacks, i.e., a drone hijacking scenario in which an attacker spoofs one or more of the drone's onboard sensors to hijack it into an unsafe region. We propose an inverse reinforcement learning (IRL) based approach that predicts the intention of the attacker, determines the compromised sensor(s), and mitigates the attack.
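To make the delay-aware MPC idea concrete, the following is a minimal sketch, not the dissertation's implementation: a 1-D double-integrator vehicle approaching a static obstacle, where the extra actuation delay introduced by a security or monitoring layer is modeled by propagating the plant under the previous (stale) input before optimizing. The model, horizon, limits, delay, and obstacle position are all illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions throughout): a 1-D double-integrator
# MPC that accounts for the actuation delay added by a security layer and keeps
# enough distance to brake to a stop before a static obstacle.
import numpy as np
import cvxpy as cp

dt, N = 0.1, 20                         # sampling time [s], prediction horizon
A = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])
u_max, a_brake = 3.0, 3.0               # accel limit, max braking [m/s^2]
x_obstacle = 50.0                       # obstacle position [m]

def mpc_step(x0, u_prev, overhead_delay_steps):
    """Return the first control move; while the security layer delays actuation,
    the previous (stale) input is assumed to keep acting on the plant."""
    x = x0.copy()
    for _ in range(overhead_delay_steps):
        x = A @ x + (B @ np.array([u_prev]))

    X = cp.Variable((2, N + 1))
    U = cp.Variable((1, N))
    cost, cons = 0, [X[:, 0] == x]
    for k in range(N):
        cons += [X[:, k + 1] == A @ X[:, k] + B @ U[:, k],
                 cp.abs(U[:, k]) <= u_max]
        # Safety: keep enough distance to brake to a stop before the obstacle.
        cons += [X[0, k + 1] + X[1, k + 1] ** 2 / (2 * a_brake) <= x_obstacle]
        cost += cp.sum_squares(U[:, k]) - X[1, k + 1]   # smooth input, keep moving
    cp.Problem(cp.Minimize(cost), cons).solve()
    return float(U.value[0, 0]) if U.value is not None else -a_brake  # brake if infeasible

u = mpc_step(np.array([0.0, 10.0]), u_prev=0.0, overhead_delay_steps=3)
```

The safety constraint only tightens by the drift accumulated during the measured overhead, so the controller is no more conservative than the delay actually requires.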
The second aspect focuses on the safety of wirelessly connected autonomous systems. Connectivity between autonomous systems has multiple advantages in terms of performance and efficiency. However, for dynamical systems, especially those operating in outdoor and urban environments, changes in the environmental context produce non-stationary effects on the wireless channel used for communication, which may compromise system safety. Our approach relies on a Bayesian deep learning (BDL) model to predict the quality of the dynamic wireless channel in real time, as well as the uncertainty of the model's predictions. These predictions provide the information that the control algorithms need in order to take control actions that guarantee the safety of the whole wirelessly connected system.
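As a rough illustration of how a BDL model can report both a prediction and its uncertainty, the sketch below uses Monte Carlo dropout as the Bayesian approximation. The feature set, network size, and the choice of packet delivery ratio as the quality metric are assumptions for the example, not the dissertation's model.

```python
# Minimal sketch: Monte Carlo dropout as an approximate Bayesian deep learning
# model that predicts a channel-quality metric together with an uncertainty
# estimate. Features and architecture are illustrative assumptions.
import torch
import torch.nn as nn

class ChannelQualityBNN(nn.Module):
    def __init__(self, n_features=4, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, 1), nn.Sigmoid(),      # quality in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def predict_with_uncertainty(model, x, n_samples=50):
    """Keep dropout active at inference and average stochastic forward passes."""
    model.train()                                # keeps dropout layers stochastic
    samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)   # prediction, uncertainty proxy

# Example features (assumed): [distance, relative speed, obstruction flag, tx power].
model = ChannelQualityBNN()
features = torch.tensor([[35.0, 4.2, 1.0, 20.0]])
mean_q, sigma_q = predict_with_uncertainty(model, features)
# A controller could then act on a pessimistic estimate such as mean_q - 2 * sigma_q.
```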
The third aspect of the dissertation addresses the safety of AI-controlled autonomous systems. We present a provably safe neural network (NN) filter that removes any unsafe actions produced by a data-driven reinforcement learning (RL) controller and guarantees that the system state always remains inside a safe set. The approach comprises designing, verifying, and synthesizing a control barrier function (CBF) based on a kinematics model of the system. The merit of the proposed approach is that it decouples the handling of CBF constraints from the control optimization task while providing a computationally efficient yet provably safe control-action filter.
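To illustrate what a CBF-based action filter does, the sketch below uses single-integrator kinematics where the CBF condition admits a closed-form clip on the RL velocity command. This is only a toy instance of the filtering idea; the dissertation's filter is a verified neural network, and the safe-set bound, gain, and limits here are assumptions.

```python
# Minimal sketch of a CBF action filter (illustrative, single-integrator
# kinematics). Safe set: h(p) = P_MAX - p >= 0; the input is a velocity
# command v produced by an RL policy.
import numpy as np

P_MAX = 10.0          # boundary of the safe set [m]          (assumed)
ALPHA = 1.0           # class-K gain in  dh/dt >= -ALPHA * h  (assumed)
V_MIN, V_MAX = -2.0, 2.0

def cbf_filter(p, v_rl):
    """Project the RL command onto the set satisfying the CBF condition.
    For p_dot = v and h(p) = P_MAX - p, dh/dt = -v, so safety requires
    -v >= -ALPHA * h(p), i.e. v <= ALPHA * (P_MAX - p)."""
    v_upper = ALPHA * (P_MAX - p)
    return float(np.clip(v_rl, V_MIN, min(V_MAX, v_upper)))

# Closed-loop check: starting inside the safe set, the filtered commands never
# drive the state past P_MAX, even when the policy always pushes toward it.
p, dt = 0.0, 0.05
for _ in range(500):
    v_rl = 2.0
    p += dt * cbf_filter(p, v_rl)
assert p <= P_MAX
```

The filter only intervenes near the boundary of the safe set, which mirrors the goal stated above: the RL controller keeps its performance while the CBF constraint is enforced outside the control optimization loop.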
The presented approaches are validated through case studies, simulations, and experiments, demonstrating their effectiveness in ensuring the safety of CPS under various scenarios. The research contributes to the field of CPS by providing comprehensive techniques for solving safety problems that can be applied to various types of autonomous systems.

Degree:
PhD (Doctor of Philosophy)
Keywords:
Reinforcement Learning, Inverse Reinforcement Learning, Control Barrier Function, Cyber-Physical Systems Security, Autonomous Driving Safety, Model Predictive Control, Neural Network Verification, Bayesian Deep Learning
Language:
English
Issued Date:
2024/04/19