Enterprise Risk Management of Artificial Intelligence in Healthcare

Author: ORCID orcid.org/0000-0003-0475-8233
Moghadasi, Negin, Systems Engineering - School of Engineering and Applied Science, University of Virginia
Lambert, James H., Systems Engineering, University of Virginia

Artificial intelligence (AI) is increasingly adopted across technology domains, including healthcare, commerce, economy, energy, environment, trust and cybersecurity, and transportation. However, system owners, experts, regulators, developers, and other actors have voiced concerns about the risks associated with AI applications. This dissertation develops a framework for managing risk, cost, and schedule in AI applications in enterprise systems, focusing on healthcare technologies. The framework combines risk analysis and systems modeling with an understanding of recent AI healthcare applications. A risk register, which spans the Purpose (Pi), Structure (Sigma), and Function (Phi) characteristic layers of a system, serves as the foundation of the framework. The proposed method identifies success criteria, research and development initiatives, and emergent conditions of AI healthcare systems within each layer. The outcomes offer insights into requirements and policies for healthcare organizations that are prioritizing initiatives and tracking potential disruptions.

To demonstrate the framework, three cases of scenario-based disruption of priorities are described across the three systems modeling layers. First, an analysis of hospital priorities is developed in the Purpose (Pi)/sector layer; this tracks the most disruptive system stressors. Second, an AI-assisted design optimization of a vascular anastomosis device is developed in the Structure (Sigma)/device layer; this avoids costly physical experiments. Third, an analysis of AI-based diagnosis of cardiac sarcoidosis using multi-chamber wall motion is developed in the Function (Phi)/disease diagnosis layer; this avoids wasteful scheduling of examinations and procedures. Various eXplainable AI (XAI) techniques are then employed to interpret the outputs of the second and third cases.
These techniques help improve communication between AI systems and non-technical users, enhance understanding of AI outputs, reduce distrust in AI results, and assist in data evaluation. In addition, the framework is extended to quantify the dynamics of the system layers using resilience curves of disruption order. This scale-free quantification of resilience allows the framework to be deployed across various application domains.

PhD (Doctor of Philosophy)
Risks of AI, AI in Healthcare
Sponsoring Agency:
Commonwealth Center for Advanced Logistics Systems
National Science Foundation Center for Hardware and Embedded Systems Security and Trust
UVA Engineering Endowed Dean’s Fellowship (2023-2024)
All rights reserved (no additional license for public reuse)
Issued Date: