DEEPDIAG: USING AI TO IMPROVE DIAGNOSIS TIMES IN THE ICU; DEEPFAKES AND DEEPLIES: ADDRESSING THE THREAT OF MISINFORMATION ONLINE

Author:
Jahromi, Navid, School of Engineering and Applied Science, University of Virginia
Advisors:
Nguyen, Rich, EN-Comp Science Dept, University of Virginia
Baritaud, Catherine, EN-Engineering and Society, University of Virginia
Abstract:

Artificial Intelligence (AI) has seen tremendous progress in the last decade and is quickly being integrated into all major aspects of our society, including medicine, education, and manufacturing. Focusing on medical applications, the technical work explores how AI can be integrated with human-driven processes to improve efficiency and minimize the margin of human error in critical systems. Through the lens of a specific use case, we can illuminate the significant advantages that machine learning models have over human reasoning when it comes to interpreting large volumes of data points. The science, technology, and society (STS) work focuses on a drastically different application of AI through the lens of deepfake technology, in which AI-doctored videos can be used to manipulate and deceive large segments of the public. Applying the Actor-Network Theory framework, the STS topic attempts to make sense of the different social groups currently influencing the fate of this technology and to identify areas of improvement in order to adequately address this growing threat. These loosely coupled explorations of AI come together to provide a comprehensive view of this rapidly expanding technology, highlighting its duality and the need for discretion as we continue to integrate AI into more aspects of our society.

The technical report details the creation of an interpretable machine learning model to improve the current diagnosis procedure for a potentially fatal bloodstream infection (BSI). Physicians must interpret a wide array of clinical information from a patient's vital signs and decide, often somewhat arbitrarily, when to take a blood sample; this can lead to a substantial delay before a patient receives the treatment they desperately need. The proposed machine learning model would analyze a patient's trends over time and provide a risk score to trained medical staff, acting as an early warning sign that helps them identify and begin treating patients suffering from a BSI. The model makes use of recurrent neural network architectures, which are known for their ability to learn sequences and identify patterns over time, making them exceptionally well suited to this clinical use case.
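
To illustrate the kind of architecture described, the following is a minimal sketch, assuming a PyTorch LSTM reading sequences of vital-sign measurements and producing a single risk score; the class name, feature count, and layer sizes are illustrative assumptions and are not taken from the technical report.

    # Minimal sketch (not the authors' actual model): an LSTM-based
    # risk scorer for time-series vital signs, assuming PyTorch.
    import torch
    import torch.nn as nn

    class BSIRiskModel(nn.Module):
        def __init__(self, num_vitals=6, hidden_size=64):
            super().__init__()
            # LSTM reads the sequence of hourly vital-sign measurements.
            self.lstm = nn.LSTM(input_size=num_vitals, hidden_size=hidden_size,
                                batch_first=True)
            # Linear head maps the final hidden state to one risk score.
            self.head = nn.Linear(hidden_size, 1)

        def forward(self, vitals_seq):
            # vitals_seq has shape (batch, time_steps, num_vitals).
            _, (h_n, _) = self.lstm(vitals_seq)
            # Sigmoid squashes the score into a 0-1 risk value.
            return torch.sigmoid(self.head(h_n[-1]))

    # Example: score 4 patients, each with 24 hourly readings of 6 vitals.
    model = BSIRiskModel()
    risk = model(torch.randn(4, 24, 6))  # output shape: (4, 1)
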

Early modeling efforts have proven very promising, boasting high prediction accuracy and indicating the viability of a model like this being integrated into the standard medical procedures conducted in the ICU. Due to issues regarding access and complex database structures, the group was only able to completely query data from one of the three target hospital data systems. While there is still much work to be done on the remaining two data sources, the baseline results established have been used to secure funding for the group through 2022, and these early efforts have laid the groundwork for a clinically viable model in the future.

The primary research question of the STS report asks how we as a society can navigate the issues posed by deepfakes, addressing the concerns over misinformation online and its ability to undermine civil unity. Utilizing the Actor-Network Theory framework, the research sought to identify the primary social groups responsible for the national response toward deepfakes and to analyze the critical bottlenecks that have been thwarting any effective measures from taking place. This exploration drew on technical research papers, proposed laws at both the state and national levels, and academic research exploring the psychology behind deepfakes and why they are so successful in manipulating individuals.

Mapping the current network of actors highlighted a fundamental lack of technical understanding, which helps explain the futile efforts to enact substantive policies over the past few years. The ultimate conclusion of the STS work is that deepfakes do not have a technical solution and instead require human intervention at the societal level in order to prevent chaos and widespread disarray. The proposed third-party independent review board (IRB) seeks to establish a centralized international authority on deepfakes, underscoring that the threat is global and not limited to any one nation or group of people.

Like all revolutionary innovations, AI has the potential to transform our world, solving many of the biggest issues facing our societies today. With this exciting potential, however, come dangerous consequences and unanticipated applications that, if left unchecked, could ruin us before we even have the chance to enjoy the benefits. Staying aware of and ahead of these threats is the only way to ensure that we don't end up losing our humanity at the hands of AI.

Degree:
BS (Bachelor of Science)
Keywords:
Actor Network Theory, Deepfake, Machine Learning, Medical AI, Neural Network
Notes:

School of Engineering and Applied Science
Bachelor of Science in Computer Science
Technical Advisor: Rich Nguyen
STS Advisor: Catherine Baritaud
Technical Team Members: Bobby Andris, Jiaxing Qiu

Language:
English
Rights:
All rights reserved (no additional license for public reuse)
Issued Date:
2021/05/13