Applications of Digital Health and Patient Monitoring in Opioid Addiction Recovery; Modernizing Regulatory Practices for Artificial Intelligence Driven Medical Tools
Handa, Rishub, School of Engineering and Applied Science, University of Virginia
Graham, Daniel, EN-Comp Science Dept, University of Virginia
Vrugtman, Rosanne, EN-Comp Science Dept, University of Virginia
Digital technologies are increasingly prevalent in healthcare settings. Physicians use digital health tools to diagnose disease, plan courses of treatment, and stay connected with patients outside the clinic. While these tools can improve patient outcomes and reduce physician burnout, they can also be misused, leaving providers dependent on their convenience. Because these technologies lack full contextual awareness, they can introduce algorithmic biases through their design and training data if deployed without human supervision. The challenge I sought to address is how we can responsibly use digital health tools to improve patient care while mitigating their associated risks of bias. In my technical project, I investigated how these tools could strengthen the connection between doctors and patients in opioid addiction recovery. In my STS research project, I analyzed the gaps in our current regulatory system that enable algorithmic bias in healthcare through a case study of the Optum Future Cost Algorithm.
In my technical research project, I explore the potential benefits of digital health applications in treatment for Opioid Use Disorder (OUD). Three million Americans currently suffer from OUD, which contributed to more than 100,000 deaths in 2021. Patients seeking treatment for this disease face an incredibly low likelihood of recovery: 91% of patients relapse at least once, and 80% relapse within a month of completing a detoxification program. This low success rate is largely caused by a lack of continuity of care outside the clinic. These patients often live in environments that are not conducive to recovery, so it is important that they stay in touch with their care team even while at home. To address this gap and decrease the risk of relapse, I developed a digital health service that monitors when patients consume their opioid withdrawal medication and connects them to appropriate resources at times of high risk through a Digital Therapy Chatbot (DTC). Digital therapy has been used to treat other mental health conditions such as depression and anxiety, and I found that patients responded well to this treatment program for OUD as well. Across two pilot studies, the service earned a Net Promoter Score of 92, with 11 of 12 patients indicating they would highly recommend the system to a friend in recovery. Patient and provider interviews suggest that the medication tracker and DTC are scalable and improve patient optimism toward recovery. This technical project illustrated how digital health solutions can strengthen the relationship between doctors and patients.
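As a brief note on the arithmetic (the standard Net Promoter Score definition is assumed here; the abstract does not spell it out): NPS is the percentage of promoters minus the percentage of detractors, so if the twelfth respondent was passive rather than a detractor, the reported figure follows as

\[ \mathrm{NPS} = \%\,\text{promoters} - \%\,\text{detractors} \approx \frac{11}{12} \times 100 - 0 \approx 92. \]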
Over-reliance on digital health technologies, however, can lead to costly and potentially fatal outcomes for patients. Without supervision from healthcare providers, artificial intelligence (AI) digital health systems can introduce bias into their decisions. The problem with the current understanding of algorithmic bias in healthcare is that the literature focuses largely on technical solutions, such as improving how data is sourced and how models are designed. While these considerations are important for mitigating bias, my project investigated how regulatory systems can be modernized to provide more relevant procedures for AI in healthcare. Such was the case with the Optum Future Cost Algorithm, which exhibited racial bias and prevented many Black patients from receiving the care they needed. While the literature attributes this to an improper design choice in the model, I claim the larger issue is that the model was deployed without any oversight from the FDA. The current FDA procedures for Software as a Medical Device (SaMD) are ineffective at regulating AI tools. Due to legal ambiguities, many companies are able to avoid regulation entirely, and those that are regulated face no ongoing evaluation, even though these models learn and adapt over time in ways that can further reinforce their biases. I concluded that to reduce bias in digital health and risk to patients, we should explore how the FDA can improve its practices to better align with how AI tools are developed and deployed.
Overall, I was satisfied with the results of my technical and STS projects. My research demonstrated how digital health can improve patient outcomes, but also how relying on these systems without federal and physician oversight can be harmful. The next step for my technical project would be to scale the pilots to more clinics to diversify the sample across patients of different demographic backgrounds. Deploying these technologies at scale would also require additional design considerations, since the current approach of 3D printing the devices and hosting the application on a single server is not feasible. To build on my STS research, the next step would be to collaborate with policy researchers and machine learning engineers to develop better guidelines for regulating AI in healthcare. Further work on both projects would help modernize healthcare while improving patient safety.
BS (Bachelor of Science)
Artificial Intelligence, Digital Health, Opioid Addiction Recovery, Algorithmic Bias
School of Engineering and Applied Science
Bachelor of Science in Computer Science
Technical Advisors: Daniel Graham, Rosanne Vrugtman
STS Advisor: Kent Wayland
English
2022/05/11