Computational Models of Engagement in Digital Health Platforms

Author:
Baee, Sonia, Systems Engineering - School of Engineering and Applied Science, University of Virginia (ORCID: orcid.org/0000-0001-7886-8078)
Advisor:
Barnes, Laura
Abstract:

Over the past decade, digital health technologies have transformed healthcare delivery, offering more scalable, accessible, and individualized interventions. Especially in light of the COVID-19 pandemic, these technologies are an appealing way to deliver much-needed treatment outside the physician's or clinician's office. However, these platforms suffer from low engagement and high attrition rates, making it increasingly difficult to turn research findings into clinically useful and actionable insights. This dissertation proposes a set of computational tools for both predicting participants' attrition at different points in a program and analyzing their engagement during digital mental health interventions (DMHIs). We apply the proposed methods to community samples in MindTrails, an online DMHI targeting cognitive bias modification for interpretation (CBM-I) in anxious individuals.

First, we demonstrate the limitations of DMHIs and the challenges associated with attrition and engagement. We then propose methods to better identify participants at high risk of dropping out of DMHIs. To accomplish this, we develop a generalizable attrition prediction pipeline using features from user baseline characteristics, user self-reported contexts and reactions to the program, clinical functioning (e.g., anxiety reduction measurements), and user behavior within the intervention (e.g., time on page). Moreover, we evaluate which feature categories contribute most to the predictive power of early-stage attrition prediction. We examine the performance of the proposed pipeline through extensive experimental evaluations on three CBM-I studies.
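
To make the pipeline structure concrete, the sketch below shows one illustrative way to assemble such feature categories and compare their individual contributions with cross-validation; the column names, feature groups, and classifier are hypothetical placeholders, not the pipeline used in the dissertation.

# Illustrative sketch of a feature-category attrition prediction pipeline.
# All column names and feature groups below are hypothetical placeholders.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical feature categories mirroring those described above.
FEATURE_GROUPS = {
    "baseline": ["age", "baseline_anxiety"],
    "self_report": ["perceived_relevance", "session_feedback"],
    "clinical": ["anxiety_reduction"],
    "behavioral": ["time_on_page", "pages_completed"],
}

def build_pipeline(groups):
    """Scale the selected feature columns and feed them to a simple classifier."""
    columns = [c for g in groups for c in FEATURE_GROUPS[g]]
    preprocess = ColumnTransformer([("scale", StandardScaler(), columns)])
    return Pipeline([("prep", preprocess),
                     ("clf", GradientBoostingClassifier())])

def category_contributions(X: pd.DataFrame, y):
    """Compare cross-validated AUC when each feature category is used alone."""
    scores = {}
    for group in FEATURE_GROUPS:
        pipe = build_pipeline([group])
        scores[group] = cross_val_score(pipe, X, y, cv=5,
                                        scoring="roc_auc").mean()
    return scores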

Next, we assess participants' engagement in a multi-session DMHI by modeling their engagement as a sequential decision process. We propose the first inverse reinforcement learning model to infer the internal reward function and policy used by participants during each session of a DMHI. We model participants' engagement states dynamically and then use these dynamic states to predict behavioral attrition within a given session. Unlike traditional machine learning algorithms, this model learns sequences of behaviors by treating each behavior as a potential source of reward (engagement). Experiments deploying this model reveal that it outperforms baseline models in predicting attrition while using fewer features.
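
For readers unfamiliar with inverse reinforcement learning, the sketch below illustrates the general idea of recovering a linear reward function from observed trajectories via maximum-entropy IRL on a small tabular decision process; the MDP, features, and hyperparameters are hypothetical and heavily simplified, not the model developed in this dissertation.

# Minimal maximum-entropy IRL sketch on a small tabular MDP, meant only to
# illustrate inferring a reward function from observed session trajectories.
import numpy as np

def soft_value_iteration(P, reward, gamma, n_iters=100):
    """Soft (max-ent) value iteration; returns a stochastic policy pi[s, a]."""
    n_states, n_actions, _ = P.shape
    V = np.zeros(n_states)
    for _ in range(n_iters):
        Q = reward[:, None] + gamma * P @ V        # Q[s, a]
        V = np.log(np.exp(Q).sum(axis=1))          # soft max over actions
    return np.exp(Q - V[:, None])                  # pi[s, a]

def expected_visitations(P, pi, p0, horizon):
    """Expected state visitation counts under pi over `horizon` steps."""
    d = p0.copy()
    total = d.copy()
    for _ in range(horizon - 1):
        # Next-state distribution: sum over s, a of d[s] * pi[s, a] * P[s, a, s'].
        d = np.einsum("s,sa,sat->t", d, pi, P)
        total += d
    return total

def maxent_irl(P, features, trajectories, p0, gamma=0.95, lr=0.1, epochs=200):
    """Recover linear reward weights theta such that reward = features @ theta."""
    # Empirical feature expectations from observed state trajectories.
    emp = np.mean([features[traj].sum(axis=0) for traj in trajectories], axis=0)
    horizon = max(len(t) for t in trajectories)
    theta = np.zeros(features.shape[1])
    for _ in range(epochs):
        reward = features @ theta
        pi = soft_value_iteration(P, reward, gamma)
        svf = expected_visitations(P, pi, p0, horizon)
        theta += lr * (emp - features.T @ svf)     # match feature expectations
    return theta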

In summary, our findings suggest future directions for designing engaging DMHIs and for better understanding dropout over the course of a DMHI. This in turn will enable researchers and designers of DMHIs to recognize and address each individual's needs: understanding target users, tailoring interventions, and identifying engagement failure points in the program with less data than prior studies.

Degree:
PhD (Doctor of Philosophy)
Keywords:
Human Behavior Interventions, Reinforcement Learning, Digital Mental Health Interventions
Language:
English
Rights:
All rights reserved (no additional license for public reuse)
Issued Date:
2022/08/03