Fine-Grained Activity Modeling, Recognition, and Error Analysis in Robot-Assisted Surgery

Author:
Hutchinson, Kay, Electrical Engineering - School of Engineering and Applied Science, University of Virginia
Advisor:
Alemzadeh, Homa, Electrical and Computer Engineering Department, University of Virginia
Abstract:

Surgical robots are complex cyber-physical systems that are driving the development of new features and technologies to improve efficiency and patient safety in surgery. This dissertation addresses three major challenges that hinder the development of surgical robots: small datasets with low diversity and incompatible activity labels, limited generalizability and interpretability of black-box activity recognition models, and limited attention to identifying executional errors during surgeon training and skill assessment. We propose a formal framework for the fine-grained modeling of surgical tasks using context and motion primitives. This framework directly relates interactions among tools and objects in the surgical environment, encoded as context, to the surgical workflow, described by motion primitives, and enables tasks to be modeled as finite state machines. We then develop a method for labeling context from video data that yields objective, fine-grained annotations with near-perfect agreement between non-expert annotators and expert surgeons. Using this framework, we create the COntext and Motion Primitive Aggregate Surgical Set (COMPASS), which contains kinematic and video data with consistent labels from six different surgical dry-lab tasks and nearly triples the amount of data available for modeling and analysis. To understand the relationship between different levels of granularity, we use the framework and dataset to develop models for inferring surgical context from video data and for recognizing motion primitives and gestures from kinematic data. We propose the novel Leave-One-Task-Out (LOTO) cross-validation setup, which evaluates the generalizability of activity recognition models to unseen tasks, and find that aggregating data across datasets improves the task-generalization of motion primitive recognition models. Finally, we develop novel activity-aware methods for identifying and analyzing errors based on the surgical task models and activity recognition models created in the previous thrusts. We use these methods to identify interpretable patterns in fine-grained surgical activities that correlate strongly with measures of surgical skill and can be used to improve feedback in surgeon training and skill assessment.
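
As an illustration of the Leave-One-Task-Out (LOTO) setup described above, the sketch below shows how a recognition model can be evaluated on a task it never saw during training. This is a minimal, hypothetical example rather than code from the dissertation: the feature matrix `X`, motion-primitive labels `y`, classifier choice, and `tasks` array are stand-in assumptions, and scikit-learn's `LeaveOneGroupOut` is used with the task name as the group so that each fold holds out one entire task.

```python
# Minimal sketch of Leave-One-Task-Out (LOTO) cross-validation.
# Illustrative only: X, y, and tasks are hypothetical stand-ins for
# windowed kinematic features, motion primitive labels, and the name
# of the dry-lab task each window was recorded from.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 14))       # e.g., windowed kinematic features
y = rng.integers(0, 6, size=600)     # e.g., motion primitive class IDs
tasks = np.repeat(["Suturing", "Knot_Tying", "Needle_Passing",
                   "Peg_Transfer", "Pea_on_a_Peg", "Post_and_Sleeve"], 100)

logo = LeaveOneGroupOut()            # one fold per task
for train_idx, test_idx in logo.split(X, y, groups=tasks):
    clf = RandomForestClassifier(random_state=0)
    clf.fit(X[train_idx], y[train_idx])          # train on five tasks
    acc = accuracy_score(y[test_idx], clf.predict(X[test_idx]))
    held_out = tasks[test_idx][0]                # the single unseen task
    print(f"held-out task: {held_out:16s} accuracy: {acc:.2f}")
```

Grouping the folds by task rather than by trial or subject is what distinguishes LOTO from conventional cross-validation: a model only scores well if the motion primitives it learned transfer to a task it has never observed.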

Degree:
PHD (Doctor of Philosophy)
Keywords:
robotic surgery, activity modeling, activity recognition, error analysis
Language:
English
Rights:
All rights reserved (no additional license for public reuse)
Issued Date:
2023/12/11