Toward a Multi-Modal, Interactive and Smart Cognitive Assistant for Emergency Response

Author:
Rahman, M Arif Imtiazur, Computer Science - School of Engineering and Applied Science, University of Virginia
Stankovic, John, EN-Comp Science Dept, University of Virginia

Emergency Medical Services (EMS) providers communicate extensively with many different stakeholders in emergency scenarios to ensure that the correct measures are taken and adverse outcomes are minimized. This communication often conveys the severity of the scene as well as the condition of the injured patients. Although state-of-the-art technologies such as noise-canceling microphones, smartwatches, and other devices aid the communication and recovery procedure, EMS training and providing care in emergency scenarios still remain very challenging and largely dependent on manual effort. Most emergency scenes demand dynamic information flow, such as changing vitals and changing medication dosages, which makes the task even more difficult. Previously, very little research has focused on building solutions that reduce the cognitive overload on care providers and provide interactive assistance based on the quality of the activity. This thesis presents novel research solutions for developing an automated cognitive assistant for EMS providers. Our research attempts to move the state of the art toward a more comprehensive and automation-oriented EMS intervention by utilizing natural language processing and transformer-based language models on EMS textual corpora, and by effectively combining deep learning and attention mechanisms on data from smartwatch-based sensors and image data. The following research contributions with evaluations are presented. First, the thesis demonstrates the implementation of GRACE, a natural language processing-based component that addresses formal documentation and reporting of critical information for emergency response. Second, the thesis presents emsReACT, an on-scene, data-driven, and protocol-specific framework for interactive and personalized feedback to EMS providers during EMS training sessions and mock real-time incidents for cardiac-arrest-related cases.
Third, a robust language model, EMS-BERT, is developed for understanding clinical concepts from live and existing EMS corpora. Fourth, two models, SenseEMS and EgoCap, are presented: the former for hand activity detection, monitoring, and real-time quality assessment, and the latter as a dataset development method for vision-based EMS assistance. SenseEMS applies deep neural networks to smartwatch-based sensor data from the care providers. The EgoCap dataset is developed through first-person captioning of images and can potentially be used for scene understanding with contextual and visual features. The research includes collaboration with regional EMS providers and certified EMS personnel, and involves real-life data collection and evaluation to show the effectiveness of each component. To summarize, the evaluation presented in this thesis supports the hypothesis of the value of developing a cognitive assistant for EMS providers, and suggests the feasibility of cognitive assistants for broader safety-critical domains.

PhD (Doctor of Philosophy)
Emergency Medical Services (EMS), Interactive Cognitive Assistant, Information Extraction
Sponsoring Agency:
National Institute of Standards and Technology (NIST)

NIST awards: 60NANB17D162 and 70NANB21H029.

All rights reserved (no additional license for public reuse)
Issued Date: