A conceptual framework to guide the design and evaluation of human interaction with information automation to support judgment

Baumgart, Leigh, Systems Engineering - School of Engineering and Applied Science, University of Virginia
Gerling, Gregory, Department of Systems and Information Engineering, University of Virginia
Harrison, James, Department of Public Health Sciences, University of Virginia

In critical domains from air traffic control to health care, humans make judgments by using informational cues to assess the true state of the environment. Such judgments are now ubiquitously supported by automation. While prior modeling and analysis have defined levels of automated decision support and methods to evaluate independent human and automated judges, ways to both support and evaluate automation-aided human judgment are still lacking.

To address this gap, this research develops a conceptual design and evaluation framework, titled the Expanded Lens Model with Automation (ELMA). To support design, ELMA accounts for how cues in the environment are transformed into operator displays via automated processes, and for the discrepancies those transformations may introduce. The transformation is based upon the desired hierarchical level of cognitive judgment support: from cue perception, to cue comprehension, to an automated assessment, to an explanation of the automated assessment. In addition to design support, ELMA includes quantitative evaluation measures, using multiple linear regression and correlation analysis to characterize the achievement, consistency, and task knowledge of the human judge; the potential and accuracy of the automation; and the predictability of the environment.
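The regression and correlation measures named above follow the classic lens-model tradition. As an illustrative sketch only, and not ELMA's exact formulation, the standard Tucker (1964) lens-model decomposition of judgment achievement can be computed from a set of cues, human judgments, and criterion (environment) values as follows; all function and variable names here are hypothetical:

```python
import numpy as np

def lens_model_stats(cues, judgments, criterion):
    """Classic lens-model decomposition (Tucker, 1964).

    An illustrative sketch of the kind of regression/correlation
    measures described above; ELMA's actual measures may differ.
    cues: (n, k) array; judgments, criterion: length-n arrays.
    """
    # Design matrix with intercept for both linear models
    X = np.column_stack([np.ones(len(cues)), cues])
    # OLS models of the judge and of the environment
    b_s, *_ = np.linalg.lstsq(X, judgments, rcond=None)
    b_e, *_ = np.linalg.lstsq(X, criterion, rcond=None)
    pred_s, pred_e = X @ b_s, X @ b_e

    r = lambda a, b: np.corrcoef(a, b)[0, 1]
    R_s = r(judgments, pred_s)    # consistency of the human judge
    R_e = r(criterion, pred_e)    # predictability of the environment
    G = r(pred_s, pred_e)         # task knowledge (model matching)
    r_a = r(judgments, criterion) # judgment achievement
    # Unmodeled (residual) knowledge component
    C = r(judgments - pred_s, criterion - pred_e)
    return {"achievement": r_a, "knowledge": G, "consistency": R_s,
            "predictability": R_e, "unmodeled": C}
```

Because OLS residuals are orthogonal to the fitted values, the components satisfy the exact identity r_a = G·R_s·R_e + C·√(1−R_s²)·√(1−R_e²), so achievement can be attributed to task knowledge, consistency, and environmental predictability.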

ELMA’s utility is demonstrated through the investigation of two tasks: 1) judging the probability of an air traffic conflict using heading and speed cues, and 2) judging the quality of population-based hypertension care using cues on patient outcomes (e.g., blood pressure) and processes (e.g., medications prescribed). Across both tasks, ELMA revealed that participants supported at the cognitive level of cue comprehension attained significantly higher judgment achievement than those supported at the lower level of cue perception. Decomposition of achievement indicated that these differences were due predominantly to the consistency with which individuals executed judgments rather than to task knowledge. Additionally, in the quality of hypertension care task, reliability across participants was significantly higher with cue perception and cue comprehension support than with automated assessment support.

ELMA is a useful tool for systems engineers as it provides both a systematic framework to inform automation design choices and a quantitative method to evaluate human-automation judgment systems.

PHD (Doctor of Philosophy)
human-automation interaction, judgment, judgment analysis, level of automation, automation display content
All rights reserved (no additional license for public reuse)