Human-Agent Teaming: A Reinforcement Learning Approach to Trust Dynamics Integration and Collaboration

Author:
Jafari Meimandi, Kiana, Systems Engineering - School of Engineering and Applied Science, University of Virginia
Advisor:
Bolton, Matthew, Systems Engineering, University of Virginia
Abstract:

In the dynamic intersection of artificial intelligence (AI) and human collaboration, this dissertation presents a comprehensive exploration of human-agent teaming (HAT), a field dedicated to enhancing the synergies between human capabilities and machine intelligence. Through a systematic and interdisciplinary approach, this body of work seeks to understand and improve the collaboration, communication, and decision-making processes within HAT. The dissertation introduces a novel framework, grounded in the principles of Reinforcement Learning (RL), that extends traditional RL constructs to incorporate social considerations, situational awareness, and mental models, with an emphasis on the critical role of social dynamics and trust in effective teaming.
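
To make the framework's core idea concrete, the sketch below shows one way the standard RL state could be augmented with the social constructs the framework names. The field names and types are illustrative assumptions about such a representation, not the dissertation's actual formulation.

```python
# Minimal sketch: augmenting the usual RL state with social constructs.
# Field names and types are illustrative assumptions, not the
# dissertation's exact formulation.
from dataclasses import dataclass, field

@dataclass
class TeamingState:
    env_state: object                 # the usual task/environment observation
    trust_level: float = 0.5          # estimated human trust in the agent, in [0, 1]
    situational_awareness: float = 0.0  # proxy for shared awareness of the task
    mental_model: dict = field(default_factory=dict)  # beliefs about the human's intent/plan
```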

Diving deeper into the nuances of trust within human-AI interactions, this dissertation challenges conventional trust assessment methods by introducing an approach that analyzes gameplay behaviors as implicit indicators of trust levels. Using the Overcooked-AI environment, the work demonstrates that non-verbal cues and action patterns can serve as reliable predictors of trust, offering a more efficient alternative to traditional questionnaire-based methods. This shift toward an action-oriented assessment of trust paves the way for real-time trust calibration in adaptive systems, fundamentally altering the landscape of human-AI collaboration.
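
The sketch below illustrates the general idea of inferring trust from behavioral signals rather than questionnaires. The specific features (follow rate, proximity, subtask share) and the choice of classifier are hypothetical stand-ins, not the dissertation's actual pipeline.

```python
# Minimal sketch: estimating trust from gameplay behavior instead of a
# questionnaire. Feature names and the classifier choice are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-episode behavioral features from Overcooked-AI
# trajectories: how often the human follows the agent's lead, how closely
# the two players coordinate, and how the human shares subtasks.
X_train = np.array([
    [0.82, 0.64, 0.71],   # [follow_rate, proximity, subtask_share]
    [0.31, 0.22, 0.40],
    [0.75, 0.58, 0.66],
    [0.20, 0.30, 0.25],
])
y_train = np.array([1, 0, 1, 0])  # 1 = high reported trust, 0 = low

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# At run time, trust is estimated from actions alone -- no self-report needed.
new_episode = np.array([[0.70, 0.60, 0.55]])
print(clf.predict_proba(new_episode)[0, 1])  # estimated probability of high trust
```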

Further enriching the discourse on trust dynamics, the dissertation integrates a trust prediction model into an RL framework within the Overcooked-AI environment. This integration enables real-time prediction of dynamic trust levels and adjusts the agent's reward strategy to encourage trust-building actions. The implementation of shaped and sparse rewards, along with a bonus point mechanism, ensures that the RL agent prioritizes trust within its decision-making processes. Experimental results validate the efficacy of this trust-aware approach, demonstrating superior performance in human-proxy gaming scenarios compared to standard non-trust-aware agents.
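
A minimal sketch of how such a trust-aware reward signal could combine the sparse task reward, a shaped reward, and a trust bonus is shown below. The abstract names these three components; the specific weighting and the interface to the trust model are assumptions for illustration.

```python
# Minimal sketch of a trust-aware reward combining the sparse task reward,
# a shaped intermediate reward, and a bonus for trust-building actions.
# The weights and the trust_gain interface are illustrative assumptions.
def trust_aware_reward(sparse_reward: float,
                       shaped_reward: float,
                       trust_gain: float,
                       shaping_weight: float = 0.5,
                       trust_bonus: float = 1.0) -> float:
    """Combine task rewards with a bonus when predicted trust increases.

    sparse_reward: environment reward for completing a dish (Overcooked-AI).
    shaped_reward: denser intermediate reward (e.g., picking up ingredients).
    trust_gain: change in the trust model's predicted trust level after the
        agent's action; positive values indicate trust-building behavior.
    """
    bonus = trust_bonus * max(trust_gain, 0.0)  # reward only trust increases
    return sparse_reward + shaping_weight * shaped_reward + bonus
```

Folding the bonus directly into the scalar reward keeps the agent compatible with any standard RL algorithm while still prioritizing trust in its decision-making.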

This work bridges the gap between AI and human-machine interaction research, offering new perspectives and methodologies for enhancing HAT. By synthesizing systems-theoretic insights with RL principles and trust dynamics, it lays the groundwork for future innovations in the field, promising more adaptive, effective, and trustworthy human-agent collaborations.

Degree:
PhD (Doctor of Philosophy)
Keywords:
Human-Agent Teaming, Human-AI Collaboration, Trust, Reinforcement Learning
Language:
English
Rights:
All rights reserved (no additional license for public reuse)
Issued Date:
2024/04/23