The Role of Reward, Risk, and Behavior in Trust Formation and Collaboration Among Multi-Agent Systems

Author:
Kapadia, Nikesh, Systems Engineering - School of Engineering and Applied Science, University of Virginia
Advisor:
Gerber, Matthew, Department of Systems Engineering, University of Virginia
Abstract:

The concept of trust is multifaceted and widely used to understand dynamics within multi-agent systems (MAS). Various academic disciplines study trust to understand the interactions and decisions of humans and/or artificial agents. We define trust as the extent to which an agent is willing to take on risk governed by the behavior of another agent.

This research formulates trust as a decision process under the reinforcement learning (RL) framework. Distinct from previous work, trust is formalized as an action, enabling meaningful measurement of the construct as the expected return with consideration of the variance of the partner's behavior. The framework facilitates the investigation of the roles of reward, risk, and partner behavior in trust formation and collaboration between agents. We examine these characteristics between two agents operating in a simulated gridworld environment.
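The formalization above treats trusting as an action whose value is the expected return, penalized by the variance of the partner's observed behavior. A minimal sketch of such a risk-conscious decision rule (the function names, the `risk_weight` penalty parameter, and the fixed `safe_return` alternative are illustrative assumptions, not the thesis's exact formulation):

```python
import statistics

def trust_value(partner_returns, risk_weight):
    """Risk-adjusted value of trusting: the mean of the returns observed
    when relying on the partner, penalized by their variance."""
    mean = statistics.mean(partner_returns)
    var = statistics.pvariance(partner_returns)
    return mean - risk_weight * var

def choose_action(partner_returns, safe_return, risk_weight=0.5):
    """Trust (collaborate) only when the risk-adjusted value of relying on
    the partner exceeds the guaranteed payoff of acting alone."""
    if trust_value(partner_returns, risk_weight) > safe_return:
        return "trust"
    return "act_alone"

# A reliable partner: same mean payoff, low variance -> agent trusts.
print(choose_action([1.0, 0.9, 1.1, 1.0], safe_return=0.5))  # trust
# An erratic partner: same mean payoff, high variance -> agent withholds trust.
print(choose_action([2.0, 0.0, 2.0, 0.0], safe_return=0.5))  # act_alone
```

The two calls illustrate the central point: both partners yield the same expected return, but the variance of the erratic partner's behavior drives the risk-adjusted value below the safe alternative, so trust is withheld.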

We find that information about the partner's behavior and the willingness to take risks are crucial to trust formation. When agents make risk-conscious decisions, mutual collaboration rates of up to 62.54% can be achieved. However, there is a trade-off: high values of trust can lead to over-trust, situations in which one agent trusts the other to its own detriment. The agent must then adapt how much risk it is willing to assume to control for these mis-coordinated outcomes.

We propose several avenues for future work in which the framework estimates and integrates risk into the agent's decision-making process. The framework can be used to further articulate interdependencies and characterize interactions, and can be extended to larger multi-agent systems.

Degree:
MS (Master of Science)
Keywords:
trust modeling, multi-agent systems, human-machine collaboration, risk, social dilemmas
Language:
English
Rights:
All rights reserved (no additional license for public reuse)
Issued Date:
2019/04/24