To collaborate effectively, humans and AI agents need to trust each other. Communication between teammates is an essential component in achieving this, as it makes the AI system more understandable to humans. However, previous research has paid little attention to how an AI agent builds trust in its human teammate and, consequently, to how the AI's beliefs about that trust can be communicated to the human. This study therefore explores how real-time visual explanations of the AI agent's trust in its human teammate influence human trust and overall satisfaction. In a user experiment (n=46) conducted in an Urban Search and Rescue simulation, an agent providing trust explanations was compared against a baseline providing no such information. Results show a statistically significant increase in both human trust and satisfaction when the explanations are provided, highlighting the need for further exploration of methods for communicating trust.