The Impact of Tailoring Agent Explanations According to Human Performance on Human-AI Teamwork
Abstract
Nowadays, artificial intelligence (AI) systems are embedded in more aspects of our lives than ever before. Autonomous AI systems (agents) aid people in mundane daily tasks and even outperform humans in several cases. However, agents still depend on humans in unexpected circumstances. Thus, the main goal of these agents has shifted from becoming independent systems to becoming interdependent ones that collaborate with humans. This collaboration is far from perfect and could be improved in several respects. Communication is crucial for seamless collaboration, and a key aspect of communication is explainability. This paper studies the impact of tailoring explanations to human performance in a well-defined collaborative human-agent teaming (HAT) urban search-and-rescue (USAR) task environment. A controlled between-subjects experiment was conducted with two different agent implementations, under the hypothesis that when an agent provides explanations tailored to human performance, the collaborative performance, the human's trust towards the agent, and the human's satisfaction with the explanations would increase. The results confirmed this for explanation satisfaction, but not for the trust and performance metrics; in fact, tailoring resulted in decreased collaborative performance. The research contributes to the bigger picture of how tailoring explanations to various factors affects overall collaborative performance and the systematic actualisation of HAT.