How do adaptive explanations that become more abstract over time influence human supervision over and trust in the robot?

Abstract

As human-agent collaboration becomes increasingly prevalent, it is crucial to understand and enhance the interaction between humans and AI systems. Explainable AI, in which agents convey essential information to humans to support decision-making, is fundamental to this interaction. This paper investigates how adaptive explanations affect human supervision of and trust in robotic systems. In a study with 40 participants, baseline (non-adaptive) explanations were compared with adaptive explanations that become more abstract over time. The results showed no significant difference between the two types of explanations: making explanations more abstract did not necessarily improve human supervision or increase trust in the robot.

Files

CSE3000_Final_Paper.pdf
(PDF | 0.997 MB)
Unknown license