Explainable AI for human supervision over firefighting robots
The influence of on-demand explanations on human trust
Abstract
In human-AI agent interactions, providing clear visual or textual explanations for the agent's actions and decisions is crucial for successful collaboration. This research investigates whether displaying visual explanations only on demand, rather than showing them continuously (the baseline), affects the human supervisor's trust in and satisfaction with the AI agent. To explore this hypothesis, a case study with 40 participants was conducted, with participants divided into two groups: one interacting with the on-demand condition and the other with the baseline condition. Questionnaires were used to measure the participants' capacity and moral trust in the robot, their satisfaction with the explainable artificial intelligence (XAI) explanations, and their rate of disagreement with the robot's decisions. Demographic data, including gender, age, education, gaming experience, risk propensity, trust propensity, and utilitarianism, was also collected to explore whether participants' backgrounds could influence the collaboration. The statistical analyses indicated no significant differences between the baseline and on-demand conditions with respect to trust or explanation satisfaction, suggesting that the overall collaboration was not primarily affected by how often visual explanations were requested on demand. Although the results indicated high satisfaction with the interaction, further studies with more diverse user groups are recommended. Overall, this research reinforces the importance of transparency in decision-making processes during collaboration between an AI agent and a human supervisor.