The integration of robots into human-robot teams, particularly in high-stakes environments like firefighting, requires effective communication and decision-making to ensure safety and efficiency. This study examines how adding contrastive explanations to feature attributions in a robot's explanations affects human-robot teamwork during firefighting simulations. Contrastive explanations aim to improve human understanding by highlighting why the robot chose one decision over an alternative, based on how the underlying variables were allocated. The experiment involved 40 participants, split into two groups that interacted with either the baseline or the contrastive version of the robot in the simulated environment. Results indicate that contrastive explanations significantly increased participants' capacity trust in the robot, though they did not significantly affect moral trust. However, satisfaction with the robot's explanations was lower in the contrastive group. The disagreement rate between human decisions and robot actions was also lower in the contrastive group, suggesting improved understanding of, and agreement with, the robot's decisions. These findings underscore the potential of contrastive explanations to strengthen trust and collaboration in human-robot teams, paving the way for more effective integration of robots into critical operations. Future research should use larger sample sizes and explore including contrastive decisions made by the robot alongside its explanations to further validate these findings.
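
To make the mechanism concrete, the following minimal Python sketch shows one way a contrastive explanation ("why A rather than B?") could be derived from per-action feature attributions. It is an illustrative assumption, not the implementation used in the study; the function name, the firefighting feature names, and the attribution values are all hypothetical.

from typing import Dict

def contrastive_explanation(
    attributions_chosen: Dict[str, float],
    attributions_alternative: Dict[str, float],
    chosen: str,
    alternative: str,
    top_k: int = 2,
) -> str:
    """Build a 'why A rather than B?' sentence from feature attributions.

    Each dict maps a feature name to its attribution score for one
    candidate action (higher = stronger support for that action).
    The features with the largest attribution gap in favour of the
    chosen action become the contrastive reasons.
    """
    # A positive gap means the feature favours the chosen action
    # over the alternative.
    gaps = {
        feature: attributions_chosen[feature]
        - attributions_alternative.get(feature, 0.0)
        for feature in attributions_chosen
    }
    reasons = sorted(gaps, key=gaps.get, reverse=True)[:top_k]
    reason_text = " and ".join(f"{f} ({gaps[f]:+.2f})" for f in reasons)
    return (
        f"I chose '{chosen}' over '{alternative}' "
        f"mainly because of {reason_text}."
    )

# Hypothetical attribution scores for two candidate actions.
chosen_scores = {"smoke_level": 0.8, "victim_likelihood": 0.6, "distance": 0.1}
alt_scores = {"smoke_level": 0.3, "victim_likelihood": 0.5, "distance": 0.4}
print(contrastive_explanation(chosen_scores, alt_scores,
                              "search office", "search hallway"))

Run on these sample values, the sketch prints that the office was searched first mainly because of smoke_level (+0.50) and victim_likelihood (+0.10), i.e. it turns raw feature attributions into the kind of decision-contrasting statement the study's contrastive condition adds on top of plain attributions.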