R.S. Verhagen

As human-agent collaboration grows increasingly prevalent, it is crucial to understand and enhance the interaction between humans and AI systems. Explainable AI is fundamental to this interaction, which involves agents conveying essential information to humans for decision-ma ...

Influence of Global Explanations on Human Supervision and Trust in Agent

Explainable AI for human supervision over firefighting robots

With the rise of AI presence in various contexts and spheres of life, ensuring effective human-AI collaboration, especially in critical domains, is of utmost importance. Explanations given by an AI agent can be of great assistance for this purpose. This study investigate ...

Explainable AI for Human Supervision over Firefighting Robots

How Do Textual and Visual Explanations Affect Human Supervision and Trust in the Robot?

As artificially intelligent agents become integrated into various sectors, they require an analysis of their capacity to make moral decisions and of the influence of human supervision on their performance. This study investigates the impact of textual feature explanat ...

Explainable AI for human supervision over firefighting robots

The influence of on-demand explanations on human trust

In human-AI agent interactions, providing clear visual or textual explanations for the agent's actions and decisions is crucial for ensuring successful collaboration. This research investigates whether having the visual explanations displayed only on-demand, instead of having the ...

The integration of robots in human-robot teams, particularly in high-stakes environments like firefighting, requires effective communication and decision-making to ensure safety and efficiency. This study explores the impact of adding contrastive explanations to feature attributi ...

Agent Failure and Trust Repair in Human-Agent Teams

Interdependence Impact on Trust Repair Strategy and Collaboration Fluency in Human-AI Team

Interdependence relationships between humans and agents play a crucial role in the collaborative AI field. This research paper examines the impact of interdependence on trust violation, trust repair strategies, and collaboration fluency in human-AI teams. It compares independent ...

The Influence of Interdependence on Trust Repair in Human-Agent Teams

Comparing the Effectiveness of Trust Repair Strategies in Full Independence and Complementary Independence

As autonomous systems are increasingly integrated as team members for collaborative tasks, trust in human-agent teams (HAT) becomes crucial to foster success. In many real-world scenarios, trust violations are expected, thus demanding the use of trust repair strategies to restor ...

Intelligent agents are increasingly required to engage in collaboration with humans in the context of human-agent teams (HATs) to achieve shared goals. Interdependence is a fundamental concept in teamwork. It enables humans and robots to leverage their capabilities and collaborat ...

Agent Failure, Trust Repair, and Fluency in Human-AI Teams

Impact of Opportunistic Interdependence Relationship on Trust Violation, Trust Repair, and on Collaboration Fluency in a Human-Agent Team

Nowadays, Human Autonomy Teams (HATs) are incorporated in many fields, where humans and autonomous agents work collaboratively to combine their capabilities with the ultimate goal of performing tasks more efficiently. In such environments, it is imperative to sustain a high level ...

Explainable AI (XAI) has gained increasing attention from researchers aiming to improve human interaction with AI systems. In the context of human-agent teamwork (HAT), providing explainability to the agent helps to increase shared team knowledge and belief, th ...

Artificial intelligence systems assist humans in an increasing number of cases. However, such systems' lack of explainability can degrade teamwork performance, as humans might not cooperate with or trust systems whose black-box algorithms are opaque to them. This research attemp ...

Artificial intelligence (AI) systems are embedded in more aspects of our lives than ever before. Autonomous AI systems (agents) aid people in mundane daily tasks, even outperforming humans in several cases. However, agents still depend on humans in unexpecte ...

Aligning human trust to correspond with an agent's trustworthiness is an essential collaborative element within Human-Agent Teaming (HAT). Misalignment of trust could cause sub-optimal usage of the agent. Trust can be influenced by providing explanations which clarify the agent's ...

Communication is one of the main challenges in Human-Agent Teams (HATs). An important aspect of communication in HATs is the use of explanation styles. This thesis examines the influence of an explainable agent adapting its explanation style to a supervising human team leader on ...

Collaboration between AI (Artificial Intelligence) agents and humans is essential for achieving complex goals more efficiently. Many aspects influence effective teamwork; one of them is trust. In addition, sharing the mental model would improve the un ...

Understanding trust in human-agent teams is of utmost importance if we want to ensure efficient and effective collaboration. It is well known that predictability is a core component of trust; however, it is still unclear what kind of information an agent should share in order t ...

Mutual predictability is a contributing factor to mutual trust and is known to improve effectiveness in human-agent teamwork. As team members communicate to coordinate the team through the task, the question arises as to what information the human shoul ...