

Appropriate Trust in Artificial Intelligence (AI) systems has rapidly become an important area of focus for both researchers and practitioners. Various approaches have been used to achieve it, such as confidence scores, explanations, trustworthiness cues, or uncertainty communica ...
As human-machine teams become a more common scenario, we need to ensure mutual trust between humans and machines. More important than having trust, we need all teammates to trust each other appropriately. This means that they should not overtrust or undertrust each other, avoidin ...
Appropriate trust, trust which aligns with system trustworthiness, in Artificial Intelligence (AI) systems has become an important area of research. However, there remains debate in the community about how to design for appropriate trust. This debate is a result of the complex na ...
Introduction: Humans and robots are increasingly collaborating on complex tasks such as firefighting. As robots are becoming more autonomous, collaboration in human-robot teams should be combined with meaningful human control. Variable autonomy approaches can ensure meaningful hu ...
In human-machine teams, the strengths and weaknesses of both team members result in dependencies, opportunities, and requirements to collaborate. Managing these interdependence relationships is crucial for teamwork, as it is argued that they facilitate accurate trust calibration. ...
Appropriate trust is an important component of the interaction between people and AI systems, in that "inappropriate" trust can cause disuse, misuse, or abuse of AI. To foster appropriate trust in AI, we need to understand how AI systems can elicit appropriate levels of trust from ...

Explainable AI for All

A Roadmap for Inclusive XAI for people with Cognitive Disabilities

Artificial intelligence (AI) is increasingly prevalent in our daily lives, setting specific requirements for responsible development and deployment: The AI should be explainable and inclusive. Despite substantial research and development investment in explainable AI, there is a l ...
This paper explores the potential of conversational intermediary AI (CIAI) between patients and healthcare providers, focusing specifically on promoting healthier lifestyles for Type 2 diabetes. CIAI aims to address the constraint of limited healthcare provider time by acting as ...
Agent-based training systems can enhance people's social skills. The effective development of these systems needs a comprehensive architecture that outlines their components and relationships. Such an architecture can pinpoint improvement areas and future outlooks. This paper pre ...
In teams composed of humans, we use trust in others to make decisions, such as what to do next, who to help and who to ask for help. When a team member is artificial, they should also be able to assess whether a human teammate is trustworthy for a certain task. We see trustworthi ...
Establishing an appropriate level of trust between people and AI systems is crucial to avoid the misuse, disuse, or abuse of AI. Understanding how AI systems can generate appropriate levels of trust among users is necessary to achieve this goal. This study focuses on the impact o ...
Increased levels of user control in learning systems is commonly cited as good AI development practice. However, the evidence as to the effect of perceived control over trust in these systems is mixed. This study investigated the relationship between different trust dimensions an ...
Introduction: Collaboration in teams composed of both humans and automation has an interdependent nature, which demands calibrated trust among all the team members. For building suitable autonomous teammates, we need to study how trust and trustworthiness function in such teams. ...
For human-agent teams to be successful, agent explanations are crucial. These explanations should ideally be personalized by adapting them to intended human users. So far, little work has been conducted on personalized agent explanations during human-agent teamwork. Therefore, an ...
For personal assistive technologies to effectively support users, they need a user model that records information about the user, such as their goals, values, and context. Knowledge-based techniques can model the relationships between these concepts, enabling the support agent to ...

What Can I Do to Help You?

A Formal Framework for Agents Reasoning About Behavior Change Support for People

Changing one’s behavior is difficult, so many people look towards technology for help. However, most current behavior change support systems are inflexible in that they support one type of behavior change and do not reason about how that behavior is embedded in larger behavio ...
Intelligent systems are increasingly entering the workplace, gradually moving away from technologies supporting work processes to artificially intelligent (AI) agents becoming team members. Therefore, a deep understanding of effective human-AI collaboration within the team contex ...
Human-AI teams count on both humans and artificial agents to work together collaboratively. In human-human teams, we use trust to make decisions. Similarly, our work explores how an AI can use trust (in human teammates) to make decisions while ensuring the team’s goal and mitigat ...

Piecing Together the Puzzle

Understanding Trust in Human-AI Teams

With the increasing adoption of Artificial intelligence (AI) as a crucial component of business strategy, establishing trust between humans and AI teammates remains a key issue. The project “We are in this together” highlights current theories on trust in Human-AI teams (HAIT) an ...
Virtual patients (VPs) offer an affordable and feasible method to train individuals when compared to patient-actors. They can provide training on communication skills, such as motivational interviewing and conflict resolution, and often facilitate a change in patients’ thinking a ...