356 records found

Appropriate Trust in Artificial Intelligence (AI) systems has rapidly become an important area of focus for both researchers and practitioners. Various approaches have been used to achieve it, such as confidence scores, explanations, trustworthiness cues, or uncertainty communica ...

NegoLog

An Integrated Python-based Automated Negotiation Framework with Enhanced Assessment Components

The complexity of automated negotiation research calls for dedicated, user-friendly research frameworks that facilitate advanced analytics, comprehensive loggers, visualization tools, and auto-generated domains and preference profiles. This paper introduces NegoLog, a platform th ...
With the growing capabilities and pervasiveness of AI systems, societies must collectively choose between reduced human autonomy, endangered democracies and limited human rights, and AI that is aligned to human and social values, nurturing collaboration, resilience, knowledge and ...
Appropriate trust is an important component of the interaction between people and AI systems, in that "inappropriate" trust can cause disuse, misuse, or abuse of AI. To foster appropriate trust in AI, we need to understand how AI systems can elicit appropriate levels of trust from ...
Disagreements are common in online societal deliberation and may be crucial for effective collaboration, for instance in helping users understand opposing viewpoints. Although there exist automated methods for recognizing disagreement, a deeper understanding of factors that influ ...
Large-scale survey tools enable the collection of citizen feedback in opinion corpora. Extracting the key arguments from a large and noisy set of opinions helps in understanding the opinions quickly and accurately. Fully automated methods can extract arguments but (1) require lar ...
We adopt an emerging and prominent vision of human-centred Artificial Intelligence that requires building trustworthy intelligent systems. Such systems should be capable of dealing with the challenges of an interconnected, globalised world by handling plurality and by abiding by ...
Relevancy is a prevalent term in value alignment. We either need to keep track of the relevant moral reasons, we need to embed the relevant values, or we need to learn from the relevant behaviour. What relevancy entails in particular cases, however, is often ill-defined. The reas ...
In teams composed of humans, we use trust in others to make decisions, such as what to do next, who to help and who to ask for help. When a team member is artificial, they should also be able to assess whether a human teammate is trustworthy for a certain task. We see trustworthi ...

Commissioning for integration

Exploring the dynamics of the “subsidy tables” approach in Dutch social care delivery

Purpose: The objective of this paper is to develop a redesigned commissioning process for social care services that fosters integrated care, encourages collaboration and balances professional expertise with client engagement. Design/methodology/approach: This study employs a two- ...
Epistemic logic can be used to reason about statements such as ‘I know that you know that I know that φ’. In this logic, and its extensions, it is commonly assumed that agents can reason about epistemic statements of arbitrary nesting depth. In contrast, empirical findings on Th ...
Presenting high-level arguments is a crucial task for fostering participation in online societal discussions. Current argument summarization approaches miss an important facet of this task, capturing diversity, which is important for accommodating multiple perspectives. We introduc ...
As human-machine teams become a more common scenario, we need to ensure mutual trust between humans and machines. More important than having trust, we need all teammates to trust each other appropriately. This means that they should not overtrust or undertrust each other, avoidin ...

Nudging human drivers via implicit communication by automated vehicles

Empirical evidence and computational cognitive modeling

Understanding behavior of human drivers in interactions with automated vehicles (AV) can aid the development of future AVs. Existing investigations of such behavior have predominantly focused on situations in which an AV a priori needs to take action because the human has the rig ...
Support agents that help users in their daily lives need to take into account not only the user’s characteristics, but also the social situation of the user. Existing work on including social context uses some type of situation cue as an input to information processing techniques ...
Establishing an appropriate level of trust between people and AI systems is crucial to avoid the misuse, disuse, or abuse of AI. Understanding how AI systems can generate appropriate levels of trust among users is necessary to achieve this goal. This study focuses on the impact o ...
For personal assistive technologies to effectively support users, they need a user model that records information about the user, such as their goals, values, and context. Knowledge-based techniques can model the relationships between these concepts, enabling the support agent to ...
Values, such as freedom and safety, are the core motivations that guide us humans. A prerequisite for creating value-aligned multiagent systems that involve humans and artificial agents is value inference, the process of identifying values and reasoning about human value preferen ...
Channel allocation in dense, decentralized Wi-Fi networks is challenging due to the highly nonlinear solution space and the difficulty of estimating the opponent’s utility model. So far, only centralized or mediated approaches have succeeded in applying negotiation to this settin ...