
153 records found

Authored

The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University's Department of Computer Science, presented the 2018 Spring Symposium Series, held March 26-28, 2018, on the campus of Stanford University. The seven symposia held were AI and S ...

Interactive Learning and Decision Making

Foundations, Insights & Challenges

Designing "teams of intelligent agents that successfully coordinate and learn about their complex environments inhabited by other agents (such as humans)" is one of the major goals of AI, and it is the challenge that I aim to address in my research. In this paper I give an overvi ...

The MADP Toolbox

An Open Source Library for Planning and Learning in (Multi-)Agent Systems

This article describes the MultiAgent Decision Process (MADP) toolbox, a software library to support planning and learning for intelligent agents and multiagent systems in uncertain environments. Key features are that it supports partially observable environments and stochastic t ...

This article contains the reports of the AI for Human-Robot Interaction, Cognitive Assistance in Government and Public Sector Applications, Deceptive and Counter-Deceptive Machines, Self-Confidence in Autonomous Systems, and Sequential Decision Making for Intelligent Agents sympo ...

In cooperative multi-agent sequential decision making under uncertainty, agents must coordinate to find an optimal joint policy that maximises joint value. Typical algorithms exploit additive structure in the value function, but in the fully-observable multi-agent MDP (MMDP) sett ...

Over the last decade, methods for multiagent planning under uncertainty have increased in scalability. However, many methods assume value factorization or are not able to provide quality guarantees. We propose a novel family of influence-optimistic upper bounds on the optimal val ...
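
For reference, the additive structure that such algorithms exploit is typically of the following form (a generic sketch; the truncated abstracts above do not show the exact factorization used in these papers):

    \[ Q(s, \mathbf{a}) \;\approx\; \sum_{e} Q_e(s_e, \mathbf{a}_e), \]

where each local term Q_e depends only on the state and action variables of a small subset of agents, so that maximisation over the joint action can be carried out by local computations.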

The MADP Toolbox

An Open-Source Library for Planning and Learning in (Multi-)Agent Systems

This article describes the MultiAgent Decision Process (MADP) toolbox, a software library to support planning and learning for intelligent agents and multiagent systems in uncertain environments. Some of its key features are that it supports partially observable environments a ...

Nowadays, multiagent planning under uncertainty scales to tens or even hundreds of agents. However, current methods either are restricted to problems with factored value functions, or provide solutions without any guarantees on quality. Methods in the former category typically bu ...
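
The sense in which such an upper bound yields a quality guarantee can be stated in one line: if \bar{V} is any upper bound on the optimal value V^* and \pi is a heuristic joint policy with value V(\pi), then

    \[ V^* - V(\pi) \;\le\; \bar{V} - V(\pi), \]

so the computable gap \bar{V} - V(\pi) bounds how far \pi can be from optimal.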

Planning under uncertainty poses a complex problem in which multiple objectives often need to be balanced. When dealing with multiple objectives, it is often assumed that the relative importance of the objectives is known a priori. However, in practice human decision makers of ...
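
The a-priori assumption mentioned here is usually made concrete via a weight vector w that collapses the vector-valued return into a scalar (a standard linear scalarization, shown only to illustrate the assumption, not a construction specific to this paper):

    \[ V_{\mathbf{w}}(\pi) \;=\; \sum_{i} w_i\, V_i(\pi), \qquad w_i \ge 0, \ \sum_i w_i = 1. \]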

Decentralized POMDPs (Dec-POMDPs) are becoming increasingly popular as models for multiagent planning under uncertainty, but solving a Dec-POMDP exactly is known to be an intractable combinatorial optimization problem. In this paper we apply the Cross-Entropy (CE) method, a re ...
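
For readers unfamiliar with the CE method, the sketch below shows the generic recipe in Python: sample candidate solutions from a parametric distribution, keep the best-scoring fraction, and refit the distribution towards them. The names and parameters are illustrative only and do not reflect the paper's specific Dec-POMDP policy parameterisation.

    import numpy as np

    def cross_entropy_search(evaluate, n_vars, n_actions,
                             n_samples=200, n_elite=20, n_iters=50, alpha=0.7):
        """Generic Cross-Entropy search over discrete assignments.

        Maintains an independent categorical distribution per variable,
        samples candidates, keeps the elite fraction, and refits the
        distribution towards the elites (with smoothing factor alpha).
        """
        # theta[i, a] = probability of choosing value a for variable i
        theta = np.full((n_vars, n_actions), 1.0 / n_actions)
        best, best_score = None, -np.inf

        for _ in range(n_iters):
            # sample candidate solutions from the current distribution
            samples = np.array([
                [np.random.choice(n_actions, p=theta[i]) for i in range(n_vars)]
                for _ in range(n_samples)
            ])
            scores = np.array([evaluate(x) for x in samples])

            # remember the best candidate seen so far
            if scores.max() > best_score:
                best_score, best = scores.max(), samples[scores.argmax()]

            # refit: smoothed update towards the empirical elite frequencies
            elite = samples[np.argsort(scores)[-n_elite:]]
            freq = np.array([
                np.bincount(elite[:, i], minlength=n_actions) / n_elite
                for i in range(n_vars)
            ])
            theta = alpha * freq + (1 - alpha) * theta

        return best, best_score

In a Dec-POMDP setting, evaluate would typically be a simulation-based estimate of the joint value of the candidate joint policy encoded by the assignment.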

In this paper we focus on distributed multiagent planning under uncertainty. For single-agent planning under uncertainty, the partially observable Markov decision process (POMDP) is the dominant model (see [Spaan and Vlassis, 2005] and references therein). Recently, several ge ...
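
For context, the POMDP referenced here maintains a belief b(s) over hidden states, updated by Bayes' rule after executing action a and receiving observation o (the standard belief update, not anything specific to this paper):

    \[ b'(s') \;\propto\; O(o \mid s', a) \sum_{s} T(s' \mid s, a)\, b(s), \]

where T and O denote the transition and observation models.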

Contributed

Sequential Decision Making for Intelligent Agents

Papers from the AAAI Fall Symposium

Sequential decision making under uncertainty has gained significant traction in Artificial Intelligence. In many applications, dealing explicitly with uncertainty regarding the effects of actions, the state of the environment, and possibly the behavior of other agents is crucial to ac ...