In the field of cooperative AI, the Overcooked AI environment was created, based on the popular game Overcooked. The environment was originally designed to study deep reinforcement learning, but it also supports cooperative planning methods, which are the focus of this paper. These methods include coupled planning with replanning and model-based planning. This paper attempts to reproduce the results obtained by the developers of the Overcooked AI environment and to improve the coupled planning algorithm to achieve higher scores. In particular, the planning methods, as well as an improved coupled planning algorithm in which failures are handled by deviating from optimal play, were evaluated both in self-play and against a human model under different numbers of game steps, and a study of collision failures was performed. The results show that extrapolating the original results is sub-optimal and that collision failures can be significantly reduced by handling collisions differently, namely by walking in the opposite direction.
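
To illustrate the collision-handling idea mentioned above, the following is a minimal sketch (not the paper's actual implementation, and all names such as `resolve_collision` are hypothetical): when an agent's planned move would collide with its partner's intended cell, the agent deviates from the optimal plan by moving in the opposite direction instead of repeating the blocked action.

```python
# Hypothetical sketch of the "walk in the opposite direction" collision fix.
# Moves are (dx, dy) grid offsets; positions are (x, y) grid cells.

OPPOSITE = {
    (0, 1): (0, -1),   # north -> south
    (0, -1): (0, 1),   # south -> north
    (1, 0): (-1, 0),   # east  -> west
    (-1, 0): (1, 0),   # west  -> east
}


def resolve_collision(agent_pos, planned_move, partner_next_pos):
    """Return the move to execute after checking for a collision.

    agent_pos: current cell of this agent.
    planned_move: move chosen by the (coupled) planner.
    partner_next_pos: cell the partner intends to occupy next.
    """
    intended_pos = (agent_pos[0] + planned_move[0],
                    agent_pos[1] + planned_move[1])
    if intended_pos == partner_next_pos:
        # Collision detected: deviate from the optimal plan by walking
        # in the opposite direction, freeing the contested cell.
        return OPPOSITE.get(planned_move, planned_move)
    return planned_move
```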