Conflicting demonstrations in Inverse Reinforcement Learning

Abstract

This paper investigates the effect of conflicting demonstrations on Inverse Reinforcement Learning (IRL). IRL is a method for inferring the intent of an expert from demonstrations alone, which makes it a promising approach in domains such as self-driving vehicles, where expert demonstrations are abundant. Demonstrations, however, may not all come from the same expert, and a single expert may prioritize different goals at different times. For example, a driver may not always do their grocery shopping at the same store, or may take a slightly different route on different occasions. The results show a negative effect of severely conflicting demonstrations on the ability of Maximum Entropy (MaxEnt) IRL to recover rewards, but are slightly more optimistic when the demonstrations involve more than two goals.
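
For background, MaxEnt IRL (Ziebart et al., 2008) models the expert as choosing trajectories with probability exponential in their cumulative reward, and fits the reward parameters by maximum likelihood. The following is a minimal sketch of the standard MaxEnt formulation, included here only for context; the notation ($\theta$ for reward weights, $f_s$ for state features, $\zeta$ for a trajectory) is generic and not taken from this paper:

\[
P(\zeta \mid \theta) \;\propto\; \exp\!\Big( \sum_{s \in \zeta} \theta^{\top} f_{s} \Big),
\qquad
\nabla_{\theta} \mathcal{L}(\theta) \;=\; \tilde{f} \;-\; \sum_{s} D_{s}\, f_{s},
\]

where $\tilde{f}$ is the empirical expected feature count of the demonstrations and $D_s$ is the expected state-visitation frequency under the current reward. One plausible reading of the effect studied here is that conflicting demonstrations pull $\tilde{f}$ toward an average of the experts' feature counts, so the recovered reward can fail to reflect any single expert's goal.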