Learning from Few Demonstrations with Frame-Weighted Motion Generation
Abstract
Learning from Demonstration (LfD) aims to learn versatile skills from human demonstrations. The field has been gaining popularity because it allows knowledge to be transferred to robots without requiring extensive expertise from the user. During task execution, the robot's motion is usually influenced by constraints imposed by the environment. In light of this, task-parameterized (TP) learning encodes the relevant contextual information in reference frames, enabling better skill generalization to new situations. However, most TP learning algorithms require multiple demonstrations under varied environment conditions to gather sufficient statistics for a meaningful model, and it is not trivial for robot users to set up all of these situations and demonstrate the task in each of them. Therefore, this paper presents a novel approach that learns a motion policy from few demonstrations by explicitly solving for reference frame weights along the task trajectory. Experimental results in both simulated and real robotic environments validate our approach.
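To make the idea of frame weighting concrete, the following is a minimal sketch, not the paper's implementation, of how per-frame weights could modulate a task-parameterized prediction. It assumes each reference frame contributes a Gaussian estimate of the end-effector position at a given time step, already expressed in the global frame, and that the estimates are blended with a weighted product of Gaussians; the number of frames, the weight values, and the blending rule are illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np

def weighted_frame_fusion(means, covs, weights):
    """Fuse per-frame Gaussian predictions using frame weights.

    means:   list of (D,) arrays, prediction mean from each reference frame
    covs:    list of (D, D) arrays, prediction covariance from each frame
    weights: list of scalars in [0, 1], relative importance of each frame
    Returns the fused mean and covariance (precision-weighted combination).
    """
    precision = np.zeros_like(covs[0])
    info = np.zeros_like(means[0])
    for mu, sigma, w in zip(means, covs, weights):
        lam = w * np.linalg.inv(sigma)   # the weight scales this frame's precision
        precision += lam
        info += lam @ mu
    fused_cov = np.linalg.inv(precision)
    fused_mean = fused_cov @ info
    return fused_mean, fused_cov

# Example: two hypothetical frames (start and goal) with a weight that
# shifts toward the goal frame as the trajectory progresses.
mu_start, cov_start = np.array([0.2, 0.1]), 0.01 * np.eye(2)
mu_goal,  cov_goal  = np.array([0.8, 0.5]), 0.02 * np.eye(2)
for w_goal in (0.1, 0.5, 0.9):
    mean, _ = weighted_frame_fusion(
        [mu_start, mu_goal], [cov_start, cov_goal], [1.0 - w_goal, w_goal])
    print(f"goal weight {w_goal}: fused position {mean.round(3)}")
```

In this sketch a larger weight increases a frame's effective precision, pulling the fused prediction toward that frame's estimate; the paper's contribution of solving for such weights along the trajectory would determine how they vary over time.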