Learning Multi-Reference Frame Skills from Demonstration with Task-Parameterized Gaussian Processes
Abstract
A central challenge in Learning from Demonstration is to generate representations that are adaptable and can generalize to unseen situations. This work proposes to learn such a representation for multi-reference-frame skill learning, without task-specific heuristics, by superimposing local skills in the global frame. Local policies are first learned by fitting the skill relative to each frame using Gaussian Processes (GPs). Then, another GP, which determines the relevance of each frame at every time step, is trained in a self-supervised manner from a separate batch of demonstrations. The uncertainty quantification capability of GPs is exploited to stabilize the local policies and to train the frame relevance in a fully Bayesian way. We validate the method on a dataset of multi-frame tasks generated in simulation and in real-world experiments with a robotic pick-and-place re-shelving manipulation task. We evaluate the performance of our method with two metrics: how close the generated trajectories get to each of the task goals, and the deviation between these trajectories and held-out expert trajectories. According to both metrics, the proposed method consistently outperforms the state-of-the-art baseline, the Task-Parameterized Gaussian Mixture Model (TPGMM).
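To make the pipeline concrete, below is a minimal sketch of the superposition idea the abstract describes: one GP is fit per reference frame on frame-local demonstrations, and at reproduction time the per-frame predictions are mapped into the global frame and fused with precision weights derived from each GP's predictive uncertainty, optionally scaled by a per-step frame-relevance weight. All names here (fit_local_gp, fuse, frame_origins, relevance) are illustrative assumptions, not the authors' implementation; in the paper the relevance is itself a GP trained self-supervised, whereas here it is just an optional weight array, and positions are 1-D for brevity.

```python
# Hedged sketch of multi-frame skill superposition with GPs.
# Assumes demonstrations are already expressed in each frame's local coordinates.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_local_gp(t, x_local):
    """Fit a local policy: a GP mapping time -> demonstrated position
    expressed in one reference frame's coordinates."""
    kernel = RBF(length_scale=0.1) + WhiteKernel(noise_level=1e-4)
    return GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(
        t.reshape(-1, 1), x_local
    )

def fuse(t_query, local_gps, frame_origins, relevance=None):
    """Superimpose local GP policies in the global frame via
    precision-weighted (product-of-Gaussians) fusion. Frames whose GP
    is uncertain, or whose relevance weight is low, contribute less."""
    T = t_query.reshape(-1, 1)
    num, den = 0.0, 0.0
    for k, gp in enumerate(local_gps):
        mu_local, std = gp.predict(T, return_std=True)
        mu_global = mu_local + frame_origins[k]  # 1-D rigid transform to global frame
        w = 1.0 / np.maximum(std, 1e-6) ** 2     # GP precision as confidence
        if relevance is not None:                # hypothetical per-step relevance
            w = w * relevance[:, k]
        num += w * mu_global
        den += w
    return num / den

# Toy usage: two frames, one noisy demonstration local to each.
t = np.linspace(0.0, 1.0, 50)
demo_frame_a = t**2 + 0.01 * np.random.randn(50)         # skill relative to frame A
demo_frame_b = (1.0 - t)**2 + 0.01 * np.random.randn(50) # skill relative to frame B
gps = [fit_local_gp(t, demo_frame_a), fit_local_gp(t, demo_frame_b)]
trajectory = fuse(t, gps, frame_origins=[0.0, 2.0])
```

The precision-weighted fusion is what makes the combination "fully Bayesian" in spirit: each frame's GP reports its own predictive variance, so no hand-tuned heuristic is needed to decide which frame dominates at which phase of the motion.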
Files
File under embargo until 25-06-2025