Tactile-Based Self-supervised Pose Estimation for Robust Grasping
Abstract
We consider the problem of estimating an object’s pose in the absence of visual feedback once contact between the object and the robotic fingers has been made during grasping. Information about the object’s pose enables precise placement after a successful grasp; if the grasp fails, knowing the object’s pose after the attempt can likewise guide a re-grasp. We develop a data-driven approach that computes the object pose from tactile data in a self-supervised manner once object-finger contact is established. Additionally, we evaluate how different feature representations, machine learning algorithms, and object properties affect pose estimation accuracy. Unlike existing approaches, our method requires no prior knowledge of the object and makes no assumptions about grasp stability. In experiments, we show that our approach estimates object poses to within 2 cm translational and 20° rotational error, even under changed object properties and unsuccessful grasps.
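As a rough illustration of the kind of learned mapping the abstract describes (not the authors' actual pipeline), the sketch below trains an off-the-shelf regressor to map flattened tactile readings to a planar object pose. All names, dimensions, and the choice of model are hypothetical; the self-supervised aspect is modeled by assuming pose labels come for free from the robot's own kinematics at data-collection time rather than from manual annotation.

```python
# Minimal sketch (hypothetical, not the paper's code): learn a mapping from
# tactile features to a planar object pose (x, y, theta). Labels are assumed
# to be recorded automatically by the robot at contact time, which is what
# would make the data collection self-supervised.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a dataset of grasps: each sample pairs a flattened
# tactile array (e.g., per-taxel pressures from both fingers) with the object
# pose at contact.
n_grasps, n_taxels = 2000, 128
tactile = rng.normal(size=(n_grasps, n_taxels))        # fake tactile features
pose = np.column_stack([
    rng.uniform(-0.05, 0.05, n_grasps),                # x offset (m)
    rng.uniform(-0.05, 0.05, n_grasps),                # y offset (m)
    rng.uniform(-np.pi / 4, np.pi / 4, n_grasps),      # in-plane rotation (rad)
])

X_train, X_test, y_train, y_test = train_test_split(
    tactile, pose, test_size=0.2, random_state=0
)

# Multi-output regression: one model predicts all three pose components.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
trans_err = np.linalg.norm(pred[:, :2] - y_test[:, :2], axis=1)
rot_err = np.abs(pred[:, 2] - y_test[:, 2])
print(f"mean translational error: {trans_err.mean():.4f} m")
print(f"mean rotational error:    {np.degrees(rot_err.mean()):.2f} deg")
```

On synthetic noise the reported errors are of course meaningless; the point is only the shape of the pipeline the abstract implies: tactile features in, pose out, with labels supplied by the robot itself.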