Gaze-Guided 3D Hand Motion Prediction for Detecting Intent in Egocentric Grasping Tasks

Abstract

Human intention detection with hand motion prediction is critical for driving upper-extremity assistive robots. However, traditional methods relying on physiological signal measurements are restrictive and often lack environmental context. We propose a novel approach that integrates gaze information, historical hand motion sequences, and environmental object data to predict future sequences of intended hand poses, adapting dynamically to the assistive needs of the patient without prior knowledge of the intended object for grasping. Specifically, we use a vector-quantized variational autoencoder for robust hand pose encoding together with an autoregressive generative transformer for effective hand motion sequence prediction. We demonstrate the usability of these techniques in a pilot study with healthy subjects. To train and evaluate the proposed method, we collect a dataset consisting of various types of grasp actions on different objects from multiple subjects. Through extensive experiments, we demonstrate that the proposed method can successfully predict sequential hand movements. In particular, gaze information significantly enhances prediction capability, especially when fewer input frames are available, highlighting the potential of the proposed method for real-world applications.
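
To illustrate the architecture described above, the following is a minimal sketch, not the authors' implementation, assuming PyTorch and hypothetical dimensions (e.g., a 21-joint hand pose flattened to 63 values and a small gaze/object context vector). A VQ-VAE encodes each hand pose into a discrete code, and a causally masked transformer autoregressively predicts future code tokens conditioned on past tokens and the context.

```python
# Illustrative sketch only; module names, dimensions, and loss weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes=512, dim=64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z):
        # Nearest-codebook-entry lookup with a straight-through gradient estimator.
        d = torch.cdist(z, self.codebook.weight)           # (B, num_codes)
        idx = d.argmin(dim=-1)                              # (B,)
        z_q = self.codebook(idx)
        commit = F.mse_loss(z, z_q.detach()) + F.mse_loss(z.detach(), z_q)
        z_q = z + (z_q - z).detach()                        # straight-through estimator
        return z_q, idx, commit

class HandPoseVQVAE(nn.Module):
    """Encodes a flattened hand pose (e.g., 21 joints x 3 coords) into a discrete code."""
    def __init__(self, pose_dim=63, latent_dim=64, num_codes=512):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(pose_dim, 128), nn.ReLU(),
                                 nn.Linear(128, latent_dim))
        self.vq = VectorQuantizer(num_codes, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, pose_dim))

    def forward(self, pose):
        z = self.enc(pose)
        z_q, idx, commit = self.vq(z)
        recon = self.dec(z_q)
        loss = F.mse_loss(recon, pose) + 0.25 * commit      # reconstruction + commitment
        return recon, idx, loss

class AutoregressiveMotionTransformer(nn.Module):
    """Predicts the next hand-pose code token from past tokens plus a
    gaze/object context vector; a causal mask keeps it autoregressive."""
    def __init__(self, num_codes=512, dim=128, ctx_dim=16, n_layers=4, n_heads=4):
        super().__init__()
        self.tok = nn.Embedding(num_codes, dim)
        self.ctx = nn.Linear(ctx_dim, dim)
        layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(dim, num_codes)

    def forward(self, tokens, context):
        # tokens: (B, T) past code indices; context: (B, ctx_dim) gaze/object features.
        x = self.tok(tokens) + self.ctx(context).unsqueeze(1)
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1)).to(x.device)
        h = self.blocks(x, mask=mask)
        return self.head(h)                                  # next-token logits per step
```

At inference time, such a model would be run step by step: the most recent predicted token is appended to the input sequence, decoded back to a hand pose through the VQ-VAE decoder, and the loop repeats for the desired prediction horizon.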