Gaze-Based Activity Recognition with an LSTM


Abstract

Classification of sedentary activities using gaze-tracking data can be of great use in fields such as teaching, human-computer interaction, and surveillance. Conventional machine learning methods such as k-nearest neighbours, random forests, and support vector machines can classify such activities, but they require domain knowledge to extract features. Deep learning methods such as the long short-term memory (LSTM) neural network do not require manual feature extraction and are therefore more accessible. To test the feasibility of such deep learning models, this paper answers the question: can an LSTM be used for gaze-based activity recognition? It was found that an LSTM is highly suitable for user-dependent test data, with an average accuracy of 96.61%. For user-independent test data the LSTM is less suitable, with an average accuracy of 43.34%.
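To make the approach concrete, the sketch below shows how an LSTM can classify an activity from a sequence of raw gaze samples without hand-crafted features. All shapes, weights, and the number of classes here are hypothetical illustrations, not the paper's actual model: a minimal NumPy forward pass, assuming 2-D gaze coordinates as input and a prediction taken from the final hidden state.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: input x (D,), hidden state h (H,), cell state c (H,)."""
    H = h.shape[0]
    z = W @ x + U @ h + b           # stacked gate pre-activations, shape (4H,)
    i = sigmoid(z[0 * H:1 * H])     # input gate
    f = sigmoid(z[1 * H:2 * H])     # forget gate
    g = np.tanh(z[2 * H:3 * H])     # candidate cell update
    o = sigmoid(z[3 * H:4 * H])     # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def classify_sequence(seq, W, U, b, W_out, b_out):
    """Run the LSTM over a gaze sequence; classify from the last hidden state."""
    H = U.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    for x in seq:
        h, c = lstm_step(x, h, c, W, U, b)
    logits = W_out @ h + b_out
    return int(np.argmax(logits))   # index of the predicted activity class

# Hypothetical sizes: D=2 (gaze x, y), H=8 hidden units, C=4 activity classes.
rng = np.random.default_rng(0)
D, H, C = 2, 8, 4
W = rng.normal(scale=0.1, size=(4 * H, D))
U = rng.normal(scale=0.1, size=(4 * H, H))
b = np.zeros(4 * H)
W_out = rng.normal(scale=0.1, size=(C, H))
b_out = np.zeros(C)

seq = rng.normal(size=(50, D))      # 50 synthetic gaze samples (x, y)
print(classify_sequence(seq, W, U, b, W_out, b_out))
```

With random weights the prediction is of course arbitrary; in practice the parameters would be learned end-to-end, which is precisely why no manual feature extraction is needed.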