Low-Power Gesture Recognition Using Convolutional Neural Networks and Ambient Lighting
Abstract
This paper presents a study on developing an efficient signal processing pipeline and identifying suitable machine learning models for real-time gesture recognition, using a testbed consisting of an Arduino Nano 33 BLE and three OPT101 photodiodes. Our research aims to address the challenge of limited computational power whilst maintaining high inference accuracy.
Experiments were conducted to optimise the signal processing and to explore various machine learning model architectures, centred on convolutional neural networks. The data for these experiments was gathered by recording a dataset of gestures from left- and right-handed participants. We took ethical considerations regarding participant recruitment and data security into account, and balanced the dataset between left- and right-handed participants as far as possible.
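As an illustration of how such a testbed could acquire its input, the sketch below samples three photodiodes on analogue pins and assembles a fixed-length 2D window of readings. The pin assignments (A0–A2), the sampling rate, and the window length are assumptions made for illustration and are not taken from the paper.

```cpp
// Hypothetical acquisition sketch for an Arduino Nano 33 BLE with three
// OPT101 photodiodes on analogue pins A0-A2. Pin mapping, sampling rate
// and window length are illustrative assumptions, not the paper's values.
const int kPins[3] = {A0, A1, A2};
const int kWindowLen = 50;            // samples per gesture window (assumed)
const unsigned long kPeriodMs = 10;   // ~100 Hz sampling (assumed)

float window[kWindowLen][3];          // 2D window: time steps x photodiodes
int sampleIdx = 0;
unsigned long lastSampleMs = 0;

void setup() {
  Serial.begin(115200);
  analogReadResolution(12);           // Nano 33 BLE supports 12-bit ADC reads
}

void loop() {
  unsigned long now = millis();
  if (now - lastSampleMs < kPeriodMs) return;
  lastSampleMs = now;

  // Read all three photodiodes and normalise to [0, 1].
  for (int ch = 0; ch < 3; ch++) {
    window[sampleIdx][ch] = analogRead(kPins[ch]) / 4095.0f;
  }

  if (++sampleIdx == kWindowLen) {
    sampleIdx = 0;
    // A full 2D window is now available for pre-processing and inference.
    Serial.println("window ready");
  }
}
```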
We obtained accurate gesture recognition results, surpassing the goal of a 75% success rate. Our machine learning models, trained on pre-processed 2D data, achieved near real-time inference times while running on the resource-constrained Arduino Nano 33 BLE.
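As a minimal sketch of how on-device inference with a converted CNN could look, the snippet below uses TensorFlow Lite for Microcontrollers on the Arduino. The model array `g_gesture_model`, the tensor arena size, and the classification helper are assumptions for illustration; the exact headers and interpreter constructor depend on the library version and are not described in the abstract.

```cpp
// Hypothetical on-device inference using TensorFlow Lite for
// Microcontrollers. g_gesture_model and the arena size are illustrative
// assumptions; header names and the interpreter constructor vary between
// library versions.
#include <TensorFlowLite.h>
#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "gesture_model.h"  // flatbuffer exported from the trained CNN (assumed)

constexpr int kTensorArenaSize = 20 * 1024;
uint8_t tensor_arena[kTensorArenaSize];
tflite::MicroInterpreter* interpreter = nullptr;

void setup() {
  const tflite::Model* model = tflite::GetModel(g_gesture_model);
  static tflite::AllOpsResolver resolver;
  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kTensorArenaSize);
  interpreter = &static_interpreter;
  interpreter->AllocateTensors();
}

// Runs the CNN on one pre-processed 2D window and returns the index of the
// highest-scoring gesture class, or -1 on failure.
int classifyWindow(const float* window, int length) {
  TfLiteTensor* input = interpreter->input(0);
  for (int i = 0; i < length; i++) input->data.f[i] = window[i];
  if (interpreter->Invoke() != kTfLiteOk) return -1;

  TfLiteTensor* output = interpreter->output(0);
  int numClasses = output->dims->data[output->dims->size - 1];
  int best = 0;
  for (int c = 1; c < numClasses; c++) {
    if (output->data.f[c] > output->data.f[best]) best = c;
  }
  return best;
}

void loop() {
  // An acquisition loop such as the one sketched earlier would fill a 2D
  // window here and pass it to classifyWindow().
}
```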
The findings of this study contribute to the field of gesture recognition by providing insights into efficient signal processing techniques and by identifying suitable machine learning models for resource-constrained devices. The developed system can be applied in areas ranging from games to healthcare. Furthermore, we contribute a dataset that can be used for further research.