The traumatic loss of a hand is a horrific experience, usually followed by significant psychological, functional and rehabilitation challenges. Despite considerable progress over the past decades, restoring the functionality of the human hand with a prosthesis remains far from achieved. Autonomous prosthetic hands have shown promising results and wide potential benefits, benefits that have yet to be fully explored. Here, we hypothesized that the combination of a radar sensor and a low-resolution time-of-flight camera can provide sufficient spatial and temporal information for deep learning algorithms to detect object shapes and materials in both static and dynamic scenarios. To test this hypothesis, we analysed HANDdata, a recent human-object interaction dataset with a particular focus on reach-to-grasp actions, using both established and novel deep learning algorithms. The offline analyses reported here showed great potential for recognizing both static and dynamic object characteristics in support of autonomous grasping. The results suggest that modern, low-power radar could be a key technology for next-generation intelligent and autonomous prostheses.