This study proposes a multi-modal feedback system to guide humans toward ergonomic poses. Several studies have developed methods that alert subjects when they cross biomechanical or ergonomic thresholds during a task, but few have attempted to efficiently guide users back to ergonomic positions after alerting them. We propose a multi-modal feedback system comprising visual feedback and speech-based audio feedback, and hypothesize that it will outperform other feedback modalities when guiding users from one pose to another. We conducted two sets of experiments: a comparative study of audio-only, visual-only, and the proposed multi-modal feedback, to determine which modality is most effective at guiding humans through pose corrections; and a comparative study of two types of speech-based audio feedback, in joint space and in end-point space, to motivate our choice of the more suitable one for the proposed system.
Speech-based feedback in joint space emerged as the preferred audio feedback because it allows users to perform efficient, coordinated inter-joint movements, especially in cases of high redundancy. The proposed multi-modal feedback system demonstrated its advantage over the other feedback modalities: it matched the benchmark visual feedback on objective measures and surpassed it on subjective measures, successfully combining the strengths of audio and visual feedback while avoiding their limitations.