In biomechanics, human movement studies are carried out to assess a subject’s kinematic and kinetic variables for a healthy gait. Currently, marker-based systems are the standard method for extracting the kinematic variables of subjects. Marker-based systems pose some serious challenges, such as cost and portability; the calibration and synchronization of multiple cameras and sensors are among the other practical challenges. AI techniques, often referred to as markerless pose estimation methods, can overcome these challenges and aid biomechanists and clinicians. Thus, there is a need to develop new deep-learning models that can regress the musculoskeletal model directly from images and videos. However, deep-learning models depend on the quality and quantity of training data. In the current scenario, training data for markerless pose estimation still depend on the redundant marker-based systems, and the challenges persist. To address this, it is necessary to create a statistical human model, or skinned human animated motion derived from a biomechanical model, to build more training data. Deep-learning models can then be trained on the skinned virtual data consisting of realistic movements. Therefore, the aim of this research was to build a pipeline that develops a human-animated model from a musculoskeletal model, i.e., the OpenSim model. Two different motions, walking and running, are illustrated as qualitative results. The gait patterns for the walking and running motions appear realistic in both the frontal and sagittal planes. Furthermore, the deep-learning model (D3KE) built by Marian et al. was evaluated on the animated human motions (e.g., the walking motion) from the above pipeline to validate the model. The performance of D3KE is evaluated for different camera-view planes, and the upper and lower extremities are compared.
The evaluation and comparison are based on two metrics: MAEangles (Mean Absolute Error of angles, in radians) and MPBLPE (Mean Per Bony Landmark Position Error, in cm). Both MAEangles and MPBLPE are better when the frontal plane, rather than the sagittal plane, is used as the plane of view. Moreover, the joint angles in the upper extremity show better results than those in the lower extremity. However, the predicted joint angles still deviate considerably from the ground truth. This motivates a feasibility study on optimizing the joint angles with a pixel-loss refinement technique. The findings and remarks on the pixel-loss refinement are tabulated in the results.
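As a minimal sketch of how the two metrics can be computed, assuming predictions and ground truth are supplied as NumPy arrays (joint angles in radians; bony-landmark coordinates in cm), the function names and array shapes below are illustrative assumptions, not the evaluation code used in the study:

```python
import numpy as np

def mae_angles(pred_angles, gt_angles):
    """Mean Absolute Error over joint angles, in radians.

    pred_angles, gt_angles: arrays of shape (n_frames, n_joints).
    """
    diff = np.asarray(pred_angles) - np.asarray(gt_angles)
    return float(np.mean(np.abs(diff)))

def mpblpe(pred_landmarks, gt_landmarks):
    """Mean Per Bony Landmark Position Error, in cm.

    pred_landmarks, gt_landmarks: arrays of shape (n_frames, n_landmarks, 3),
    holding 3-D landmark positions. The error is the Euclidean distance per
    landmark, averaged over all landmarks and frames.
    """
    diff = np.asarray(pred_landmarks) - np.asarray(gt_landmarks)
    return float(np.mean(np.linalg.norm(diff, axis=-1)))

# Toy usage with one frame of three joint angles and two landmarks.
pred_a = np.array([[0.10, 0.52, 1.00]])
gt_a = np.array([[0.12, 0.50, 0.95]])
err_a = mae_angles(pred_a, gt_a)  # mean of |−0.02|, |0.02|, |0.05|

pred_l = np.array([[[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]]])
gt_l = np.array([[[0.0, 3.0, 4.0], [10.0, 0.0, 0.0]]])
err_l = mpblpe(pred_l, gt_l)  # distances 5.0 and 0.0, averaged
```

A lower value is better for both metrics; averaging over all frames and joints (or landmarks) gives a single summary number per camera view, which is how the frontal- versus sagittal-plane comparison above can be made.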