Introduction
Vocal expression of emotions is essential for social communication. Research has shown that children with impaired hearing have difficulty identifying emotions accurately, which can limit their prosodic expression. Research on the ability of hearing-impaired children to express emotions through speech is limited, and existing studies have focused on basic prosodic features rather than other important characteristics of speech, such as spectral features. This study aims to investigate differences in vocal emotion expression between children with normal hearing (NH children), children with cochlear implants (CI children), and children with hearing aids (HA children) using more advanced speech analysis.
Methods
Two analyses were performed. For the first, a machine learning (ML) model was developed to recognize emotions (happy, sad, angry) from speech. Prosodic features (pitch, intensity, and duration) and spectral features (MFCCs) were extracted from speech utterances. The model used a Support Vector Machine (SVM) classifier and was trained and tested separately for each group (NH children, CI children, HA children) and for all children with impaired hearing pooled together (HI children). For the second analysis, pitch and intensity contours were constructed.
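To make the pipeline concrete, the following is a minimal sketch of both analyses, assuming librosa for audio analysis and scikit-learn for the SVM. The specific feature statistics, pitch range, kernel choice, and the names utterance_paths and labels are illustrative assumptions, not the study's exact configuration.

```python
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score


def extract_features(path):
    """Summarize one utterance as a fixed-length feature vector."""
    y, sr = librosa.load(path, sr=None)

    # Prosodic features: pitch (F0), intensity (RMS energy), duration.
    f0, voiced, _ = librosa.pyin(y, fmin=65.0, fmax=2093.0, sr=sr)
    f0 = f0[voiced]                               # keep voiced frames only
    rms = librosa.feature.rms(y=y)[0]             # frame-wise intensity proxy
    duration = librosa.get_duration(y=y, sr=sr)

    # Spectral features: mean and spread of 13 MFCCs across frames.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

    return np.hstack([
        np.mean(f0), np.std(f0),                  # pitch statistics
        rms.mean(), rms.std(),                    # intensity statistics
        duration,
        mfcc.mean(axis=1), mfcc.std(axis=1),
    ])


def pitch_contour(path):
    """Frame-wise pitch track for the second (contour) analysis."""
    y, sr = librosa.load(path, sr=None)
    f0, _, _ = librosa.pyin(y, fmin=65.0, fmax=2093.0, sr=sr)
    return librosa.times_like(f0, sr=sr), f0


# Hypothetical inputs: one path and one label (happy/sad/angry) per
# utterance, restricted to a single group (NH, CI, HA, or pooled HI).
X = np.vstack([extract_features(p) for p in utterance_paths])
y = np.array(labels)

# SVM classifier; in the study this fit is repeated per group.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

Standardizing the features before the RBF-kernel SVM matters here because pitch (Hz), intensity, duration (s), and MFCCs live on very different scales.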
Results
A total of 828 speech utterances were used from 69 NH children, 420 from 35 CI children, and 624 from 52 HA children. The models achieved an overall accuracy of 57.3%, 35.7%, 32.3%, and 34.6% for NH children, CI children, HA children, and HI children, respectively. Final model AUC-ROC values for NH children indicated acceptable performance, while AUC-ROC values for HI children indicated poor performance, equivalent to random guessing. The contour analysis supported these outcomes: the pitch and intensity contours of NH children showed clear differentiation between emotions, in contrast to those of CI and HA children.
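For a three-class task, a macro-averaged one-vs-rest AUC-ROC near 0.5 corresponds to chance-level guessing. A minimal sketch of how such a value might be computed with scikit-learn; clf is the fitted pipeline from the sketch above, and X_test/y_test are a hypothetical held-out split.

```python
from sklearn.metrics import roc_auc_score

# clf must expose class probabilities (SVC(probability=True) above).
probs = clf.predict_proba(X_test)
auc = roc_auc_score(y_test, probs, multi_class="ovr", average="macro")
print(f"macro OvR AUC-ROC: {auc:.3f}")  # ~0.5 = random guessing
```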
Conclusion
The findings of this study highlight significant differences in both global and local features of emotional expression in speech between NH children and HI children. The ML models classified emotions expressed by NH children more accurately than those expressed by HI children. Moreover, pitch and intensity contours showed that NH children exhibited distinct patterns when expressing emotions, whereas HI children did not. In conclusion, HI children may face challenges in expressing emotions through speech.