Exploring Personal Memories and Video Content as Context for Facial Behavior in Predictions of Video-Induced Emotions
Abstract
Empirical evidence suggests that the emotional meaning of facial behavior in isolation is often ambiguous in real-world conditions. While humans complement their interpretations of others' faces with additional reasoning about context, automated approaches rarely display such context-sensitivity. Empirical findings indicate that the personal memories triggered by videos are crucial for predicting viewers' emotional responses to those videos, in some cases even more so than the videos' audiovisual content. In this article, we explore the benefits of personal memories as context for facial behavior analysis. We conduct a series of multimodal machine learning experiments combining the automatic analysis of video viewers' faces with that of two types of context information for affective predictions: (1) self-reported free-text descriptions of triggered memories and (2) a video's audiovisual content. Our results demonstrate that both sources of context provide models with information about variation in viewers' affective responses, complementing facial analysis and each other.
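As a rough illustration of how combining facial behavior with such context might look in code, the following is a minimal late-fusion sketch, not the authors' actual pipeline: per-viewer feature vectors from three modalities (facial behavior, memory-text embeddings, and audiovisual video descriptors) are concatenated and fed to a single regressor predicting a continuous affect rating. All feature dimensions and data here are synthetic placeholders.

```python
# Minimal late-fusion sketch (illustrative only; synthetic placeholder data).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples = 200  # one row per viewer-video pair

# Placeholder features for each modality.
face_feats = rng.normal(size=(n_samples, 32))    # e.g., facial action unit activations
memory_feats = rng.normal(size=(n_samples, 64))  # e.g., embeddings of free-text memory descriptions
video_feats = rng.normal(size=(n_samples, 48))   # e.g., audiovisual content descriptors
affect = rng.normal(size=n_samples)              # e.g., self-reported valence rating

# Feature-level fusion: concatenate all modalities and fit one regressor.
X_fused = np.hstack([face_feats, memory_feats, video_feats])
model = Ridge(alpha=1.0)
scores = cross_val_score(model, X_fused, affect, cv=5, scoring="r2")
print(f"Mean cross-validated R^2: {scores.mean():.3f}")
```

In practice, one would compare such a fused model against unimodal baselines (faces only, memories only, video only) to quantify how much each context source contributes beyond facial analysis alone.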