Exploring Personal Memories and Video Content as Context for Facial Behavior in Predictions of Video-Induced Emotions


Abstract

Empirical evidence suggests that the emotional meaning of facial behavior in isolation is often ambiguous in real-world conditions. While humans complement interpretations of others' faces with additional reasoning about context, automated approaches rarely display such context-sensitivity. Empirical findings indicate that the personal memories triggered by videos are crucial for predicting viewers' emotional responses to such videos, in some cases even more so than the videos' audiovisual content. In this article, we explore the benefits of personal memories as context for facial behavior analysis. We conduct a series of multimodal machine learning experiments combining the automatic analysis of video viewers' faces with that of two types of context information for affective predictions: (1) self-reported free-text descriptions of triggered memories and (2) a video's audiovisual content. Our results demonstrate that both sources of context provide models with information about variation in viewers' affective responses, complementing both facial analysis and each other.
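As a rough illustration of the kind of multimodal setup the abstract describes, the Python sketch below fuses three feature sets (facial behavior, a video's audiovisual content, and embedded memory descriptions) by concatenation and fits a simple regressor on an affect rating. This is not the paper's actual pipeline: all data here is synthetic, and every feature name, dimensionality, text-embedding choice (TF-IDF), and model choice (ridge regression) is an assumption made for the example.

```python
# Hypothetical early-fusion sketch: concatenate per-sample features from
# three modalities and regress an affect rating (e.g., valence).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 200  # number of (viewer, video) samples -- synthetic

# Modality 1: facial behavior features of the viewer
# (e.g., action-unit activations; 17 dims is an arbitrary choice).
face_feats = rng.normal(size=(n, 17))

# Modality 2: audiovisual features of the watched video.
av_feats = rng.normal(size=(n, 32))

# Modality 3: free-text memory descriptions, embedded with TF-IDF here.
memory_texts = ["a childhood trip to the sea"] * (n // 2) + \
               ["an argument I regret"] * (n - n // 2)
text_feats = TfidfVectorizer().fit_transform(memory_texts).toarray()

# Synthetic valence ratings to regress against.
valence = rng.normal(size=n)

# Early fusion: concatenate all modalities into one feature vector.
X = np.hstack([face_feats, av_feats, text_feats])
X_tr, X_te, y_tr, y_te = train_test_split(X, valence, random_state=0)

model = Ridge().fit(X_tr, y_tr)
print("test MSE:", mean_squared_error(y_te, model.predict(X_te)))
```

Dropping one of the feature blocks from the concatenation gives a face-only, context-only, or two-modality baseline, which is the kind of comparison needed to measure how much each context source adds over facial analysis alone.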
