Modelling retrospective evaluation of situational interdependence in conversations from its estimated real-time evaluation
Abstract
Understanding how users retrospectively evaluate their interactions with adaptive intelligent systems is crucial to improving system behaviour during those interactions. Prior work has shown the potential to predict retrospective evaluations from different real-time aspects of conversations, such as verbal cues and non-verbal behaviours. However, the relationship between retrospective evaluations and the real-time, in-the-moment evaluations of a conversation remains unclear. This study investigates the relationship between real-time evaluations of a situation, using the Situational Interdependence Scale (SIS) framework, and its retrospective evaluations. We test for the presence of the peak-end rule and for a more complex relationship modelled with a Long Short-Term Memory (LSTM) network, for each SIS dimension, using the PACO dataset. Because ground truth for real-time SIS evaluations is absent, we also present a methodologically sound technical approach that uses a Large Language Model (LLM) to estimate a value for each SIS dimension for each spoken utterance in a conversation. Our experiments revealed no evidence of either the peak-end rule or an LSTM-modelled relationship across any dimension of SIS. However, both types of models still predict retrospective evaluations better than the simple average of the estimated real-time evaluations. This may be largely due to the inaccuracy of the estimated real-time SIS evaluations and the LLM's limited capability to label real-time SIS in conversational data. Future work may focus on improving the annotation of real-time SIS evaluations through human annotation or human-supervised few-shot learning with an LLM, on using other modalities in combination with verbal content, and on exploring other predictive models.
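The two simplest predictors the abstract contrasts can be sketched as follows. This is an illustrative simplification, not the paper's implementation: the function names, the toy per-utterance scores, and the choice of the most extreme value as the "peak" are assumptions for demonstration only.

```python
def peak_end_prediction(scores):
    """Peak-end rule: predict the retrospective evaluation as the
    average of the most intense (peak) and the final (end)
    real-time evaluations of the conversation."""
    peak = max(scores, key=abs)  # most extreme momentary score (assumption)
    end = scores[-1]             # score of the final utterance
    return (peak + end) / 2.0


def mean_baseline(scores):
    """Baseline the paper compares against: predict the retrospective
    evaluation as the plain average of all real-time evaluations."""
    return sum(scores) / len(scores)


# Hypothetical estimated per-utterance scores for one SIS dimension
utterance_scores = [0.2, 0.8, 0.5, 0.1, 0.4]
print(peak_end_prediction(utterance_scores))  # (0.8 + 0.4) / 2 = 0.6
print(mean_baseline(utterance_scores))        # 0.4
```

In the study, sequences like `utterance_scores` would come from the LLM's per-utterance SIS estimates, and each predictor's output would be compared against the user's actual retrospective SIS rating.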