Performance analysis of the state-of-the-art NLP models for predicting moral values
Abstract
Moral values are instrumental in understanding people's beliefs and behaviors. Estimating such values from text would facilitate the interaction between humans and computers. To date, no comparison of NLP models for predicting moral values from text exists. This paper addresses that gap by comparing an LSTM with more recent models, BERT and fastText, to evaluate their ability to predict moral values. The Twitter Corpus, a collection of 35,000 tweets covering relevant recent political and social events, is chosen for this purpose. The results show that the newer models outperform the long-established LSTM. BERT proves to be the best model for this task, but its long training times hinder its practicality. By contrast, fastText offers similar performance while being orders of magnitude faster.
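To make the comparison concrete, the sketch below shows how a fastText classifier might be trained on tweets labelled with moral values. This is not the authors' pipeline; the file names, label set, and hyperparameters are illustrative assumptions, and only fastText's documented supervised API is used.

```python
# Minimal sketch (assumed setup, not the paper's exact configuration):
# supervised fastText classification of moral-value labels on tweets.
import fasttext

# fastText's supervised input format is one example per line, with labels
# prefixed by "__label__", e.g.:
#   __label__care thoughts and prayers for the victims
model = fasttext.train_supervised(
    input="moral_tweets_train.txt",  # hypothetical training file
    epoch=25,                        # illustrative hyperparameters
    lr=0.5,
    wordNgrams=2,
    loss="ova",                      # one-vs-all, since a tweet may express several values
)

# Evaluate on a held-out file: returns (number of examples, precision@1, recall@1)
n, precision, recall = model.test("moral_tweets_valid.txt")
print(f"P@1={precision:.3f}  R@1={recall:.3f}  ({n} tweets)")

# Predict the top moral-value labels for an unseen tweet
labels, probs = model.predict("we must protect the most vulnerable", k=2)
print(labels, probs)
```

Training a model of this kind typically takes seconds to minutes on a CPU, which is the practicality advantage over BERT that the abstract refers to.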