Moral values are often used as guidelines for human behaviour. The ability to identify moral values is important for social and ethical artificial intelligence. We address the difficulties of using contemporary natural language processing (NLP) techniques to classify moral values in texts. Since the classification of moral values is subjective, it is often difficult to argue for the existence of a `ground truth' label. In such circumstances, we can learn from the (dis-)agreement among multiple annotators. However, consulting every annotator is expensive, especially when working with crowdsourcing. One way to reduce the annotation cost is active learning, which uses query strategies to choose the data and annotator that are most valuable to consult. Further, to account for subjectivity, we want to ensure the dataset is labelled by a diverse set of annotators. We therefore propose an annotator selection method for active learning. Given an unlabelled text, this method selects the annotator who has labelled the fewest texts similar to it. The evaluation results show that the method performs better on datasets with a balanced annotator distribution.
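The selection rule described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the embedding representation, the cosine-similarity measure, and the `threshold` parameter for deciding when two texts count as "similar" are all assumptions made for the sketch.

```python
import numpy as np

def select_annotator(text_vec, annotator_histories, threshold=0.5):
    """Pick the annotator who has labelled the fewest texts similar to text_vec.

    text_vec: embedding vector of the unlabelled text.
    annotator_histories: dict mapping annotator id -> list of embedding
        vectors of texts that annotator has already labelled.
    Similarity measure (cosine) and threshold are illustrative assumptions.
    """
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    counts = {}
    for annotator, history in annotator_histories.items():
        # Count how many of this annotator's labelled texts are "similar".
        counts[annotator] = sum(
            1 for vec in history if cosine(text_vec, vec) >= threshold
        )
    # Choose the annotator with the fewest similar labelled texts,
    # spreading similar items across a diverse set of annotators.
    return min(counts, key=counts.get)
```

For example, if annotator `a` has already labelled two texts close to the query and annotator `b` none, the rule routes the query to `b`, encouraging a more diverse distribution of annotators over similar texts.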