LLM of Babel: Evaluation of LLMs on code for non-English use-cases


Abstract

This paper evaluates the performance of Large Language Models, specifically StarCoder 2, on non-English code summarization, with a focus on the Greek language. Using an open-coding approach, we establish a hierarchical error taxonomy to improve the understanding of Large Language Models in multilingual settings, and we identify challenges associated with tokenization and with the influence of mathematical training datasets. Our study includes a comprehensive analysis of error types, tokenization efficiency, and quantitative metrics such as BLEU, ROUGE, and semantic similarity. The findings highlight semantic similarity as a reliable performance metric and suggest the need for more inclusive tokenizers and training datasets to address the limitations and errors observed in non-English contexts.
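The abstract names two measurements without showing how they might be computed, so the following minimal sketch makes them concrete. It assumes Python with the `transformers` and `sentence-transformers` libraries and access to the Hugging Face Hub; the model ids and the Greek sample strings are illustrative assumptions, not taken from the paper. The first part compares how many tokens the tokenizer spends on an English comment versus its Greek translation; the second scores a generated Greek summary against a reference by embedding cosine similarity.

```python
from sentence_transformers import SentenceTransformer, util
from transformers import AutoTokenizer

# --- Tokenization efficiency ----------------------------------------------
# BPE tokenizers trained mostly on English text and code often fragment
# non-Latin scripts into many more byte-level pieces. Model id is assumed.
tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder2-3b")

english = "# Returns the sum of all even numbers in the list"
greek = "# Επιστρέφει το άθροισμα όλων των ζυγών αριθμών στη λίστα"

for label, text in [("English", english), ("Greek", greek)]:
    n_tokens = len(tokenizer.encode(text))
    # Tokens per character ("fertility"): higher means heavier fragmentation.
    print(f"{label}: {n_tokens} tokens ({n_tokens / len(text):.2f} per char)")

# --- Semantic similarity ----------------------------------------------------
# Unlike n-gram overlap (BLEU/ROUGE), embedding cosine similarity does not
# penalize valid paraphrases. A multilingual encoder handles Greek input.
embedder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
reference = "Η συνάρτηση ταξινομεί τη λίστα κατά αύξουσα σειρά."
generated = "Ταξινομεί τα στοιχεία της λίστας σε αύξουσα σειρά."
e_ref, e_gen = embedder.encode([reference, generated])
print(f"semantic similarity: {util.cos_sim(e_ref, e_gen).item():.3f}")
```

Printing tokens-per-character for both languages makes the tokenizer's bias directly visible, and the similarity score illustrates why the paper treats embedding-based comparison as more forgiving of reworded but correct summaries than surface-overlap metrics.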

Files

LLM_of_Babel_Paris.pdf
(PDF | 1.24 MB)
Unknown license