Evaluating the effectiveness of large language models in meeting summarization with transcript segmentation techniques

How well does gpt-3.5-turbo perform on meeting summarization with topic and context-length window segmentation?

Abstract

Large Language Models (LLMs) have brought significant performance improvements on many Natural Language Processing tasks, but they have not yet been evaluated on meeting summarization. This research paper examines the effectiveness of the gpt-3.5-turbo model on the meeting summarization task. Because of the model's input length limitation, it cannot be applied directly to full meeting transcripts, so the paper investigates two segmentation methods: a simple context-length window approach and topic segmentation using Latent Dirichlet Allocation (LDA). The context-length window approach performs close to the Pointer-Generator framework, while topic segmentation yields worse results. Overall, gpt-3.5-turbo performs worse with both segmentation approaches than state-of-the-art models that use transformer architectures adapted for long documents.
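The context-length window approach mentioned above can be sketched as greedy chunking: consecutive utterances are packed into segments that each fit within the model's context window, and each segment is summarized separately. This is only an illustrative sketch, not the paper's implementation; the word-count token proxy and the `max_tokens` budget are assumptions (a real pipeline would count tokens with the model's actual tokenizer).

```python
def segment_by_window(utterances, max_tokens=3000):
    """Greedily pack consecutive transcript utterances into segments
    whose approximate token count stays under max_tokens.

    Sketch only: uses a crude word-count proxy for tokens; the budget
    of 3000 is a hypothetical value, not the paper's setting.
    """
    segments, current, count = [], [], 0
    for utt in utterances:
        n = len(utt.split())  # word count as a rough token estimate
        if current and count + n > max_tokens:
            # current segment is full: emit it and start a new one
            segments.append(" ".join(current))
            current, count = [], 0
        current.append(utt)
        count += n
    if current:
        segments.append(" ".join(current))
    return segments
```

Each resulting segment would then be sent to the model for summarization, and the per-segment summaries concatenated or summarized again to produce the final meeting summary.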