Leveraging Efficient Transformer Quantization for CodeGPT: A Post-Training Analysis

Abstract

The significant advancements in large language models have enabled their use in various applications, such as code auto-completion. However, deploying such models is often challenging due to their large size and prohibitive running costs. In this research, we investigate the effectiveness of post-training quantization techniques in compressing a CodeGPT model, specifically the "Per-embedding-group" and "Mixed precision" post-training quantization methods. We evaluate on the code completion task of the CodeXGLUE benchmark using the Edit Similarity and Exact Match metrics, giving a comprehensive picture of how post-training quantization affects model accuracy. We also compare our results with three other compression approaches for the same model. From our analysis, we find that CodeGPT is very resilient to quantization noise, allowing the model to be compressed to a quarter of its original size with negligible accuracy loss. Furthermore, post-training quantization appears to be the best option for compressing the CodeGPT model when accuracy is a priority. Our work only simulates post-training quantization to draw conclusions about its effect on accuracy; future work should analyze the inference speed and runtime memory use of an actually quantized model.
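To make the simulated ("fake") quantization setup more concrete, the sketch below shows what uniform post-training quantization with one scale per group of embedding dimensions can look like in PyTorch. It assumes symmetric 8-bit rounding and applies the quantization-dequantization step only to linear-layer weights; the function names (`fake_quantize_per_group`, `simulate_ptq`), the group count, and these choices are illustrative assumptions, not the exact procedure used in the thesis.

```python
# Minimal sketch of simulated per-embedding-group post-training quantization.
# The tensor stays in floating point; only the rounding error of low-bit
# quantization is injected, mirroring a simulation-based evaluation.
import torch


def fake_quantize_per_group(weight: torch.Tensor,
                            n_bits: int = 8,
                            n_groups: int = 16) -> torch.Tensor:
    """Quantize-dequantize `weight` with one scale per group of embedding
    columns (hypothetical helper, for illustration only)."""
    out_features, in_features = weight.shape
    assert in_features % n_groups == 0, "embedding dim must split evenly into groups"
    group_size = in_features // n_groups
    qmax = 2 ** (n_bits - 1) - 1  # symmetric signed range, e.g. 127 for 8 bits

    w = weight.view(out_features, n_groups, group_size)
    # One scale per (row, group): the group's max absolute value maps to qmax.
    scale = w.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / qmax
    w_q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax)
    return (w_q * scale).view(out_features, in_features)


def simulate_ptq(model: torch.nn.Module, n_bits: int = 8) -> None:
    """Overwrite every nn.Linear weight with its quantize-dequantize version,
    so downstream evaluation measures accuracy under quantization noise."""
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, torch.nn.Linear):
                module.weight.copy_(fake_quantize_per_group(module.weight, n_bits))
```

In such a setup, a pretrained CodeGPT checkpoint would be loaded as usual, `simulate_ptq` would be applied once, and the Edit Similarity and Exact Match metrics would then be computed on the perturbed model; an 8-bit simulation corresponds to the roughly fourfold compression discussed in the abstract.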

Files

CSE3000_ETF_Mauro_18.pdf
(pdf | 0.142 MB)
Unknown license