Beyond Acceptance Rates: The Impact of JetBrains AI Assistant and FLCC

Analysis of the behavior of users assisted by LLMs in 13 JetBrains IDEs

Abstract

LLM (Large Language Model) powered AI (Artificial Intelligence) assistants are a popular tool among programmers, but what impact do they have? In this thesis, we investigate two such tools designed by JetBrains: AI Assistant and FLCC (Full Line Code Completion).
We collected over 40 million actions, including edits, code executions, and typing, in the form of metric data from 26 thousand users. With this data, we examine how user behavior changes when users are assisted by AI Assistant or FLCC.

Users spent more time in their IDEs (Integrated Development Environments) and typed more when assisted. In most cases, we observe a decline in (manual) testing, or at best an equivalent level. We also ask how multi-language benchmarks relate to the acceptance rates users exhibit for the corresponding languages: there appears to be no clear correlation between benchmark results and what users accept from generations, though the available benchmarks are limited.

In the end, more research is required to put these results in perspective; for now, it remains an open question whether the observed changes in user behavior are positive or negative. Relating them to existing work, however, the impact on users of AI Assistant and FLCC appears mostly positive.