Automatic unit test generators, particularly search-based software testing (SBST) tools such as EvoSuite, efficiently generate unit test suites with acceptable coverage. Although this removes the burden of writing unit tests from developers, the generated tests are often hard for developers to comprehend. In my doctoral research, I aim to investigate strategies that make generated test cases more comprehensible and the resulting test suites more effective. To this end, I introduce four projects that leverage Capture/Replay and Large Language Model (LLM) techniques. Capture/Replay carves information from End-to-End (E2E) tests, enabling the generation of unit tests that contain meaningful test scenarios and real test data. Moreover, the growing capabilities of LLMs in language analysis and transformation can substantially improve readability. The proposed approach combines E2E test scenario extraction with LLM guidance to enhance test case understandability, increase coverage, and construct comprehensive mocks and test oracles. I plan to evaluate the quality of the generated tests through both a quantitative analysis and a user study, measuring executability, coverage, and understandability.