We investigate the potential for abuse of recent AI advances by developing seven malware programs and two attack tools using ChatGPT, OpenAI Playground's "text-davinci-003" model, and Auto-GPT, an open-source AI agent that generates automated prompts to accomplish user-defined goals. We confirm that: 1) Under the safety and moderation controls of recent AI systems, it is possible to generate functional malware and attack tools (up to about 400 lines of code) within 90 minutes, including debugging time. 2) Auto-GPT does not ease the hurdle of crafting the right prompts for malware generation, but its automatically generated prompts evade OpenAI's safety controls. When given goals with sufficient detail, it wrote the code for all nine of the malware programs and attack tools we tested. 3) There is still room to improve the moderation and safety controls of ChatGPT and the text-davinci-003 model, especially against the growing number of jailbreak prompts. Overall, we find that recent AI advances, including ChatGPT, Auto-GPT, and text-davinci-003, can generate malware and attack tools despite existing safety and moderation controls, highlighting the need for stronger safety measures in AI systems.