Generative AI services can produce generic text, impressive images, and even code in various programming languages. But when LLMs are used to churn out flawed or meaningless bug reports, the results can be genuinely harmful to a project's development.
Daniel Stenberg, the original author and lead developer of curl, recently wrote about the problematic impact of LLMs and AI models on projects. The Swedish programmer noted that the team runs a bug bounty program that rewards hackers who find security issues with real money, but superficial reports created with AI services are becoming a real problem.
To date, curl's bug bounty program has paid out $70,000 in rewards, Stenberg said. The project has received a total of 415 vulnerability reports, of which 77 were classified as "informational" and 64 were eventually confirmed as security issues. A significant share (66%) of the reported issues turned out to be neither security problems nor ordinary bugs.
Generative AI models are increasingly being used (or proposed) as a way to automate complex programming tasks, but LLMs are known for "hallucinations" and a remarkable ability to deliver meaningless results while sounding absolutely confident about their output. In Stenberg's own words, AI-assisted reports look better and seem to make sense, but "better garbage" is still garbage.
Stenberg said developers have to spend more time and effort on such reports before they can close them. AI-generated junk does not help the project at all: it drains developers' time and energy and keeps them from productive work. The curl team has to properly investigate every report, while AI models drastically reduce the effort needed to file reports about bugs that may not exist at all.
Stenberg cites two fake reports that were likely created with artificial intelligence. The first claimed to describe a real security flaw (CVE-2023-38545) before it had even been publicly disclosed, but it was full of "typical AI-style hallucinations": facts and details from old security issues were mixed together into something new that had no connection to reality.
Another recently filed report on HackerOne described a potential buffer overflow vulnerability in WebSocket handling. Stenberg tried to ask some questions about the report, but he eventually concluded that the vulnerability was not real and that he was most likely talking to an AI model rather than a real person.
Artificial intelligence can do "a lot of good things," the programmer said, but it can also be misused. In theory, LLMs could be trained to report security issues in a productive way, but such "good examples" have yet to appear. AI-generated reports will only become more common over time, Stenberg said, so teams must learn to spot the "AI-generated" signals earlier and quickly dismiss the false reports.