To the point of burnout: open source developers annoyed by AI bug reports
Open source maintainers are discovering that, on closer inspection, more and more bug reports turn out to be AI-generated nonsense.
(Image: Andrii Yalanskyi/Shutterstock.com)
At first glance, it's just a bug report from a friendly user: "Curl is software that I love and is an important tool for the world. If my report is not accurate, I apologize," writes the sender, who goes on to provide a well-structured analysis of a supposed security vulnerability, including code. Daniel Stenberg, maintainer of Curl and Libcurl, thanks him and asks follow-up questions about the submission. But then things get bizarre: in his replies, the humble sender becomes entangled in contradictions. It quickly becomes clear that an AI is at work here, reacting the way AI typically does when errors in its statements are pointed out.
Maintainers of well-known open source projects are seeing cases like this more and more frequently. Some, like Stenberg in an article in The Register, even speak of a flood of low-quality submissions. Unlike classic spam, however, AI reports are not always recognizable as such at first glance and have to be checked. This costs time and slows down the projects, which are often run by volunteers, several maintainers complain in blog posts.
Are maintainers at risk of burnout?
Seth M. Larson of the Python Software Foundation even fears that those who triage incoming bug reports will burn out in the long run if they have to deal with more and more AI spam. In a blog post, he argues that such reports should be treated as if they were submitted with malicious intent, even if the AI was actually used with the intention of saving work.
Larson advises projects to protect themselves: entry barriers such as CAPTCHAs could stop automated submissions, and limits on the number of reports could also help. He further suggests publishing the names of those who send in AI reports, so that shame prompts them to reconsider their actions.
Bug reporters should keep their hands off AI
According to Larson, bug reporters should refrain from using AI systems to make submissions, and they should not experiment on the volunteers of open source projects. As a general rule, he advises that no report be submitted that has not first been reviewed by a human: "That review time should be invested by you first, not by open source volunteers."
Curl maintainer Stenberg has been observing the trend toward AI-generated vulnerability reports for about a year. He finds it particularly annoying when no human even responds to follow-up questions and the AI is instead sent ahead to talk to the maintainer, with the result described above: such dialogs quickly descend into nonsense.
(mki)