[-] AkatsukiLevi@lemmy.world 44 points 2 days ago

I still don't get it, like, why tf would you use AI for this kind of thing? It can barely write a basic Python script, let alone actually handle a proper codebase or detect a vulnerability, even the most obvious vulnerability ever

[-] emzili@programming.dev 38 points 2 days ago

It's simple actually: curl has a bug bounty program where reporting even a minor legitimate vulnerability can land you a minimum of $540

[-] Taleya@aussie.zone 2 points 1 day ago

If they ever actually identify one, make a very public post stating that, as this was identified using AI, there will be no bounty paid.

What are the odds that you're actually going to get a bounty out of it? Seems unlikely that an AI would hallucinate an actually correct bug.

Maybe the people doing this are much more optimistic about how useful LLMs are for this than I am but it's possible that there's some more malicious idea behind it.

[-] psivchaz@reddthat.com 2 points 21 hours ago

AI could probably find the occasional actual bug. If you use AI to file 500 bug reports in the time it would take a researcher to find and report one, and only 2 of them pay out, you've still come out ahead.

But in the process, you've wasted tons of time for the developers, who have to actually sort through the reports, read them, and verify the validity of each issue. I think that's part of the problem. Even if it sometimes finds a legitimate issue, these people are trying to make it someone else's problem to do the real work.
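A rough back-of-the-envelope sketch of that math (the $540 minimum payout is the bounty figure cited above; the report volume, hit rate, and time costs are hypothetical numbers for illustration only):

```python
# Back-of-the-envelope economics of AI-spammed bug bounty reports.
# The $540 minimum payout is the curl bounty figure cited above; the
# report count, hit rate, and time costs are illustrative guesses.

MIN_PAYOUT = 540           # USD, minimum curl bounty mentioned above
AI_REPORTS = 500           # hypothetical: reports spammed per cycle
AI_HITS = 2                # hypothetical: reports that actually pay out
MINUTES_PER_AI_REPORT = 2  # hypothetical: prompt + paste + submit

spammer_income = AI_HITS * MIN_PAYOUT
spammer_hours = AI_REPORTS * MINUTES_PER_AI_REPORT / 60

# The cost lands on the maintainers instead: every report needs triage,
# whether or not it turns out to be real.
MINUTES_TO_TRIAGE = 30     # hypothetical: per-report review time
maintainer_hours = AI_REPORTS * MINUTES_TO_TRIAGE / 60

print(f"Spammer: ${spammer_income} for ~{spammer_hours:.0f}h of effort")
print(f"Maintainers: ~{maintainer_hours:.0f}h of triage for {AI_HITS} real bugs")
```

Under these made-up numbers the spammer nets $1080 for roughly 17 hours of low-skill effort, while the maintainers burn around 250 hours of triage, which is exactly the asymmetry the comment above describes.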

[-] BatmanAoD@programming.dev 2 points 1 day ago

The user who submitted the report that Stenberg considered the "last straw" seems to have a history of getting bounty payouts; I have no idea how many of those were AI-assisted, but it's possible that by using an LLM to automate making reports, they're making some money despite having a low success rate.

[-] CandleTiger@programming.dev 3 points 1 day ago

Maybe the people doing this are much more optimistic about how useful LLMs are for this than I am

Yes. That is the problem being reported in this article. There are many, many people who have complete and unblemished optimism about how useful LLMs are, to the point where they don't understand it's optimism and don't understand why other people won't take them seriously.

Some of them are professionals in related fields.

[-] massive_bereavement@fedia.io 37 points 2 days ago

Scenario: I wanna land a sweet security job, but I don't want to have to work for it.

[-] kadup@lemmy.world 10 points 2 days ago* (last edited 2 days ago)

We've already seen several scientific articles published and later found to have been generated with AI.

If somebody is willing to ruin their academic reputation, something that takes years to build, don't you think people are also using AI to cheat in job interviews and land high-paying IT jobs?

[-] milicent_bystandr@lemm.ee 4 points 2 days ago

I think it might be the developers of that AI, letting their system file bug reports to train it, seeing what works and what doesn't (as is the way with training AI), and not caring about the people hurt in the process.
