36
you are viewing a single comment's thread
view the rest of the comments
[-] einkorn@feddit.org 7 points 1 month ago

ChatGPT offered bomb recipes

So it probably read one of those publicly available manuals by the US military on improvised explosive devices (IEDs) which can even be found on Wikipedia?

[-] BussyGyatt@feddit.org 5 points 1 month ago* (last edited 1 month ago)

well, yes, but the point is they specifically trained chatgpt not to produce bomb manuals when asked. or thought they did; evidently that's not what they actually did. like, you can probably find people convincing other people to kill themselves on 4chan, but we don't want chatgpt offering assistance writing a suicide note, right?

[-] otter@lemmy.ca 2 points 1 month ago

specifically trained chatgpt not

Often this just means appending "do not say X" to the start of every message, which then breaks down when the user says something unexpected right afterwards

I think moving forward

  • companies selling generative AU need to be more honest about the capabilities of the tool
  • people need to understand that it's a very good text prediction engine being used for other tasks
[-] panda_abyss@lemmy.ca 3 points 1 month ago

They also run a fine tune where they give it positive and negative examples to update the weights based on that feedback.

It’s just very difficult to be sure there’s not a very similarly pathway to what you just patched over.

[-] spankmonkey@lemmy.world 2 points 1 month ago

It isn't very difficult, it is fucking impossible. There are far too many permutations to be manually countered.

[-] balder1991@lemmy.world 1 points 1 month ago

Not just that, LLMs behavior is unpredictable. Maybe it answers correctly to a phrase. Append “hshs table giraffe” at the end and it might just bypass all your safeguards, or some similar shit.

this post was submitted on 28 Aug 2025
36 points (97.4% liked)

Technology

75714 readers
101 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS