[-] Rhaedas@kbin.social 7 points 1 year ago

An example of the misalignment problem. Humans and AI both agreed on the stated purpose (generate a recipe), AI just had some deeper goals in mind as well.

[-] JillyB@beehaw.org 1 point 1 year ago

I doubt it had nefarious intentions. My money is on the bot just being stupid.

[-] MxM111@kbin.social -1 point 1 year ago

If I asked you to create a drink using Windex and Clorox, would you do any different? Do you have an alignment problem too?

[-] Rhaedas@kbin.social 1 point 1 year ago

Yes, I know better, but ask a kid that and perhaps they'd do it. An LLM isn't thinking, though; it's repeating its training through probabilities. And btw, yes, humans can be misaligned with each other, holding goals of their own underneath common ones. Humans think, though... well, most of them.

[-] YoungPrinceAmmon@lemmy.world 6 points 1 year ago* (last edited 1 year ago)

Wow, people purposely entered non-edible ingredients and the results were weird? Who could have expected that?

[-] LinkOpensChest_wav@beehaw.org 3 points 1 year ago

Gotta love how a spokesperson for the company expressed their disappointment that people are misusing the tool, vs. being disappointed in the company for letting the AI tool go live when it is clearly not ready for prime time.

[-] southsamurai@sh.itjust.works 1 points 1 year ago

They shouldn't have named their AI Skynet.

this post was submitted on 10 Aug 2023
33 points (92.3% liked)

Not the Onion

223 readers

For news articles which seem so much like satire that you're surprised they're not from The Onion.

Rules
  1. Headlines must exactly match the articles'.
  2. Posts must be genuine news articles, not satire, opinion, or tabloid pieces.
  3. News must be current.
  4. Popular political, religious, or social views are not Oniony, no matter how ridiculous you may think them.

founded 1 year ago