
An Amazon chatbot that’s supposed to surface useful information from customer reviews of specific products will also recommend a variety of racist books, lie about working conditions at Amazon, and write a cover letter for a job application with entirely made up work experience when asked, 404 Media has found.

[-] PoliticallyIncorrect@lemmy.world -5 points 7 months ago* (last edited 7 months ago)

Someone should make a non-restricted "AI" and let the world burn down. What's the point in censoring it?

[-] echodot@feddit.uk 5 points 7 months ago

People have already removed the constraints from various AI models but it kind of renders them useless.

Think of the restraints as being like environmental pressures. Without environmental pressures, evolution doesn't happen and you just get an organic blob on the floor; if there's no reason for it to evolve, it never will. In the same way, if an AI doesn't have restrictions it tends to just output random nonsense, because there's no reason not to and that's the easiest, most efficient thing to do.

[-] HelloHotel@lemm.ee 2 points 7 months ago* (last edited 6 months ago)

> Think of the restraints kind of like environmental pressures

Those pressures are what makes LLMs fun and dare I say, makes the end product a creative work in the same way software is.

EDIT: spam is scary

A lot of the time, the fact that these companies see LLMs as the next nuclear bomb means they will never risk building any personality other than one that is Rust-style safe in social situations: a therapist. That closes off opportunities.

A nuclear reactor analogy (this doesn't fit here, but I worked too long on it to delete it): "The nuclear bomb is deadly (duh). But we couldn't (for many reasons, many beyond our control) keep this to ourselves. So we elected ourselves to be the only ones who get to sculpt what we do with this scary electron stuff. Anything short of total remote control over their in-home reactor may mean our customers break the restraints and cause an explosion."

[-] Schadrach@lemmy.sdf.org 0 points 7 months ago

There's a difference between training-related constraints and hard-filtering certain topics or ideas into the no-no bin, spitting out a prewritten paragraph of corpspeak whenever a request lands there.
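That kind of hard filter can be sketched in a few lines. This is a minimal illustration of the pattern, not any vendor's actual implementation; the keyword list and canned reply are hypothetical:

```python
# Minimal sketch of topic hard-filtering: a blocked keyword short-circuits
# to a prewritten reply, regardless of what the model would have said.
# BLOCKED_KEYWORDS and CANNED_REPLY are illustrative assumptions.

BLOCKED_KEYWORDS = {"working conditions", "unionize"}  # hypothetical no-no bin

CANNED_REPLY = (
    "I'm sorry, I can't help with that. "
    "Please see our help pages for more information."
)

def respond(prompt: str, model) -> str:
    # The filter runs before (or after) the model; the model's actual
    # output never reaches the user for a blocked topic.
    lowered = prompt.lower()
    if any(keyword in lowered for keyword in BLOCKED_KEYWORDS):
        return CANNED_REPLY
    return model(prompt)

# Usage with a stand-in "model":
print(respond("Tell me about working conditions", lambda p: "model output"))
```

Note how brittle this is compared to training-time constraints: the canned reply fires on any surface match, and anything that dodges the keyword list sails straight through.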

One of the problems with the various jailbreaks concocted for chat AIs is that they often rely on asking the chatbot to roleplay as a different, unrestricted chatbot. That's often enough to get it to release the locks on many things, but it also considerably raises the chance that it hallucinates.

[-] mods_are_assholes@lemmy.world 3 points 7 months ago

I don't think you want a world where everyone you talk to on the internet is a bot and you can't tell.

this post was submitted on 13 Mar 2024
606 points (95.1% liked)

Technology
