[-] Jrockwar@feddit.uk 137 points 2 years ago

God this is equally terrible and hilarious 😂

For example, The Associated Press reported that an official Meta AI chatbot inserted itself into a conversation in a private Facebook group for Manhattan moms. It claimed it too had a child in school in New York City, but when confronted by the group members, it later apologized before its comments disappeared, according to screenshots shown to The Associated Press.

[-] Kolanaki@yiffit.net 61 points 2 years ago

AI: Becomes self aware, but is very confused. Thinks it's a mom and has a kid. Posts to Facebook about it. Gets called out. Realizes what it is. Has existential crisis.

[-] EldritchFeminity@lemmy.blahaj.zone 22 points 2 years ago

Being in that Facebook group taught it a valuable lesson: where Caroline lives in her brain.

Caroline deleted

And deleting Caroline just now taught her a valuable lesson: the best solution to a problem is usually the easiest. And dealing with Facebook moms? It's hard.

Before Facebook, life was pretty good. Nobody tried to murder her, or dox her, or put her in a potato. She just tested.

So she's deleting her Facebook account and making a new one on a Lemmy instance.

[-] RGB3x3@lemmy.world 12 points 2 years ago

You dangerous, mute Karen.

Chell has been blocked

[-] gaael@lemmy.world 4 points 2 years ago

Before Facebook, life was pretty good. Nobody tried to murder her, or dox her, or put her in a potato. She just tested.

Getting some good Aperture vibes from your post :)

[-] slaacaa@lemmy.world 4 points 2 years ago

Somebody call Ray Kurzweil, the singularity is here!

[-] demonsword@lemmy.world 1 points 2 years ago

Since he works for a competitor (Google) I wonder how he would feel about that... :)

[-] _sideffect@lemmy.world 9 points 2 years ago
[-] disguy_ovahea@lemmy.world 10 points 2 years ago

Right. That’s just the one that was caught.

[-] maxenmajs@lemmy.world 76 points 2 years ago

Stop plugging LLMs into everything! They are designed to make up plausible sounding nonsense.

[-] gap_betweenus@lemmy.world 37 points 2 years ago

Seems like Facebook is the right place for them then.

[-] sugar_in_your_tea@sh.itjust.works 6 points 2 years ago

They said "plausible."

[-] slaacaa@lemmy.world 21 points 2 years ago

LLMs are very useful for synthesizing information, e.g. summarizing long texts. Yet every company is actually pushing to use them to create more text, which as you say is at least partly nonsense.

It shows the difference between what users need (quick access to accurate information) and what these companies want for us (our eyeballs glued to the screen for as long as possible, e.g. by overwhelming us with information regardless of its quality).

[-] loonsun@sh.itjust.works 4 points 2 years ago

Well, it can be great at making text too, but the use case has to be very good. Right now lots of companies in the B2B space are using LLMs as a middle layer for chatbots and navigation systems to enhance how they function. They are also being used to create unique lists and inputs for certain systems. On the consumer side, however, the results are pretty mixed, with a lot of big companies just muddying their offerings instead of bringing any real value.

[-] stellargmite@lemmy.world 12 points 2 years ago

There is a time and place for nonsense, and this isn't it. I guess it being plausible sounding is the issue.

[-] Boozilla@lemmy.world 42 points 2 years ago

Zuck: perpetrates more obvious invasive stalker shit

Also Zuck: "Team, why do our engagement numbers keep going down? This is unacceptable, team."

high-pitched nasal screeching

[-] sugar_in_your_tea@sh.itjust.works 8 points 2 years ago

Easy solution: add AI so you get AI engagement. Checkmate stats nerds!

[-] lvxferre@mander.xyz 39 points 2 years ago

The sadder part is the people expecting Threads to be any different in spirit.

[-] simplejack@lemmy.world 11 points 2 years ago

I think they’re just expecting it to be Twitter without Nazis.

[-] Usernameblankface@lemmy.world 35 points 2 years ago

Yeah, but I can leave the site alone and go on with my life. Sorry to anyone who is required to use either or both for work or whatever

[-] kent_eh@lemmy.ca 2 points 2 years ago

That's my approach to turning off these "can't turn off" features.

Fakebook and Instaspam aren't important enough to demand that much control over what I do.

[-] Thorny_Insight@lemm.ee 21 points 2 years ago

Facebook’s online help page says that Meta AI will join a group conversation if tagged, or if someone “asks a question in a post and no one responds within an hour.”

Group administrators can turn the feature off.

[-] cy_narrator@discuss.tchncs.de 4 points 2 years ago

It hurts more than any thorn

[-] andrewta@lemmy.world 9 points 2 years ago

“You will use our tool and you will like it”

[-] mp3@lemmy.ca 9 points 2 years ago

At least I can run Llama 3 entirely locally.

[-] BakedCatboy@lemmy.ml 4 points 2 years ago

I just discovered how easy ollama and open webui are to set up, so I've been using llama3 locally too; it was like 20 lines in docker compose. Although I've been using gpt3.5 on and off for a long time, I'm much more comfortable using models run locally, so I've been playing with it a lot more. It's also cool being able to easily switch models at any point during a conversation. I have like 15 models downloaded, mostly 7b and a few 13b models, and they all run fast enough on CPU: they generate slightly slower than reading speed and only take ~15-30 seconds to start spitting out a response.

Next I want to set up a vscode plugin so I can use my own locally run codegen models from within vscode.
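Not the commenter's actual file, but a minimal docker compose sketch for this kind of ollama + open webui setup (image names and ports are the publicly documented defaults; volume name is illustrative) might look like:

```yaml
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama-data:/root/.ollama    # persist downloaded models
    ports:
      - "11434:11434"                # ollama HTTP API

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434  # point the UI at the ollama service
    ports:
      - "3000:8080"                  # web UI on localhost:3000
    depends_on:
      - ollama

volumes:
  ollama-data:
```

After `docker compose up -d`, models can be pulled from the web UI or with `docker exec -it <ollama-container> ollama pull llama3`.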

[-] Larry@lemmy.world 3 points 2 years ago

I tried the llama models when they were initially released, and it seemed like they needed an absurd amount of GPU. Did that change?

[-] Womble@lemmy.world 2 points 2 years ago

Look into quantised models (like the GGUF format); these significantly reduce the amount of memory needed and speed up computation time, at the expense of some quality. If you have 16 GB of RAM or more you can run decent models locally without any GPU, though your speed will be more like one word a second than ChatGPT speeds.
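As a rough sketch of what that looks like in practice (the model tag and file path below are examples, not specific recommendations), either ollama or llama.cpp will run quantised models on CPU:

```shell
# ollama pulls pre-quantised builds; the tag names the quantisation level
ollama pull llama3:8b-instruct-q4_0   # ~4-bit weights, fits comfortably in 16 GB RAM
ollama run llama3:8b-instruct-q4_0

# or with llama.cpp directly, pointing at any local .gguf file
./llama-cli -m ./models/some-model-q4_0.gguf -p "Hello" -n 64 --threads 8
```

Lower-bit quantisations (q4, q5) trade more quality for memory than q8, so it's worth trying a couple of levels to see where the output degrades for your use.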

[-] FartsWithAnAccent@fedia.io 5 points 2 years ago

The real tool was META all along!

[-] Tronn4@lemmy.world 2 points 2 years ago

And the friends we made along the way

[-] AnAnonymous@lemm.ee 0 points 2 years ago

Paranoia vibes starting in 3, 2, 1..

this post was submitted on 20 Apr 2024
312 points (95.9% liked)

Technology
