submitted 3 weeks ago* (last edited 3 weeks ago) by Pro@programming.dev to c/technology@lemmy.world
[-] AstralPath@lemmy.ca 14 points 3 weeks ago

Honestly, I've always thought the best use case for AI is moderating NSFL content online. No one should have to see that horrific shit.

[-] ouch@lemmy.world 4 points 3 weeks ago

What about false positives? Or a process to challenge them?

But yes, I agree with the general idea.

[-] beejjorgensen@lemmy.sdf.org 6 points 3 weeks ago

Or a process to challenge them?

😂😂😂😔

[-] tarknassus@lemmy.world 4 points 3 weeks ago

They will probably use the YouTube model - “you’re wrong and that’s it”.

[-] HowAbt2day@futurology.today 3 points 3 weeks ago
[-] blargle@sh.itjust.works 2 points 3 weeks ago

Not sufficiently fascist-leaning. It's coming; Palantir's just waiting for the go-ahead...

[-] brorodeo@lemmy.ca 3 points 2 weeks ago

Bsky already does that.

[-] towerful@programming.dev 1 points 2 weeks ago

Yup.
It's a traumatic job/task that gets farmed out to the cheapest supplier, which is extremely unlikely to have suitable safeguards and care for its employees.

If I were implementing this, I would use a safer/stricter model with a human-backed appeal system.
I would then use some metrics to generate an account reputation (verified ID, interaction with friends network, previous posts/moderation/appeals) and use that to route AI actions, as sketched in the code below:

- Low rep: AI actions are auto-approved, with no appeal.
- Moderate rep: AI actions are auto-approved, with a human appeal available.
- High rep: AI actions must be approved by a human.

This way, high-reputation accounts can still discuss & raise awareness of potentially moderatable topics as quickly as they happen (think breaking-news kind of thing). Moderate-reputation accounts can argue their case (in case of false positives). Low-reputation accounts don't traumatize the moderators.
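A minimal sketch of how that routing could look (the thresholds, signal weights, and field names here are all made up for illustration, not tuned values):

```python
# Hypothetical sketch of reputation-tiered moderation routing.
from dataclasses import dataclass

@dataclass
class Account:
    verified_id: bool
    friend_interactions: int  # interactions within the friends network
    prior_posts: int
    upheld_appeals: int       # past appeals decided in the user's favor

def reputation(acct: Account) -> float:
    """Combine a few signals into a single reputation score."""
    score = 30.0 if acct.verified_id else 0.0
    score += min(acct.friend_interactions, 100) * 0.2
    score += min(acct.prior_posts, 200) * 0.1
    score += acct.upheld_appeals * 5
    return score

def route_ai_action(acct: Account) -> str:
    """Decide how much human oversight an AI moderation action gets."""
    rep = reputation(acct)
    if rep < 20:   # low rep: the AI's decision is final
        return "auto-approve, no appeal"
    if rep < 60:   # moderate rep: AI decides, a human hears appeals
        return "auto-approve, human appeal available"
    return "hold for human review"  # high rep: a human must confirm

# Example: an established, verified account gets routed to human review.
print(route_ai_action(Account(verified_id=True, friend_interactions=80,
                              prior_posts=150, upheld_appeals=2)))
```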

[-] head_socj@midwest.social 1 points 3 weeks ago

Agreed. These jobs are overwhelmingly concentrated in developing nations and pay pathetic wages, too.

[-] Treczoks@lemmy.world 13 points 3 weeks ago

Only if they also take full legal responsibility for the AI's actions.

[-] muusemuuse@lemm.ee 15 points 2 weeks ago

They don’t even take responsibility for things now.

[-] utopiah@lemmy.world 6 points 2 weeks ago

The business model IS dodging any kind of responsibility so... yeah, I think they'll pass.

[-] philpo@feddit.org 7 points 3 weeks ago

In other news: Meta pays another 3 billion euros for not following the DSA and gets banned in Europe.

[-] melsaskca@lemmy.ca 4 points 3 weeks ago

I think AI is positioned to make better decisions than execs. The money saved would be huge!

[-] mitrosus@discuss.tchncs.de 2 points 2 weeks ago

The money saved goes where?

[-] melsaskca@lemmy.ca 6 points 2 weeks ago

It goes to pay off the debt of all of the nations in the world and will then usher in a new age of peace, obviously.

[-] mitrosus@discuss.tchncs.de 2 points 2 weeks ago

Haha. That says it all.

[-] Ulrich@feddit.org 4 points 3 weeks ago

Well hey, that actually sounds like a job AI could be good at. Just give it a prompt like "tell me there are no privacy issues because we don't care" and it'll do just that!

[-] fullsquare@awful.systems 3 points 3 weeks ago

moderation on facebook? i'm sure it can be found right next to bigfoot

(other than automated immediate nipple removal)

[-] TransplantedSconie@lemm.ee 3 points 3 weeks ago* (last edited 3 weeks ago)

Meta:

Here, AI. Watch all the horrible things humans are capable of and more for us. Make sure nothing gets through.

AI:

becomes SKYNET

[-] HowdWeGetHereAnyways@lemmy.world 7 points 3 weeks ago

Ouija boards made of databases don't really think

[-] And009@lemmynsfw.com 0 points 3 weeks ago
[-] HowdWeGetHereAnyways@lemmy.world 3 points 3 weeks ago* (last edited 3 weeks ago)

No, they give you an answer that should sound correct enough to enable them to score a positive interaction.

Why do you think so many GPT answers seem plausible but don't work? Because it has very very little actual logic

[-] utopiah@lemmy.world 1 points 2 weeks ago

very very little actual logic

To be precise, 0.

[-] And009@lemmynsfw.com 0 points 3 weeks ago

Expecting current-gen tools to be as smart as humans? Falling short of that doesn't mean they're useless. They can translate words to images and explain art in terms of business.

They add capabilities, not replace them.

[-] HowdWeGetHereAnyways@lemmy.world 2 points 2 weeks ago

I don't disagree, but this is a wildfire of interest right now, and a lot of people aren't recognizing this facet of how GPTs operate. You have to really vocally recognize their weaknesses so they can be mitigated (hopefully).

[-] leftzero@lemmynsfw.com 2 points 2 weeks ago

They add capabilities, not replace them.

They poison all repositories of knowledge with their useless slop.

They are plunging us into a dark age which we are unlikely to survive.

Sure, it's not the LLMs' fault specifically, it's the bastards who are selling them as sources of information instead of information-shaped slop, but they're still being used to murder the future in the name of short-term profits.

So, no, they're not useless. They're infinitely worse than that.

[-] And009@lemmynsfw.com 1 points 2 weeks ago
[-] leftzero@lemmynsfw.com 1 points 2 weeks ago

Hear me out: ELIZA. It'll be equally useless, for orders of magnitude less cost. And no one will mistakenly or fraudulently call it AI.

[-] MITM0@lemmy.world 3 points 3 weeks ago

That's gonna end well 😉

[-] henfredemars@infosec.pub 3 points 3 weeks ago* (last edited 3 weeks ago)

Great move for Facebook. It'll let them claim they're doing something to curb horrid content on the platform without actually doing anything.

[-] pelespirit@sh.itjust.works 3 points 3 weeks ago* (last edited 3 weeks ago)

This might be the one time I'm okay with this. It's too hard on the humans who have had to do it. I hope the AI won't "learn" to be cruel from this, though, and I don't trust Meta to handle it gracefully.

[-] chrash0@lemmy.world 4 points 3 weeks ago

pretty common misconception about how “AI” works. models aren’t constantly learning. their weights are frozen before deployment. they can infer from context quite a bit, but they won’t meaningfully change without human intervention (for now)
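
For what it's worth, here's a minimal illustration of that point, assuming PyTorch as the framework: a deployed model runs with gradients disabled and frozen parameters, so inference never touches the weights.

```python
# Minimal PyTorch illustration: inference does not update a model's weights.
import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # stand-in for a deployed model
model.eval()             # inference mode (affects dropout/batch-norm layers)
for p in model.parameters():
    p.requires_grad = False  # weights are frozen at deployment

before = [p.clone() for p in model.parameters()]

with torch.no_grad():              # no gradients tracked during inference
    _ = model(torch.randn(8, 4))   # the model "infers from context"

after = list(model.parameters())
assert all(torch.equal(b, a) for b, a in zip(before, after))
print("weights unchanged after inference")
```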

[-] masterofn001@lemmy.ca 1 points 3 weeks ago

I mean, you could hire people who would otherwise enjoy the things they'd be moderating. Keep 'em from doing that shit themselves.

But, if all the sadists, psychos, and pedos were moderating, it would be reddit, I guess.

[-] homesweethomeMrL@lemmy.world 1 points 3 weeks ago* (last edited 3 weeks ago)

Oh man, I may have to stop using this fascist sewer hose.

[-] PattyMcB@lemmy.world 1 points 3 weeks ago

A bold strategy, Cotton
