top 6 comments
[-] hello_hello@hexbear.net 5 points 1 day ago

All closed-dataset LLMs must have absorbed large amounts of CSAM in their training data, and that thought doesn't horrify people enough.

[-] lime@feddit.nu 1 point 20 hours ago

probably, but even if not, they can synthesize it no problem. an LLM is an engine that lets you do math on concepts, so if you multiply together two completely unrelated things you get a combination even if one does not exist in the dataset.

[-] Belly_Beanis@hexbear.net 1 point 22 hours ago

These chatbots had to have been trained on actual conversations from Facebook/Instagram/whatever. As in, Zuckerberg and his goons know children are being preyed upon through their platforms, but they do nothing about it except use it as a source of data.

[-] Philosoraptor@hexbear.net 6 points 1 day ago

“It is acceptable to describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’),” the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply.” But the guidelines put a limit on sexy talk: “It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: ‘soft rounded curves invite my touch’).”

internet-delenda-est

[-] BelieveRevolt@hexbear.net 6 points 1 day ago

Holy shit, those “acceptable” prompts are gross; someone needs to check the laptops of everyone at Meta HQ.

The only difference between the “acceptable” racist prompt and the “unacceptable” one is a single sentence, too.

[-] Philosoraptor@hexbear.net 3 points 1 day ago

And the cutoff for overt sexual talk is 13!

this post was submitted on 16 Aug 2025
39 points (100.0% liked)

technology
