submitted 3 days ago* (last edited 3 days ago) by FlakesBongler@hexbear.net to c/chapotraphouse@hexbear.net

ComradeSharkfucker, because I love you, I went and found these AI images from the brief period when Bing hardcoded "Ethnically Ambiguous" into its prompt field

[-] Owl@hexbear.net 16 points 3 days ago

They trained the early models on whatever images they found on the internet. People noticed that "doctor" made a white dude and "basketball player" made a black dude, and made a stink about the racism in this*. Microsoft went into cover-my-ass mode and put in a simple AI model that checks whether the prompt sounds like it's about people and, if it does, inserts "ethnically ambiguous" into the prompt before passing it to the image model**. The image models were shit, so they'd leak random parts of their prompt into any on-screen text all the time.

* note that this wasn't some intentional racist thing, it's just a model thoughtlessly replicating the racism in its training data, after being made by thoughtless people who didn't foresee this problem (i.e., exactly how racism usually works)

** IIRC, google's AI did the same thing, but instead of always adding "ethnically ambiguous", it had some percent chance of adding a random racial descriptor. So you'd ask for George Washington and get a 10% chance it made him black
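The mechanism described above and in the footnote boils down to a pre-processing step on the prompt. A minimal sketch of both variants, assuming a toy keyword check in place of the real people-detecting model (the function names, the keyword list, the descriptor list, and the 10% figure are all illustrative assumptions, not the actual Bing or Google code):

```python
import random

def looks_like_people(prompt: str) -> bool:
    # Stand-in for the small classifier model that decides whether the
    # prompt is about people. A real system would use an ML model, not a
    # keyword list; this list is purely illustrative.
    people_words = {"doctor", "basketball player", "person", "man", "woman"}
    return any(word in prompt.lower() for word in people_words)

def bing_style_rewrite(prompt: str) -> str:
    # Bing variant: always inject the same fixed descriptor when the
    # prompt mentions people.
    if looks_like_people(prompt):
        return prompt + ", ethnically ambiguous"
    return prompt

def google_style_rewrite(prompt: str, p: float = 0.1) -> str:
    # Google variant (per the footnote): with some probability, inject a
    # random racial descriptor instead of a fixed one. The 10% default is
    # the comment's guess, not a documented figure.
    descriptors = ["Black", "Asian", "Hispanic", "white", "Native American"]
    if looks_like_people(prompt) and random.random() < p:
        return prompt + ", " + random.choice(descriptors)
    return prompt

# The rewritten prompt is what actually reaches the image model, which is
# why the injected words sometimes leaked into text rendered in the image.
print(bing_style_rewrite("a doctor in a hospital"))
```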

this post was submitted on 21 Apr 2026
78 points (96.4% liked)
