88
submitted 2 months ago by JRepin@lemmy.ml to c/technology@lemmy.ml

Hundreds of millions of people now interact with language models, with uses ranging from help with writing1,2 to informing hiring decisions3. However, these language models are known to perpetuate systematic racial prejudices, making their judgements biased in problematic ways about groups such as African Americans4,5,6,7. Although previous research has focused on overt racism in language models, social scientists have argued that racism with a more subtle character has developed over time, particularly in the United States after the civil rights movement8,9. It is unknown whether this covert racism manifests in language models. Here, we demonstrate that language models embody covert racism in the form of dialect prejudice, exhibiting raciolinguistic stereotypes about speakers of African American English (AAE) that are more negative than any human stereotypes about African Americans ever experimentally recorded. By contrast, the language models’ overt stereotypes about African Americans are more positive. Dialect prejudice has the potential for harmful consequences: language models are more likely to suggest that speakers of AAE be assigned less-prestigious jobs, be convicted of crimes and be sentenced to death. Finally, we show that current practices of alleviating racial bias in language models, such as human preference alignment, exacerbate the discrepancy between covert and overt stereotypes, by superficially obscuring the racism that language models maintain on a deeper level. Our findings have far-reaching implications for the fair and safe use of language technology.

top 6 comments
sorted by: hot top controversial new old
[-] FunderPants@lemmy.ca 21 points 2 months ago

Very interesting results, especially with the rise of AI Bot driven interviews who could filter people out over it.

[-] drwho@beehaw.org 8 points 2 months ago

Maybe that's why they're getting so popular.

[-] sunzu2@thebrainbin.org 14 points 2 months ago

Train the model primaryly on data from a country with super strong slaver mentality that's seeped into every aspect of society to where poor peasants are biggest racist with social skills to drop the racism into every situation with subtle hint while looking like a good guy, AI learns how to do the same

Pikachu face 🤡

[-] LucidBoi@lemmy.dbzer0.com 9 points 2 months ago

People are biased and prejudiced. People make LLMs. LLMs get trained on data created by people. LLMs become biased. Where's the surprise xD

Automating prejudice.

[-] rimu@piefed.social 1 points 2 months ago

Pretty good discussion about this on Mastodon - https://friend.camp/@aparrish/113053044485254385

this post was submitted on 29 Aug 2024
88 points (96.8% liked)

Technology

34806 readers
242 users here now

This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.


Ask in DM before posting product reviews or ads. All such posts otherwise are subject to removal.


Rules:

1: All Lemmy rules apply

2: Do not post low effort posts

3: NEVER post naziped*gore stuff

4: Always post article URLs or their archived version URLs as sources, NOT screenshots. Help the blind users.

5: personal rants of Big Tech CEOs like Elon Musk are unwelcome (does not include posts about their companies affecting wide range of people)

6: no advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

7: crypto related posts, unless essential, are disallowed

founded 5 years ago
MODERATORS