AI language models are rife with political biases
(www.technologyreview.com)
"White Christian men" is an awfully specific thing for the model to be sensitive towards, IMO.
Right-wing media is perceived to be funded by white Christian men, so if that's the source of the data, I'm not too surprised their writing and articles would protect that group. Still, it's intriguing that the model picked this up from online discussions and news data, and was sensitive to hate speech aimed at that group specifically, compared with the left-leaning data, which appears more inclusive. Although that's probably indicative of the very bias they're studying in the article.
I mean, hate speech aimed at left-wing people is generally more diverse than hate speech aimed at right-wing people, because the left simply is more diverse in gender, orientation, ethnicity, religion, etc. Isn't that widely accepted?
(Please correct me if I'm wrong, I'm asking in good faith!)
I don't think you're wrong at all, tbh. From my perspective the left is always going to be more diverse, whereas the right isn't very inclusive by default unless you "fit in", IMO.