AI language models are rife with political biases
(www.technologyreview.com)
Not trying to be a smartass, but what's the alternative?
Does there have to be one? It'd be nice if there were, of course, but this is currently the only way we know of to make these AIs.
I guess shoving an encyclopedia into it? I'm not really sure; it's a good point. Perhaps AI bias is as inevitable as human bias...
Despite what you might assume, an encyclopedia wouldn't be free from bias. It might not be as biased as, say, getting your training data from a dump of 4chan, but it'd absolutely still have bias. As an on-the-nose example, think about the definition of homosexuality; training on an older encyclopedia would mean the AI now thinks homosexuality is a crime.
And imagine how badly most encyclopedias would reflect on languages and cultures other than the one that made them.
Well, you can focus on rule-based/expert system style AI, a la WolframAlpha. Actually build algorithms to answer questions that are based on scientific fact and theory, rather than an approximated consensus of many sources of dubious origin.
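To make that concrete, here's a toy sketch of the rule-based approach: hand-written patterns and deterministic answer functions instead of learned weights. The rules here are made up for illustration and have nothing to do with how WolframAlpha actually works.

```python
# Toy rule-based "expert system" answerer: every answer comes from an
# explicit, auditable rule rather than a statistical model.
import re

RULES = [
    # (pattern, function that computes the answer from the regex match)
    (re.compile(r"what is (\d+) plus (\d+)", re.I),
     lambda m: str(int(m.group(1)) + int(m.group(2)))),
    (re.compile(r"boiling point of water", re.I),
     lambda m: "100 °C at standard atmospheric pressure"),
]

def answer(question: str) -> str:
    """Return the first matching rule's answer, or a refusal."""
    for pattern, rule in RULES:
        m = pattern.search(question)
        if m:
            return rule(m)
    return "No rule matches that question."
```

The trade-off is obvious from the sketch: you get verifiable, bias-auditable answers, but only for questions someone thought to write a rule for.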
Ooo, old school AI 😍
In our current cultural consciousness, I'm not sure that even qualifies as AI anymore. It's all about neural networks and machine learning nowadays.
The alternative is being extremely careful about what data you allow the LLM to learn from. Then it would have your bias, but hopefully that'd be a less flagrantly racist one.
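As a minimal sketch of what "being careful about the data" can mean in practice: filter the corpus before training. The blocklist terms and threshold below are placeholders; real curation pipelines use trained classifiers, deduplication, and human review on top of simple filters like this.

```python
# Minimal pre-training data curation sketch: drop documents that contain
# blocklisted terms. BLOCKLIST entries are illustrative placeholders.
BLOCKLIST = {"slur1", "slur2"}

def is_acceptable(document: str, max_hits: int = 0) -> bool:
    """Reject documents with more than max_hits blocklisted words."""
    words = document.lower().split()
    hits = sum(1 for w in words if w in BLOCKLIST)
    return hits <= max_hits

corpus = ["a perfectly fine article", "text containing slur1"]
cleaned = [doc for doc in corpus if is_acceptable(doc)]
```

Of course, the choice of what goes on the blocklist is itself a value judgment, which is exactly the point: you don't remove the bias, you pick whose bias it is.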