The logical end of the 'Solution to bad speech is better speech' has arrived in the age of state-sponsored social media propaganda bots versus AI-driven bots arguing back

[-] mojo@lemm.ee 64 points 1 year ago

Just a reminder: LLMs are not designed to provide truth, but rather natural-sounding word generation.
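To make that concrete, here is a toy bigram sketch (an assumption for illustration, not how any real LLM is built — real models use neural networks over far more context): the generator only picks whichever word most often followed the previous word in its training text, so fluent output falls out of statistics with no notion of truth.

```python
from collections import Counter, defaultdict

# Tiny "training corpus". The most common claim in it happens to be false.
corpus = ("the moon is made of cheese . the moon is bright . "
          "the moon is made of cheese .").split()

# Count which word follows each word in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length):
    """Greedily emit the most frequent continuation at each step."""
    words = [start]
    for _ in range(length):
        nxt = follows[words[-1]].most_common(1)[0][0]
        words.append(nxt)
    return " ".join(words)

print(generate("the", 5))  # fluent, confident, and wrong
```

The toy model fluently asserts "the moon is made of cheese" because that is the statistically dominant continuation in its training text — it optimizes for plausible-sounding sequences, not for facts.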

[-] tehmics@lemmy.world 3 points 1 year ago

We can certainly argue over what they're designed to do, and I definitely agree that's the goal. The reality, though, is that on some level it is impossible to separate assertions from the words that describe them. Language itself is designed to communicate ideas; you can't really produce language without also communicating ideas, otherwise every sentence from an LLM would just look like

"Has Anyone Really Been Far Even as Decided to Use Even Go Want to do Look More Like"

They will readily repeat information that was fed to them. Sometimes it is on point, sometimes not. That opens an ethical question: is it okay for them to paraphrase information they were trained on without citing the source?

In a perfect world we would be able to expand a whole learning tree and trace back how the model pieced together each word and data point it is citing, kind of like an advanced Wikipedia article. Then you could take the synopsis the model provides and dig into it to judge for yourself whether it's accurate. From a research standpoint I view info collected from a language model as a step below a secondary source, and we should be able to easily see how it arrived at that info.

[-] turmacar@lemmy.world -1 points 1 year ago

LLMs are at least a quaternary(?) source. They're scraping secondary/tertiary sources. As such they're little better than asking passersby on the street. You might get a general idea of what the zeitgeist is, but how true any particular statement actually is will vary wildly.

Math itself is designed to describe relationships between things. That doesn't mean you can't mock up a 'reasonable seeming' equation that is absolute nonsense after further examination, but that a layman will take as 'true enough'.

LLMs don't cite things. They produce an approximation of what a human might write. They don't know what they're writing or how it relates to the 'real world' any more than the occupant of a Chinese Room does.

this post was submitted on 18 Sep 2023
317 points (95.7% liked)

World News
