138
submitted 1 year ago by Ulrich_the_Old@lemmy.ca to c/canada@lemmy.ca

Until AI is allowed to vote, perhaps they should sit the fuck down.

[-] Dearche@lemmy.ca 1 points 1 year ago

No, I know that modern AI has no real ability to fact-check, but that's because they've never been built for it, nor do they have the resources to do it properly. They have no way to know what a reliable source is, nor how to interpret the data in a meaningful way when it needs to be used in an abstract manner.

But I do believe that modern AI technology should be able to do so if given the resources. Create an AI that only references a curated list of credible sources, and that can compare them to what is said elsewhere.
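The "credible sources only" part of that idea is at least easy to sketch. Here's a minimal toy version, assuming a hypothetical allowlist of domains (the domain names below are placeholders for illustration, not an endorsement of any particular list): the retrieval step simply refuses anything whose host isn't on the list.

```python
from urllib.parse import urlparse

# Hypothetical allowlist; these domains are placeholders, not an endorsement.
CREDIBLE_DOMAINS = {"statcan.gc.ca", "parl.ca", "bankofcanada.ca"}

def is_credible(url: str) -> bool:
    """Accept a URL only if its host is an allowed domain or a subdomain of one."""
    host = urlparse(url).netloc.lower()
    return any(host == d or host.endswith("." + d) for d in CREDIBLE_DOMAINS)

def filter_sources(urls):
    """Keep only the URLs a retrieval step would be allowed to cite."""
    return [u for u in urls if is_credible(u)]

print(filter_sources([
    "https://www.statcan.gc.ca/en/subjects",
    "https://random-blog.example/hot-take",
]))
```

Of course, this only solves the easy half of the problem: deciding *which* pages to read. Whether the model can meaningfully compare what it reads against a claim is the hard part the reply below gets into.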

I'm no AI specialist or anything, so maybe I'm completely wrong and such a method wouldn't work. But at the very least, I haven't even heard of any real attempt at making a fact-checking AI yet. All the existing ones are shit: they just adapt normal large language models to reference other internet sources regardless of their legitimacy.

[-] Voroxpete@sh.itjust.works 1 points 1 year ago

The problem is that for any of what you're describing to work, AI has to be capable of comprehension and interpretation, neither of which are capabilities that LLMs have. That would be a quantum leap forward in AI technology.

That's the key thing that has to be understood about "AI": it fundamentally does not understand any of the words that it's saying. It's engaged in nothing more than extremely complex mimicry. Even a dog has more comprehension of human language than an LLM, and you wouldn't trust a dog to fact-check political ads. Remember, even when working from accurate training data, LLMs will still cheerfully invent entirely fictitious data that just happens to fit the pattern of the training data, because that's all they are: pattern matchers.

If I present an AI with the statements "Mike Harris sold our LTC care system to corporate profiteers" and "Mike Harris sold your grandma's house to corporate profiteers" it has no way of accurately determining if the latter statement is true or false, because it fits the pattern of the first statement. A human can instantly distinguish between the concept of a long term care home and a person's privately owned house. An AI doesn't know what a person is, what a long term care home is, what ownership is, what the difference between private and public ownership is, what a house is and how that's different from a long term care home even though both are referred to as homes, what it means to sell something, what profiteering is and whether or not that term accurately describes the actions taken by the corporations that bought most of Ontario's LTC system. And then you have to get into the complex legalities of whether or not you're allowed to use the term "profiteers" in a political ad... It's a nightmare of complexity.
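The pattern-matching point can be made concrete with a toy metric. A bag-of-words cosine score (a crude stand-in for the statistical patterns a language model keys on, used here purely for illustration) rates those two statements as very similar, even though only one of them is true: similarity of form says nothing about truth.

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two sentences."""
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(wa[w] * wb[w] for w in set(wa) & set(wb))
    norm = math.sqrt(sum(v * v for v in wa.values())) * \
           math.sqrt(sum(v * v for v in wb.values()))
    return dot / norm if norm else 0.0

true_claim = "Mike Harris sold our LTC system to corporate profiteers"
false_claim = "Mike Harris sold your grandma's house to corporate profiteers"

# The two claims share most of their surface pattern, so they score high,
# while the score carries no information about which one is actually true.
print(round(cosine_similarity(true_claim, false_claim), 2))
```

Anything that judges claims by how well they fit familiar patterns will treat both sentences the same way; telling them apart requires knowing what an LTC system and a grandma's house actually are.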

If there's a way to get to what you're describing, from where we are now, no one has come up with it yet and the first company that does will be rich beyond their wildest dreams. We're just not even remotely close to that kind of technology.

this post was submitted on 24 Aug 2023
138 points (92.6% liked)
