[-] minorkeys@lemmy.world 115 points 9 hours ago* (last edited 9 hours ago)

The public fundamentally misunderstands this tech because salesmen lied to them. An LLM is not AI. It just says the most likely thing based on what is most common in its training data for that scenario. It can't do math or problem-solve. It can only tell you what the most likely answer would be. It can't do functional things. It's like Family Feud, where it says whatever the most surveyed people said.

[-] Clent@lemmy.dbzer0.com 58 points 8 hours ago

Some of them will "do math", but not with the LLM predictor: they have a math engine, and the predictor decides when to use it. What's great is that when it outputs results, it's not clear if it engaged the math engine or just guessed.

[-] hikaru755@lemmy.world 10 points 6 hours ago

> when it outputs results, it's not clear if it engaged the math engine or just guessed

That depends on the harness though. In the plain model output it will be clear if a tool call happened, and it depends on the application UI around it whether that's directly shown to the user, or if you only see the LLM's final response based on it.
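
To make that concrete, here's a minimal sketch (in Python, with a made-up message structure; not any specific vendor's schema) of what "the tool call is visible in the plain model output" means. The harness can always tell whether the math engine was engaged, even if the UI in front of the user hides it:

```python
import json

# Hypothetical raw assistant message, as a tool-calling API might return it.
# Field names ("tool_calls", "calculator") are illustrative assumptions.
raw_message = {
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {"name": "calculator", "arguments": json.dumps({"expression": "137 * 249"})}
    ],
}

def used_math_engine(message: dict) -> bool:
    """The harness can see whether the model delegated to a tool:
    the tool call is explicit in the message, even if the UI never shows it."""
    return any(c["name"] == "calculator" for c in message.get("tool_calls") or [])

def run_tool_call(call: dict) -> str:
    # Stand-in "math engine": evaluates the expression deterministically.
    # eval() is only for this sketch; a real harness would use a safe parser.
    args = json.loads(call["arguments"])
    return str(eval(args["expression"], {"__builtins__": {}}))

if used_math_engine(raw_message):
    result = run_tool_call(raw_message["tool_calls"][0])
    print(f"tool result: {result}")
```

Whether the end user ever learns that `tool result` came from a calculator rather than the predictor is purely a UI decision, which is the point above.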

[-] 1D10@lemmy.world 25 points 8 hours ago

I explain it as asking 100 people to Google something and taking the most common answer.

[-] minorkeys@lemmy.world 9 points 8 hours ago

Yeah, that's basically exactly what Family Feud does.

[-] 1D10@lemmy.world 15 points 7 hours ago

Yep but instead of "name something a woman keeps in her purse" it's "write my legal document" or "is it ok to lick a lamp socket"

[-] felbane@lemmy.world 4 points 1 hour ago

Great question! The answer to all three of your queries is "yes." Would you like me to search for the nearest lamp socket?

[-] Subscript5676@piefed.ca 7 points 7 hours ago

I know Lemmy hates AI with a fiery passion (and I too hate it for various reasons), but the ability to make this sort of prediction, far more stably than anything that came before in natural language processing (fancy term of the day for those who haven't heard of it), is useful if you can nudge it in a certain direction, however inefficiently it's built and run. It can't do functional things reliably, but if you confine it to parsing human language, extracting very specific information, and emitting that in a machine-parsable way, then use the result as input to something you can program, you've essentially built something that feels like it understands you in human language for a handful of tasks and can carry them out (even if the carrying-out part isn't actually done by an LLM). So pedantically it's not AI, but most people not in tech don't know or care about the difference. It's all magic all the way down, like how computers should just magically do what people are thinking of. That's not changed.
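
A minimal sketch of that pattern (all names and the regex stub are hypothetical; in a real system `extract_intent` would be an LLM call prompted to emit exactly this JSON shape, not a regex):

```python
import re

# The LLM's only job in this pattern: turn free-form language into a
# machine-parsable structure. This regex is a stand-in for that call.
def extract_intent(utterance: str) -> dict:
    m = re.search(r"remind me to (.+) at (\d{1,2}(?::\d{2})?\s*[ap]m)", utterance, re.I)
    if not m:
        return {"intent": "unknown"}
    return {"intent": "set_reminder", "task": m.group(1), "time": m.group(2)}

# The carrying-out part is ordinary deterministic code; no LLM involved.
def handle(structured: dict) -> str:
    if structured["intent"] == "set_reminder":
        return f"Reminder set: {structured['task']} at {structured['time']}"
    return "Sorry, I didn't understand that."

print(handle(extract_intent("Hey, remind me to feed the cat at 6pm")))
```

The user experiences "it understands me", but everything reliable happens after the structured hand-off.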

My point though, and this isn't targeting you specifically, dear OC, is that we can circlejerk all we want here, but echoing this oversimplification of what LLMs can do is pretty irrelevant to the bigger discourse. Call these companies out on their practices! Their hypocrisy! Their indifference to the collapse of our biosphere and to human suffering, leaving the most vulnerable hanging high and dry!

Tech is a tool, and if our best argument is calling a tool useless when it's demonstrably useful in specific ways, we're only making a fool of ourselves, turning people away from us and discouraging others from listening to us.

But if your goal is to feel good by letting one out, please be my guest.

Peace

[-] Susaga@sh.itjust.works 8 points 4 hours ago

The only way to know if LLM output is accurate is to know what an accurate output should look like, and if you know that, you don't need an LLM. If you don't know what an accurate output should look like, an LLM is equally likely to confidently lie to you as it is to help you, making you dumber the more you use it. The only other situation is if you know what an accurate output should look like, but you want an inaccurate one, which is a bad thing to encourage.

"Demonstrably useful" is a lie. It's a blatant and obvious lie. LLMs are so actively detrimental to their users, and society as a whole, that calling them useless is being generous. And even if they were the most beneficial thing on the planet, there is still no reason to use the billionaire's toxic Nazi plagiarism machine.

[-] mycodesucks@lemmy.world 7 points 5 hours ago

We already have tools that can give us incorrect answers in natural human language.

And they post their videos to YouTube for free.

this post was submitted on 08 Apr 2026
336 points (97.5% liked)

Programmer Humor
