top 8 comments
[-] minorkeys@lemmy.world 43 points 1 hour ago* (last edited 1 hour ago)

The public fundamentally misunderstands this tech because salespeople lied to them. An LLM is not AI. It just says the most likely thing based on what is most common in its training data for that scenario. It can't do math or solve problems. It can only tell you what the most likely answer would be. It can't do functional things. It's like Family Feud, where you say whatever the most people surveyed said.

[-] Clent@lemmy.dbzer0.com 14 points 1 hour ago

Some of them will "do math", but not with the LLM predictor: they have a math engine, and the predictor decides when to use it. What's great is that when it outputs a result, it's not clear whether it engaged the math engine or just guessed.
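A minimal sketch of that "math engine plus predictor" split, purely illustrative (the routing heuristic and names here are assumptions, not any vendor's actual design): a dispatcher hands input that parses as arithmetic to an exact evaluator, and everything else falls through to a stand-in for the predictor's guess.

```python
import ast
import operator

# Illustrative only: a toy router in the spirit of the comment above.
# exact_eval is the "math engine"; the fallback string stands in for
# the LLM's most-likely-sounding guess.

OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def exact_eval(expr: str) -> float:
    """The 'math engine': parse and evaluate plain arithmetic exactly."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("not arithmetic")
    return walk(ast.parse(expr, mode="eval"))

def answer(prompt: str) -> str:
    """Route to the math engine when the prompt looks like arithmetic."""
    try:
        return str(exact_eval(prompt))
    except (ValueError, SyntaxError):
        # Stand-in for the predictor's guess.
        return "most-likely-sounding answer"
```

And as the comment notes, from the outside both branches look the same: `answer("12*(3+4)")` and `answer("tell me a joke")` both just return a string, with no indication of which path produced it.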

[-] 1D10@lemmy.world 6 points 57 minutes ago

I explain it as asking 100 people to Google something and taking the most common answer.

[-] minorkeys@lemmy.world 2 points 40 minutes ago

Yeah, that's basically exactly what Family Feud does.

[-] Subscript5676@piefed.ca -2 points 33 minutes ago

I know Lemmy hates AI with a fiery passion (and I too hate it for various reasons), but the ability to make this sort of prediction far more stably than anything that came before in natural language processing (fancy term of the day for those who haven't heard of it), however inefficiently it's built and run, is useful if you can nudge it enough in a certain direction. It can't do functional things reliably, but if you contain it to only parsing human language, extracting very specific information, and presenting it in a machine-parsable way, and then use that as input for something you can program, you've essentially built something that feels like it can understand you in human language for a handful of tasks and carry them out (even if the carrying out isn't actually done by the LLM). So pedantically it's not AI, but most people not in tech don't know or care about the difference. It's all magic all the way down, just like how computers are supposed to magically do whatever you're thinking of. That hasn't changed.

My point though, and this isn't targeting you specifically, dear OC, is that we can circlejerk all we want here, but echoing this oversimplification of what LLMs can do is pretty irrelevant to the bigger discourse. Call these companies out on their practices! Their hypocrisy! Their indifference to the collapse of our biosphere and to human suffering, leaving the most vulnerable hung out to dry!

Tech is a tool, and if our best argument is calling a tool useless when it's demonstrably useful in specific ways, we're only making a fool of ourselves, turning people away from us and discouraging others from listening to us.

But if your goal is to feel good by letting one out, please be my guest.

Peace

[-] Deceptichum@quokk.au 2 points 17 minutes ago

Odd, because Home Assistant can use a locally run LLM to do so?

[-] favoredponcho@lemmy.zip 14 points 1 hour ago

Just make Codex write the code for it. Should be easy. Don’t even need humans. Right?

[-] Ganbat@lemmy.dbzer0.com 27 points 2 hours ago

Okay, so, in case the headline is confusing anyone else, it's literal. Like, you know how there are those cringe-ass Alexa ads about how it does AI language processing and assistant shit? Yeah, ChatGPT can't do that, I guess.

this post was submitted on 08 Apr 2026
109 points (97.4% liked)

Programmer Humor
