submitted 1 month ago* (last edited 1 month ago) by Allah@lemm.ee to c/technology@lemmy.world

LOOK MAA I AM ON FRONT PAGE

[-] Nanook@lemm.ee 56 points 1 month ago

lol is this news? I mean, we call it AI, but it's just an LLM and variants; it doesn't think.

[-] MNByChoice@midwest.social 23 points 1 month ago

The "Apple" part. CEOs only care what companies say.

[-] kadup@lemmy.world 22 points 1 month ago

Apple is significantly behind and arrived late to the whole AI hype, so of course it's in their absolute best interest to keep showing how LLMs aren't special or amazingly revolutionary.

They're not wrong, but the motivation is also pretty clear.

[-] homesweethomeMrL@lemmy.world 16 points 1 month ago

“Late to the hype” is actually a good thing. Gen AI is a scam wrapped in idiocy wrapped in a joke. That Apple is slow to ape the idiocy of Microsoft is just fine.

[-] Clent@lemmy.dbzer0.com 17 points 1 month ago

Proving it matters. Science constantly has to prove things that people believe are obvious, because people have an uncanny ability to believe things that are false. Some people will keep believing them long after science has proven them false.

[-] JohnEdwa@sopuli.xyz 7 points 1 month ago* (last edited 1 month ago)

"It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'." -Pamela McCorduck´.
It's called the AI Effect.

As Larry Tesler puts it, "AI is whatever hasn't been done yet."

[-] kadup@lemmy.world 5 points 1 month ago

That entire paragraph is much better at supporting the precise opposite argument. Computers can beat Kasparov at chess, but they're clearly not thinking when making a move - even if we use the most open biological definitions for thinking.

[-] Grimy@lemmy.world 7 points 1 month ago

No, it shows how certain people misunderstand the meaning of the word.

You have called NPCs in video games "AI" for a decade, yet you were never implying they were somehow intelligent. The whole argument is strangely inconsistent.

[-] minoscopede@lemmy.world 53 points 1 month ago* (last edited 1 month ago)

I see a lot of misunderstandings in the comments 🫤

This is a pretty important finding for researchers, and it's not obvious by any means. It doesn't show a problem with LLMs' abilities in general; the issue they discovered is specific to so-called "reasoning models" that iterate on their answer before replying, and it may indicate that the training process is not sufficient for true reasoning.

Most reasoning models are not incentivized to think correctly, and are only rewarded based on their final answer. This research might indicate that's a flaw that needs to be corrected before models can actually reason.
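To make that concrete, here's a toy sketch of outcome-only reward (hypothetical names, not the paper's actual training setup), where the chain of thought gets no training signal at all:

```python
# Toy sketch of outcome-only reward (hypothetical, not the paper's setup):
# the chain of thought is never inspected, only the final answer is scored.
def outcome_only_reward(chain_of_thought: str, final_answer: str, target: str) -> float:
    """Reward depends solely on the final answer; the reasoning gets no signal."""
    del chain_of_thought  # ignored entirely - this is the incentive problem
    return 1.0 if final_answer.strip() == target.strip() else 0.0

# A model earns full reward even with nonsense reasoning:
print(outcome_only_reward("step 1: gibberish...", "42", "42"))  # 1.0
```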

[-] Knock_Knock_Lemmy_In@lemmy.world 13 points 1 month ago

When given explicit instructions to follow, models failed because they had not seen similar instructions before.

This paper shows that there is no reasoning in LLMs at all, just extended pattern matching.

[-] theherk@lemmy.world 12 points 1 month ago

Yeah, these comments have the three hallmarks of Lemmy:

  • "AI is just autocomplete" mantras.
  • Apple is always synonymous with bad and dumb.
  • Rare pockets of really thoughtful comments.

Thanks for at least being the last one.

[-] Zacryon@feddit.org 8 points 1 month ago

Some AI researchers found it obvious as well, in the sense that they'd suspected it and had some indications. But it's good to see more data affirming this assessment.

[-] SoftestSapphic@lemmy.world 50 points 1 month ago

Wow, it's almost like the computer scientists were saying this from the start but were shouted over by marketing teams.

[-] zbk@lemmy.ca 10 points 1 month ago

This! Capitalism is going to be the end of us all. OpenAI has gotten away with IP theft, disinformation regarding AI, and maybe even the murder of their whistleblower.

[-] billwashere@lemmy.world 36 points 1 month ago

When are people going to realize that, in its current state, an LLM is not intelligent? It doesn't reason. It does not have intuition. It's a word predictor.
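To illustrate "word predictor", here's a toy bigram counter (vastly simpler than an LLM, but the same basic loop of scoring candidates and emitting the next token):

```python
from collections import Counter, defaultdict

# A toy bigram "word predictor": score candidate next words by frequency,
# pick the best, append, repeat. Far simpler than an LLM, but the same loop.
corpus = "the cat sat on the mat the cat ate the rat".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word` in the corpus."""
    return counts[word].most_common(1)[0][0]

text = ["the"]
for _ in range(4):
    text.append(predict_next(text[-1]))
print(" ".join(text))  # "the cat sat on the"
```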

[-] x0x7@lemmy.world 7 points 1 month ago* (last edited 1 month ago)

Intuition is about the only thing it has. It's a statistical system, and the problem is it doesn't have logic. We assume that because it's computer-based it must be logic-oriented, but it's the opposite. That's the problem: we can't get it to do logic very well because it basically feels out the next token by something like instinct. In particular, it doesn't mask out or disregard irrelevant information very well when two segments are near each other in embedding space, since proximity doesn't guarantee relevance. So the model just weighs all of this info, relevant or irrelevant, into a weighted feeling for the next token.
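A rough numerical sketch of what I mean (made-up vectors, not real model internals): anything near the query in embedding space gets weight, relevant or not.

```python
import math

# Made-up 3-d "embeddings" (a sketch, not real model internals). A token
# that sits near the query in embedding space gets high weight even when
# it's logically irrelevant to the premise.
query      = [1.0, 0.9, 0.1]
relevant   = [1.0, 1.0, 0.0]  # genuinely on-topic
associated = [0.9, 0.8, 0.2]  # merely nearby in embedding space
off_topic  = [0.0, 0.1, 1.0]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

scores  = [dot(query, v) for v in (relevant, associated, off_topic)]
exps    = [math.exp(s) for s in scores]
weights = [e / sum(exps) for e in exps]
print([round(w, 2) for w in weights])  # ~[0.51, 0.4, 0.09]: association rivals relevance
```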

This is the core problem. People can handle fuzzy topics and discrete topics. But we really struggle to create any system that can do both like we can. Either we create programming logic that is purely discrete or we create statistics that are fuzzy.

Of course, masking out information that is close in embedding space but irrelevant to a logical premise is something many humans suck at too. But high-functioning humans manage it, and we can't get these models to copy that ability. Too many people, sadly many on the left in particular, not only treat association as always relevant but sometimes as equivalence: racism is associated with Nazism, Nazism is associated with patriarchy, patriarchy is historically related to the origins of capitalism, ∴ Nazism ≡ capitalism. (Even though National Socialism was anti-capitalist.) Associative thinking removes nuance, and sadly some people think this way. They 100% can be replaced by LLMs today, because the LLM at least mimics what logic looks like better, even though it's still built on blind association. It just has more blind associations, and fine-tuned weighting for summing them, than a human does. So it can carry that mask of logic further than a human riding the associative thought train can.

[-] Mniot@programming.dev 29 points 1 month ago

I don't think the article summarizes the research paper well. The researchers gave the AI models simple-but-large (which they confusingly called "complex") puzzles, like Towers of Hanoi with 25 discs.

The solution to these puzzles is nothing but patterns. You can write code that solves the Tower puzzle for any size n, and the whole program fits in less than a screen.
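For reference, here's roughly what "less than a screen" looks like (a standard recursive solution, not the researchers' test harness):

```python
def hanoi(n: int, source: str = "A", target: str = "C", spare: str = "B") -> None:
    """Print the optimal move sequence for n discs (2**n - 1 moves)."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)   # clear n-1 discs onto the spare peg
    print(f"move disc {n}: {source} -> {target}")
    hanoi(n - 1, spare, target, source)   # stack them back on top

hanoi(3)  # 7 moves; 25 discs would be 33,554,431 moves
```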

The problem the researchers see is that on these long, pattern-based solutions, the models follow a bad path and then just give up long before they hit their limit on tokens. The researchers don't have an answer for why this is, but they suspect that the reasoning doesn't scale.

[-] skisnow@lemmy.ca 24 points 1 month ago

What's hilarious/sad is the response to this article over on reddit's "singularity" sub, in which all the top comments are people who've obviously never got all the way through a research paper in their lives all trashing Apple and claiming their researchers don't understand AI or "reasoning". It's a weird cult.

[-] mavu@discuss.tchncs.de 19 points 1 month ago

No way!

Statistical language models don't reason?

But OpenAI, robots taking over!

[-] FreakinSteve@lemmy.world 19 points 1 month ago

NOOOOOOOOO

SHIIIIIIIIIITT

SHEEERRRLOOOOOOCK

[-] RampantParanoia2365@lemmy.world 18 points 1 month ago* (last edited 1 month ago)

Fucking obviously. Until Data's positronic brain becomes reality, AI is not actual intelligence.

AI is not A I. I should make that a t-shirt.

[-] JDPoZ@lemmy.world 10 points 1 month ago

It's an expensive, carbon-spewing parrot.

[-] Threeme2189@lemmy.world 7 points 1 month ago

It's a very resource-intensive autocomplete.

[-] technocrit@lemmy.dbzer0.com 14 points 1 month ago* (last edited 1 month ago)

Peak pseudo-science. The burden of evidence is on the grifters who claim "reason". But neither side has any objective definition of what "reason" means. It's pseudo-science against pseudo-science in a fierce battle.

[-] Auli@lemmy.ca 14 points 1 month ago

No shit. This isn't new.

[-] GaMEChld@lemmy.world 12 points 1 month ago

Most humans don't reason. They just parrot shit too. The design is very human.

[-] elbarto777@lemmy.world 24 points 1 month ago

LLMs deal with tokens. Essentially, predicting a series of bytes.

Humans do much, much, much, much, much, much, much more than that.
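Concretely, with one real tokenizer (OpenAI's open-source tiktoken; the exact IDs depend on the vocabulary):

```python
import tiktoken  # pip install tiktoken - one real, open-source LLM tokenizer

enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("LLMs deal with tokens.")
print(ids)                             # a short list of integers
print([enc.decode([i]) for i in ids])  # the text chunk each ID stands for
```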

[-] skisnow@lemmy.ca 9 points 1 month ago

I hate this analogy. As a throwaway whimsical quip it'd be fine, but it's specious enough that I keep seeing it used earnestly by people who think that LLMs are in any way sentient or conscious, so it's lowered my tolerance for it as a topic even if you did intend it flippantly.

[-] joel_feila@lemmy.world 7 points 1 month ago

That's why CEOs love them. When your job is 90% spewing BS, a machine that does that is impressive.

[-] Jhex@lemmy.world 11 points 1 month ago

this is so Apple, claiming to invent or discover something "first" 3 years later than the rest of the market

[-] communist@lemmy.frozeninferno.xyz 10 points 1 month ago* (last edited 1 month ago)

I think it's important to note (I'm not an LLM, I know that phrase triggers you to assume I am) that they haven't proven this is an inherent architectural issue, which I think would be the next step for the assertion.

Do we know that they don't reason and are incapable of it, or do we just know that for these problems they jump to memorized solutions? Is it possible to create an arrangement of weights that can genuinely reason, even if the current models don't? That's the big question that needs answering. It's still possible that we just haven't properly incentivized reasoning over memorization during training.

If someone can objectively answer "no" to that, the bubble collapses.

[-] Harbinger01173430@lemmy.world 8 points 1 month ago

XD so, like a regular school/university student that just wants to get passing grades?

[-] melsaskca@lemmy.ca 7 points 1 month ago

It's all "one instruction at a time" regardless of high processor speeds and words like "intelligent" being bandied about. "Reason" discussions should fall into the same query bucket as "sentience".

[-] brsrklf@jlai.lu 7 points 1 month ago

You know, despite not really believing LLM "intelligence" works anything like real intelligence, I kind of thought maybe being good at recognizing patterns was a way to emulate it to a point...

But this study seems to prove they're still not even good at that. At first I was wondering how hard the puzzles must have been, and then there's a bit about LLMs finishing 100-move Towers of Hanoi (which they were trained on) but failing 4-move river crossings. Logically, those problems are very similar... They also failed to apply a step-by-step solution they were given.

[-] Xatolos@reddthat.com 6 points 1 month ago

So, what you're saying here is that the A in AI actually stands for "artificial," and it's not really intelligent or reasoning.

Huh.
