Using supervised fine-tuning (SFT) to introduce even a small amount of relevant data to the training set can often lead to strong improvements in this kind of "out of domain" model performance. But the researchers say that this kind of "patch" for various logical tasks "should not be mistaken for achieving true generalization. ... Relying on SFT to fix every [out of domain] failure is an unsustainable and reactive strategy that fails to address the core issue: the model’s lack of abstract reasoning capability."
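To make the "patch" the researchers describe concrete, here is a minimal sketch of supervised fine-tuning on a handful of out-of-domain examples, using the Hugging Face transformers library. The base model ("gpt2"), the toy logic prompts, and the hyperparameters are illustrative stand-ins, not anything from the paper.

```python
# Minimal SFT "patch": a few gradient steps on a tiny set of out-of-domain
# examples. Model name and example data below are placeholders for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for whatever base model is being patched
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# A tiny set of hypothetical out-of-domain prompt/answer pairs.
ood_examples = [
    "Q: If all blargs are fleems and no fleems are snerps, can a blarg be a snerp? A: No.",
    "Q: Reverse the letters of 'logic'. A: cigol.",
]

batch = tokenizer(ood_examples, return_tensors="pt", padding=True)
# Ignore padding positions when computing the language-modeling loss.
labels = batch["input_ids"].masked_fill(batch["attention_mask"] == 0, -100)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for _ in range(3):  # a few passes over the tiny patch set
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

A few steps like this can lift scores on the newly covered task, which is exactly why the researchers caution that such fixes look like generalization without providing it.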

Rather than demonstrating a capacity for generalized logical inference, these chain-of-thought models are "a sophisticated form of structured pattern matching" that "degrades significantly" when pushed even slightly outside of their training distribution, the researchers write. Further, the ability of these models to generate "fluent nonsense" creates "a false aura of dependability" that does not stand up to a careful audit.

As such, the researchers warn strongly against "equating [chain-of-thought]-style output with human thinking," especially in "high-stakes domains like medicine, finance, or legal analysis." Current tests and benchmarks should prioritize tasks that fall outside of any training set to probe for these kinds of errors, while future models will need to move beyond "surface-level pattern recognition to exhibit deeper inferential competence," they write.
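As a hedged illustration of the kind of probe the researchers call for, the sketch below scores a model on tasks shaped like its familiar format and on lightly shifted variants of the same tasks, then reports the gap. The `solve()` parameter is a hypothetical stand-in for whatever model-query-and-grading pipeline is actually used; the task pairs are invented examples.

```python
# Hypothetical out-of-distribution probe: compare accuracy on familiar-format
# tasks against lightly transformed variants that keep the same underlying logic.
from typing import Callable, Sequence

def accuracy(solve: Callable[[str], str], tasks: Sequence[tuple[str, str]]) -> float:
    """Fraction of (prompt, expected_answer) pairs the model answers correctly."""
    correct = sum(1 for prompt, expected in tasks if solve(prompt).strip().lower() == expected)
    return correct / len(tasks)

# In-distribution tasks mirror a familiar format; the shifted set changes
# surface details (renamed symbols, longer chains) while keeping the logic.
in_distribution = [("If A > B and B > C, is A > C? Answer yes or no.", "yes")]
shifted = [("If zork > blen, blen > quib, and quib > marn, is zork > marn? Answer yes or no.", "yes")]

def probe(solve: Callable[[str], str]) -> None:
    gap = accuracy(solve, in_distribution) - accuracy(solve, shifted)
    print(f"in-distribution vs. shifted accuracy gap: {gap:.2f}")
```

A large gap on such a probe is the kind of signal the researchers argue benchmarks should surface, rather than rewarding fluent answers on tasks the model has effectively already seen.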

Catoblepas@piefed.blahaj.zone 11 points 4 days ago

LLMs are incapable of reasoning. There is not a consciousness in there deciding and telling you things. My comment was entirely about whether LLMs can reason, not whether all people reason at the same level or might decide to trick you.

BlameThePeacock@lemmy.ca 2 points 4 days ago

I don't disagree with you that LLMs don't reason. I disagree that all Humans can or do reason.

TehPers@beehaw.org 5 points 4 days ago

I disagree that all Humans can or do reason.

Well if we're talking about all humans...

But more seriously, it doesn't take much looking to find someone who doesn't reason. Just look on the TV during the next major election and you'll find a bunch.
