I mean... Is that not reasoning, I guess? It's what my brain does: recognizes patterns and makes split-second decisions.
So they have worked out that LLMs do what they were programmed to do in the way that they were programmed? Shocking.
It's not just the memorization of patterns that matters; it's the recall of appropriate patterns on demand. Call it what you will: even if AI is just a better librarian for search work, that's value. That's the new Google.
I use LLMs as advanced search engines. No ads or sponsored results.
There are search engines that do this better. There’s a world out there beyond Google.
Like what?
I don’t think there’s any search engine better than Perplexity. And for scientific research Consensus is miles ahead.
Why would they "prove" something that's completely obvious?
The burden of proof is on the grifters who have overwhelmingly been making false claims and distorting language for decades.
This sort of thing has been published a lot for a while now, but why is it assumed that this isn't what human reasoning consists of? Isn't all our reasoning ultimately a form of pattern memorization? I sure feel like it is. So to me, all these studies that prove they're "just" memorizing patterns don't prove anything other than that, unless they're coupled with research on the human brain proving we do something different.
This. Same with the discussion about consciousness. People always claim that AI is not real intelligence, but no one can ever define what real/human intelligence is. It's like people believe in something like a human soul without admitting it.
Humans apply judgment because they have emotion. LLMs do not possess emotion. Mimicking emotion without ever actually having the capability of experiencing it is sociopathy. An LLM would at best apply patterns like a sociopath.
You've hit the nail on the head.
Personally, I wish there were more progress in our understanding of human intelligence.
OK, and? A car doesn't run like a horse either, yet it's still very useful.
I'm fine with the distinction between human reasoning and LLM "reasoning".
Fair, but the same is true of me. I don't actually "reason"; I just have a set of algorithms memorized by which I propose a pattern that seems like it might match the situation, then a different pattern by which I break the situation down into smaller components and then apply patterns to those components. I keep the process up for a while. If I find a "nasty logic error" pattern match at some point in the process, I "know" I've found a "flaw in the argument" or "bug in the design".
But there's no from-first-principles method by which I developed all these patterns; they're just the things that have survived the test of time when other patterns have failed me.
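If it helps, here's a toy sketch of that loop in Python. Everything in it (the pattern table, the comma-based decomposition, the depth limit) is made up for illustration; it's a cartoon of the process I'm describing, not a claim about how brains or LLMs actually work:

```python
# Toy sketch: reasoning as iterated pattern matching. If no memorized
# pattern fits the whole situation, break it into smaller components
# and try the patterns on each piece. All names here are hypothetical.

KNOWN_PATTERNS = {
    "circular definition": "flaw in the argument",
    "off-by-one loop": "bug in the design",
}

def decompose(problem: str) -> list[str]:
    """Break the situation into smaller components (here, comma-separated clauses)."""
    return [part.strip() for part in problem.split(",") if part.strip()]

def match(fragment: str) -> str | None:
    """Recall a memorized pattern that fits this fragment, if any."""
    return KNOWN_PATTERNS.get(fragment)

def reason(problem: str, depth: int = 3) -> list[str]:
    """Propose a matching pattern; failing that, decompose and keep the process up for a while."""
    verdict = match(problem)
    if verdict is not None:
        return [verdict]
    if depth == 0:
        return []
    findings: list[str] = []
    for part in decompose(problem):
        if part != problem:  # only recurse on genuinely smaller pieces
            findings.extend(reason(part, depth - 1))
    return findings

print(reason("the argument sounds plausible, circular definition, conclusion follows"))
# -> ['flaw in the argument']
```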
I don't think people are underestimating the power of LLMs to think; I just think people are overestimating the power of humans to do anything other than language prediction and sensory pattern prediction.
It has so much data that it might as well be reasoning; it certainly helped me with my problem.