116
submitted 1 week ago* (last edited 1 week ago) by Allah@lemm.ee to c/technology@lemmy.world

LOOK MAA I AM ON FRONT PAGE

top 50 comments
[-] Nanook@lemm.ee 23 points 1 week ago

lol is this news? I mean, we call it AI, but it's just LLMs and variants; it doesn't think.

[-] MNByChoice@midwest.social 8 points 1 week ago

The "Apple" part. CEOs only care what companies say.

[-] kadup@lemmy.world 9 points 1 week ago

Apple is significantly behind and arrived late to the whole AI hype, so of course it's in their absolute best interest to keep showing how LLMs aren't special or amazingly revolutionary.

They're not wrong, but the motivation is also pretty clear.

[-] homesweethomeMrL@lemmy.world 3 points 1 week ago

“Late to the hype” is actually a good thing. Gen AI is a scam wrapped in idiocy wrapped in a joke. That Apple is slow to ape the idiocy of Microsoft is just fine.

[-] Clent@lemmy.dbzer0.com 7 points 1 week ago

Proving it matters. Science constantly has to prove things that people consider obvious, because people have an uncanny ability to believe things that are false. Some people will go on believing things long after science has proven them false.

[-] JohnEdwa@sopuli.xyz 4 points 1 week ago* (last edited 1 week ago)

"It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'." -Pamela McCorduck´.
It's called the AI Effect.

As Larry Tesler puts it, "AI is whatever hasn't been done yet."

[-] kadup@lemmy.world 3 points 1 week ago

That entire paragraph is much better at supporting the precise opposite argument. Computers can beat Kasparov at chess, but they're clearly not thinking when making a move - even if we use the most open biological definitions for thinking.

[-] Grimy@lemmy.world 3 points 1 week ago

No, it shows how certain people misunderstand the meaning of the word.

You've called NPCs in video games "AI" for a decade, yet you were never implying they were somehow intelligent. The whole argument is strangely inconsistent.

[-] cyd@lemmy.world 3 points 1 week ago

By that metric, you can argue Kasparov isn't thinking during chess, either. A lot of human chess "thinking" is recalling memorized openings, evaluating positions many moves deep, and other tasks that map to what a chess engine does. Of course Kasparov is thinking, but then you have to conclude that the AI is thinking too. Thinking isn't a magic process, nor is it tightly coupled to human-like brain processes as we like to think.
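
To make that concrete: "evaluating positions many moves deep" is, at its core, a search loop. Below is a minimal sketch of fixed-depth negamax with a material-only evaluation, written against the python-chess library (an assumption for illustration; real engines add pruning, transposition tables, and far stronger evaluation, but the shape of the computation is the same):

```python
# Minimal fixed-depth negamax over python-chess (pip install chess).
# Material-only evaluation; the point is the shape of the search, not strength.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board: chess.Board) -> int:
    """Material balance from the perspective of the side to move."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == board.turn else -value
    return score

def negamax(board: chess.Board, depth: int) -> int:
    """Search `depth` plies ahead and return the best achievable score."""
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    best = -(10 ** 9)
    for move in list(board.legal_moves):
        board.push(move)
        # The opponent's best outcome is our worst, hence the negation.
        best = max(best, -negamax(board, depth - 1))
        board.pop()
    return best

def best_move(board: chess.Board, depth: int = 3) -> chess.Move:
    """Pick the legal move whose negamax score is highest."""
    def score(move: chess.Move) -> int:
        board.push(move)
        result = -negamax(board, depth - 1)
        board.pop()
        return result
    return max(list(board.legal_moves), key=score)

print(best_move(chess.Board()))  # prints a plausible-looking opening move
```

Whether you call that loop "thinking" when silicon does it, but not when neurons do something analogous, is exactly the question at issue.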

[-] Auli@lemmy.ca 17 points 1 week ago

No shit. This isn't new.

[-] RampantParanoia2365@lemmy.world 15 points 1 week ago

Fucking obviously. Until Data's positronic brain becomes reality, AI is not actual intelligence.

[-] JDPoZ@lemmy.world 6 points 1 week ago

It’s an expensive, carbon-spewing parrot.

[-] FreakinSteve@lemmy.world 14 points 1 week ago

NOOOOOOOOO

SHIIIIIIIIIITT

SHEEERRRLOOOOOOCK

[-] GaMEChld@lemmy.world 12 points 1 week ago

Most humans don't reason. They just parrot shit too. The design is very human.

[-] elbarto777@lemmy.world 22 points 1 week ago

LLMs deal with tokens. Essentially, they predict the next token in a series, one token at a time.

Humans do much, much, much, much, much, much, much more than that.
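
For anyone curious what "dealing with tokens" looks like in practice, here's a minimal sketch of the greedy next-token loop using the Hugging Face transformers library and GPT-2 (an assumption for illustration; real deployments use sampling, batching, and key-value caching, but this is the core of it):

```python
# Greedy next-token prediction: the core loop of an LLM.
# Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The capital of France is"
ids = tokenizer(text, return_tensors="pt").input_ids

for _ in range(5):
    with torch.no_grad():
        logits = model(ids).logits           # a score for every token in the vocabulary
    next_id = logits[0, -1].argmax()         # greedily take the single most likely next token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))  # the prompt plus 5 greedily chosen tokens
```

Everything the model "says" is produced by repeating that one step; whether that counts as reasoning is precisely what the paper is probing.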

[-] joel_feila@lemmy.world 5 points 1 week ago

That's why CEOs love them. When your job is 90% spewing BS, a machine that does exactly that is impressive.

[-] skisnow@lemmy.ca 4 points 1 week ago

I hate this analogy. As a throwaway whimsical quip it'd be fine, but it's specious enough that I keep seeing it used earnestly by people who think that LLMs are in any way sentient or conscious, so it's lowered my tolerance for it as a topic even if you did intend it flippantly.

[-] skisnow@lemmy.ca 10 points 1 week ago

What's hilarious/sad is the response to this article over on reddit's "singularity" sub, in which all the top comments are people who've obviously never got all the way through a research paper in their lives all trashing Apple and claiming their researchers don't understand AI or "reasoning". It's a weird cult.

[-] mavu@discuss.tchncs.de 8 points 1 week ago

No way!

Statistical language models don't reason?

But OpenAI, robots taking over!

[-] Jhex@lemmy.world 6 points 1 week ago

This is so Apple: claiming to invent or discover something "first", three years later than the rest of the market.

[-] ZILtoid1991@lemmy.world 5 points 1 week ago

Thank you, Captain Obvious! Only those who think LLMs are like "little people in the computer" didn't know this already.

[-] TheFriar@lemm.ee 5 points 1 week ago

Yeah, well, there are a ton of people literally falling into psychosis, led on by LLMs. So unfortunately, the number of people who already knew it isn't that high.

[-] BlaueHeiligenBlume@feddit.org 5 points 1 week ago

Of course. That's obvious to anyone with basic knowledge of neural networks, no?

[-] vala@lemmy.world 4 points 1 week ago
[-] communist@lemmy.frozeninferno.xyz 3 points 1 week ago* (last edited 1 week ago)

I think it's important to note (I'm not an LLM, I know that phrase triggers you to assume I am) that they haven't proven that this is an inherent architectural issue, which I think would be the next step for the assertion.

Do we know that they don't reason and are incapable of it, or do we just know that for these problems they jump to memorized solutions? Is it possible to create an arrangement of weights that can genuinely reason, even if the current models don't? That's the big question that needs answering. It's still possible that we just haven't properly incentivized reasoning over memorization during training.

If someone can objectively answer "no" to that, the bubble collapses.
