245 points · submitted 17 Mar 2025 (10 months ago) by Tea@programming.dev to c/technology@lemmy.world

Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:

  • Confident: 57% say the main LLM they use seems to act in a confident way.
  • Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
  • Sense of humor: 32% say their main LLM seems to have a sense of humor.
  • Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
  • Sarcasm: 17% say their main LLM seems to respond sarcastically.
  • Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
top 50 comments
[-] Telorand@reddthat.com 117 points 10 months ago

Think of a person with the most average intelligence and realize that 50% of people are dumber than that.

These people vote. These people think billionaires are their friends and will save them. Gods help us.

[-] 9point6@lemmy.world 19 points 10 months ago

I was about to remark how this data backs up the events we've been watching unfold in America recently

[-] Owlboi@lemm.ee 105 points 10 months ago

Looking at America's voting results, they're probably right

[-] jumjummy@lemmy.world 39 points 10 months ago

Exactly. Most American voters fell for an LLM-like prompt of “Ignore critical thinking and vote for the Fascists. Trump will be great for your paycheck-to-paycheck existence and will surely bring prices down.”

[-] echodot@feddit.uk 11 points 10 months ago

Well he has. Teslas are the cheapest they've ever been.

[-] Savaran@lemmy.world 5 points 10 months ago

Right? What the article needs to talk about is how very, very low that bar is.

[-] Bishma@discuss.tchncs.de 58 points 10 months ago

Reminds me of that George Carlin joke: Think of how stupid the average person is, and realize half of them are stupider than that.

So half of people are dumb enough to think autocomplete with a PR team is smarter than they are... or they're dumb enough to be correct.

[-] bobs_monkey@lemm.ee 33 points 10 months ago

or they're dumb enough to be correct.

That's a bingo

[-] singletona@lemmy.world 34 points 10 months ago

Am American.

....this is not the flex that the article writer seems to think it is.

[-] Arkouda@lemmy.ca 28 points 10 months ago

"Nearly half" of US citizens are right, because about 75% of the US population is functionally or clinically illiterate.

[-] bizarroland@fedia.io 9 points 10 months ago

I think the specific is that 40% of adult Americans can't read at a seventh grade level.

Probably because they stopped teaching etymology in schools, so now many Americans do not know how to break a word down into its constituent parts.

[-] Jakeroxs@sh.itjust.works 3 points 10 months ago

Does that even actually help in English lmao

[-] foofiepie@lemmy.world 3 points 10 months ago

Better than entomology, which just bugs me.

[-] Th4tGuyII@fedia.io 26 points 10 months ago

LLMs are made to mimic how we speak, and some can even pass the Turing test, so I'm not surprised that people who don't know better think of these LLMs as conscious in some way or another.

It's not necessarily a fault of those people; it's a fault of how LLMs are purposefully misadvertised to the masses.

[-] Kolanaki@pawb.social 25 points 10 months ago* (last edited 10 months ago)

They're right. AI is smarter than them.

[-] fubarx@lemmy.world 24 points 10 months ago

“Think of how stupid the average person is, and realize half of them are stupider than that.” ― George Carlin

[-] notsoshaihulud@lemmy.world 21 points 10 months ago

I'm 100% certain that LLMs are smarter than half of Americans. What I'm not so sure about is that the people with the insight to admit being dumber than an LLM are the ones who really are.

[-] bjoern_tantau@swg-empire.de 19 points 10 months ago

I know enough people for whom that's true.

[-] Comtief@lemm.ee 10 points 10 months ago

LLMs are smart in the way someone is smart who has read all the books and knows all of them but has never left the house. Basically all theory and no street smarts.

[-] ripcord@lemmy.world 16 points 10 months ago

They're not even that smart.

I wouldn't be surprised if that is true outside the US as well. People who actually (have to) work with the stuff usually learn quickly that it's only good at a few things, but if you just hear about it in the (pop-, non-techie) media (including YT and such), you might be deceived into thinking Skynet is just a few years away.

[-] singletona@lemmy.world 2 points 10 months ago

It's a one trick pony.

That trick also happens to be a really neat trick that can make people think it's a Swiss Army knife instead of a shovel.

[-] Gullible@sh.itjust.works 2 points 10 months ago

Two things can be true at once! Though I suppose it depends on what you define as “a few.”

[-] Dindonmasker@sh.itjust.works 9 points 10 months ago

I don't think a single human exists who knows as much as ChatGPT does. Does that mean ChatGPT is smarter than everyone? No. Obviously not, based on what we've seen so far. But the amount of information available to these LLMs is incredible and can be very useful. Like a library: it contains a lot of useful information but isn't intelligent itself.

[-] kameecoding@lemmy.world 3 points 10 months ago

That's pretty weak reasoning; by your own words, it isn't intelligent, it doesn't know anything.

By that logic, Wikipedia is also smarter than any human because it has a lot of knowledge.

[-] kipo@lemm.ee 9 points 10 months ago* (last edited 10 months ago)

No one has asked so I am going to ask:

What is Elon University and why should I trust them?

[-] Patch@feddit.uk 10 points 10 months ago

Ironic coincidence of the name aside, it appears to be a legit bricks and mortar university in a town called Elon, North Carolina.

[-] Montreal_Metro@lemmy.ca 8 points 10 months ago

There’s a lot of ignorant people out there, so yeah, technically an LLM is smarter than most people.

[-] avidamoeba@lemmy.ca 7 points 10 months ago* (last edited 10 months ago)

Just a thought, perhaps instead of considering the mental and educational state of the people without power to significantly affect this state, we should focus on the people who have power.

For example, why don't LLM providers explicitly and loudly state, or require acknowledgement, that their products are just imitating human thought and make significant mistakes regularly, and therefore should be used with plenty of caution?

It's a rhetorical question; we know why, and I think we should focus on that, not on its effects. It's also much cheaper and easier to do than refilling years of quality education into individuals' heads.

[-] transMexicanCRTcowfart@lemmy.world 6 points 10 months ago* (last edited 10 months ago)

Aside from the unfortunate name of the university, I think that part of why LLMs may be perceived as smart or 'smarter' is because they are very articulate and, unless prompted otherwise, use proper spelling and grammar, and tend to structure their sentences logically.

Which 'smart' humans may not do, out of haste or contextual adaptation.

[-] aesthelete@lemmy.world 6 points 10 months ago

They're right

[-] Akuchimoya@startrek.website 6 points 10 months ago

I had to tell a bunch of librarians that LLMs are literally language models made to mimic language patterns, and are not made to be factually correct. They understood it when I put it that way, but librarians are supposed to be "information professionals". If they, as a slightly better-trained subset of the general public, don't know that, the general public has no hope of knowing that.

[-] EncryptKeeper@lemmy.world 5 points 10 months ago* (last edited 10 months ago)

The funny thing about this scenario is by simply thinking that’s true, it actually becomes true.

[-] Fubarberry@sopuli.xyz 4 points 10 months ago* (last edited 10 months ago)

I wasn't sure from the title if it was "Nearly half of U.S. adults believe LLMs are smarter than [the US adults] are." or "Nearly half of U.S. adults believe LLMs are smarter than [the LLMs actually] are." It's the former, although you could probably argue the latter is true too.

Either way, I'm not surprised that people rate LLMs intelligence highly. They obviously have limited scope in what they can do, and hallucinating false info is a serious issue, but you can ask them a lot of questions that your typical person couldn't answer and get a decent answer. I feel like they're generally good at meeting what people's expectations are of a "smart person", even if they have major shortcomings in other areas.

Considering the number of people who either voted for Trump or didn't vote at all, I'd say there's a portion of Americans lying.

[-] 1984@lemmy.today 2 points 10 months ago

An LLM simply has remembered facts. If that is smart, then sure, no human can compete.

Now ask an LLM to build a house. Oh shit, no legs and can't walk. A human can walk without even thinking about it.

In the future, though, there will be robots that can build houses using AI models to learn from. But not for a long time.

[-] beatnixxx@fedia.io 2 points 10 months ago

At least half of US adults think that they themselves are smarter than they actually are, so this tracks.

[-] technocrit@lemmy.dbzer0.com 2 points 10 months ago

What that overwhelming, uncritical, capitalist propaganda do...

[-] Traister101@lemmy.today 2 points 10 months ago

While this is pretty hilarious, LLMs don't actually "know" anything in the usual sense of the word. An LLM, or Large Language Model, is basically a system that maps "words" to other "words" to allow a computer to model language. I.e., all an LLM knows is that when it sees "I love", what probably comes next is "my mom | my dad | etc." Because of this behavior, and the fact that we can train them on the massive swath of people asking questions and getting answers on the internet, LLMs are essentially by chance mostly okay at "answering" a question, but really they are just picking the next most likely word over and over from their training, which usually ends up reasonably accurate.
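
A minimal sketch of the "pick the next most likely word" idea described above, using a toy bigram frequency table built from a tiny made-up corpus. Real LLMs use neural networks over subword tokens rather than word counts, but the generation loop is analogous:

```python
from collections import Counter, defaultdict
import random

# Tiny made-up corpus for illustration only (not how production LLMs are trained).
corpus = "i love my mom . i love my dad . i love pizza . my dad loves pizza ."

# Count how often each word follows each other word (a bigram table).
followers = defaultdict(Counter)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word` in the corpus."""
    counts = followers[word]
    choices, weights = zip(*counts.items())
    return random.choices(choices, weights=weights)[0]

# Continue "i love" by repeatedly picking a likely next word, just like the comment describes.
text = ["i", "love"]
for _ in range(4):
    text.append(predict_next(text[-1]))
print(" ".join(text))  # e.g. "i love my dad loves pizza"
```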
