submitted 8 months ago by randoot@lemmy.world to c/asklemmy@lemmy.world

LLMs are solving the MCAT, the bar exam, the SAT, etc. like they're nothing. At this point their performance is superhuman. However, they'll often trip on super simple common-sense questions and struggle with creative thinking.

Is this literally proof that standard tests are not a good measure of intelligence?

[-] steventrouble@programming.dev 20 points 8 months ago* (last edited 8 months ago)

A lot of good comments in this thread, but I'd like to add that to say ChatGPT is "not intelligent" is to ignore the hard work of all the stupid humans in the world.

Many humans spread and believe false information more often than ChatGPT. Some humans can't even string together coherent sentences, and other humans will happily listen to and parrot those humans as though they were speaking divine truths. Many humans can't do basic math and logic even after 12+ years of being taught it, over and over. Intelligence is a spectrum, and ChatGPT is definitively more intelligent than a non-zero number of humans. I'd love to figure out what that number is before I judge its standardized test performance.

[-] Tar_alcaran@sh.itjust.works 20 points 8 months ago

LLMs don't "think" at all. They string together words based on where those words generally appear in context with other words based on input from humans.

Though I do agree that the output from a moron is often worth less than the output from an LLM
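
To make that concrete, here's a toy sketch of the "stringing words together based on context" idea: a tiny bigram model over a made-up corpus. Real LLMs are transformers trained on enormous corpora, not word-pair counts, but the generation loop has the same shape (pick the next word given what came before).

```python
# Toy bigram model: "which word tends to follow which word?"
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    words = [start]
    for _ in range(length):
        counts = following[words[-1]]
        if not counts:
            break
        # Sample the next word in proportion to how often it followed the last one.
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```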

[-] steventrouble@programming.dev 1 points 8 months ago* (last edited 8 months ago)

That's a common misunderstanding.

LLMs have billions of neurons, and we can see firsthand how information travels along their neural pathways and, yeah, it looks a whole lot like they're thinking. If anything, we have more concrete proof that LLMs think than that humans think.

They do think, it's just that they don't have short term memory. They can only remember things linguistically, by talking and then listening to their own output. It's an artifact of how we've set them up to interact with the world. Many humans use a similar thought process for certain problems (e.g. talking out loud to a rubber duck). Sure, there are other ways humans think too (e.g. visual/spatial), but linguistic thought is still valid.
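
As a rough sketch of what I mean by remembering linguistically: the only short-term memory the model gets is the transcript we feed back in on each call. `fake_llm` below is a made-up stand-in for any text-completion API, just to show that the model sees nothing except the text it's handed.

```python
# The model's only "short-term memory" is the transcript it's handed each call.
def fake_llm(prompt: str) -> str:
    # Made-up stand-in for a real completion API: a real model would continue
    # the text; this one just reports how much context it was given.
    return f" I can see {len(prompt)} characters of context so far."

transcript = "User: What's 17 * 24?\nAssistant:"
for _ in range(3):
    reply = fake_llm(transcript)
    # It only "remembers" its earlier words because we append them and hand
    # the whole transcript back on the next call.
    transcript += reply + "\nUser: Go on.\nAssistant:"
print(transcript)
```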

[-] Grimy@lemmy.world -1 points 8 months ago* (last edited 8 months ago)

This is kind of how humans operate as well though. We just string words along based on what input is given.

We speak much too fast to be properly reflecting on it; we just regurgitate whatever comes to mind.

To be clear, I'm not saying LLMs think, just that the difference between our thinking and their output isn't the chasm it's made out to be.

[-] cynar@lemmy.world 11 points 8 months ago

The key difference is that your thinking feeds into your word choice. You also know when to shut up and allow your brain to actually process.

LLMs are (very crudely) a lobotomised speech center. They can chatter and use words, but there is no support structure behind them. The only "knowledge" they have access to is embedded into their training data. Once that is done, they have no ability to "think" about it further. It's a practical example of a "Chinese Room" and many of the same philosophical arguments apply.

I fully agree that this is an important step for a true AI. It's just a fragment, however. Just like four wheels and two axles don't make a car.

[-] steventrouble@programming.dev 1 points 8 months ago* (last edited 8 months ago)

Apologies if this comes off as rude, but as an engineer involved in reinforcement learning, it's upsetting when people make claims like this based on conjecture and hand-wavey understandings of ML. Some day there will be goal-driven agents that can interact with the world, and those agents will be harmed by those kinds of incorrect understandings of machine learning.

The key difference is that your thinking feeds into your word choice.

LLMs' thinking also feeds into their word choice. Where else would they be getting the words from, thin air? No, it's from billions of neurons doing what neurons do, thinking.

They can chatter and use words, but there is no support structure behind them.

What is a "support structure", in your mind? That's not a defined term in neuroscience, cog sci, or ML, so it sounds to me like hand-waving.

The only “knowledge” they have access to is embedded into their training data.

LLMs can and do generalize beyond their training data, it's literally the whole point. Otherwise, they'd be useless.

Once that is done, they have no ability to “think” about it further.

During training, neural weights from previous examples are revisited and recontextualized given the new information. This is what leads to generalization.
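
Roughly: every training example nudges the same shared weights, so what earlier examples taught gets revisited and adjusted by later ones, and the result is a rule rather than a lookup table. A toy sketch of that mechanism (plain SGD on a one-dimensional linear model, nothing LLM-specific, numbers made up):

```python
# Toy SGD on y = w*x + b: every example updates the *same* shared weights,
# so earlier examples' contributions get revisited as new ones arrive.
weights = 0.0
bias = 0.0
lr = 0.1

data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # generated from y = 2x + 1

for epoch in range(200):
    for x, y in data:
        pred = weights * x + bias
        err = pred - y
        # Gradient step: the same parameters touched by earlier examples
        # are nudged again here, in light of the current example.
        weights -= lr * err * x
        bias -= lr * err

print(weights, bias)            # close to 2 and 1
print(weights * 10.0 + bias)    # ~21: a prediction beyond the training points
```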

It’s a practical example of a “Chinese Room” and many of the same philosophical arguments apply.

The Chinese Room is not a valid argument, because the same logic can be applied to other humans besides yourself.

[-] starman2112@sh.itjust.works 4 points 8 months ago

Disagree. We're very good at using words to convey ideas. There's no reason to believe that we speak much too fast to be properly reflecting on what we say—the speed with which we speak speaks to our proficiency with language, not a lack thereof. Many people do speak without reflecting on what they say, but to reduce all human speech down to that? Downright silly. I frequently spend seconds at a time looking for a word that has the exact meaning that will help to convey the thought that I'm trying to communicate. Yesterday, for example, I spent a whole 15 seconds or so trying to remember the word exacerbate.

An LLM is extremely good at stringing together stock words and phrases that make it sound like it's conveying an idea, but it will never stop to think about the definition of a word that best conveys a real idea. This is the third draft of this comment. I've yet to see an LLM write, rewrite, then rewrite again its output.

[-] agamemnonymous@sh.itjust.works 3 points 8 months ago* (last edited 8 months ago)

Kinda the same thing though. You spent time finding the right auto-complete in your head. You weighed the words that fit the sentence you'd constructed in order to find the one most frequently encountered in conversations or documents that include specific related words. We're much more sophisticated at this process, but our whole linguistic paradigm isn't fundamentally very different from good auto-complete.

[-] steventrouble@programming.dev 1 points 8 months ago* (last edited 8 months ago)

I’ve yet to see an LLM write, rewrite, then rewrite again its output.

It's because we (ML peeps) literally prevent them from deleting their own output. It'd be like if we stuck you in a room and only let you interact with the outside world using a keyboard that has no backspace.

Seriously, try it. Try writing your reply without using the delete button, or backspace, or the arrow keys, or the mouse. See how much better you do than an LLM.

It's hard! To say that an LLM is not capable of thought just because it makes mistakes sometimes is to ignore the immense difficulty of the problem we're asking it to solve.
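
To picture the constraint: standard decoding is append-only. Each token is committed the moment it's sampled, and there's no backspace step. A toy sketch (the vocab and `sample_next` are made up, not any real model's API):

```python
# Append-only decoding: each token is committed as soon as it's sampled.
import random

vocab = ["the", "answer", "is", "probably", "42", "."]

def sample_next(context: list[str]) -> str:
    # Made-up stand-in for a model's next-token distribution;
    # it ignores the context for brevity.
    return random.choice(vocab)

output: list[str] = []
for _ in range(8):
    token = sample_next(output)
    output.append(token)  # committed: no pop(), no editing output[i], no backspace
print(" ".join(output))
```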

[-] starman2112@sh.itjust.works 1 points 8 months ago

To me it isn't just the lack of an ability to delete it's own inputs, I mean outputs, it's the fact that they work by little more than pattern recognition. Contrast that with humans, who use pattern recognition as well as an understanding of their own ideas to find the words they want to use.

Man, it is super hard writing without hitting backspace or rewriting anything. Autocorrect helped a ton, but I hate the way this comment looks lmao

This isn't to say that I don't think a neural network can be conscious, or self aware, it's just that I'm unconvinced that they can right now. That is, that they can be. I'm gonna start hitting backspace again after this paragraph

[-] steventrouble@programming.dev 2 points 8 months ago* (last edited 8 months ago)

That was brilliant, thanks for actually giving it a try :D

It's easy for me to get pedantic about minor details, so I should shut up and mention that I see what you mean and agree with the big picture. It's not there yet and may someday be.

Thanks again, stranger! You made my day. Keep on being awesome
