submitted 8 months ago by randoot@lemmy.world to c/asklemmy@lemmy.world

LLMs are solving the MCAT, the bar exam, the SAT, etc. like they're nothing. At this point their performance is superhuman. However, they'll often trip on super simple common-sense questions and struggle with creative thinking.

Is this literally proof that standard tests are not a good measure of intelligence?

[-] originalfrozenbanana@lemm.ee 109 points 8 months ago

Citation needed that LLMs are passing these tests like they’re nothing.

LLMs don’t have intelligence, they are sentence generators. Sometimes those sentences are correct, sometimes they’re gobbledygook.

For instance, they fabricate real-looking but nevertheless totally fake citations in research papers https://www.nature.com/articles/s41598-023-41032-5

To your point, we already know standardized tests are biased and poor tools for measuring intelligence. Partly that's because they don't actually measure intelligence: they often measure rote knowledge. We don't need LLMs to make that determination; we already can.

[-] EdibleFriend@lemmy.world 46 points 8 months ago* (last edited 8 months ago)

Talked about this a few times over the last few weeks but here we go again...

I am teaching myself to write and had been using chatgpt for super basic grammar assistance. Seemed like an ideal thing, toss a sentence I was iffy about into it and ask it what it thought. After all I wasn't going to be asking it some college level shit. A few days ago I asked it about something I was questionable on. I honestly can't remember the details but it completely ignored the part of the sentence I wasn't sure about and told me something else was wrong. What it said was wrong was just....not wrong. The 'correction' it gave me was some shit a third grader would look at and say 'uhhhhh.....I'm gonna ask someone else now...'

[-] Ottomateeverything@lemmy.world 27 points 8 months ago

That's because LLMs aren't intelligent. They're just parrots that repeat what they've heard before. This stuff being sold as an "AI" with any "intelligence" is extremely misleading and causing people to think it's going to be able to do things it can't.

Case in point, you were using it and trusting it until it became very obvious it was wrong. How many people never get to that point? How much has it done wrong before then? Etc.

[-] givesomefucks@lemmy.world 14 points 8 months ago

OP picked standardized tests that only require memorization because they have zero idea what a real IQ test like the WAIS is like.

Also relevant is how those IQ tests work. You kind of have to go in "blind" to get an accurate result, and an LLM can't do anything "blind" because you have to train it.

A chatbot can't even take a real IQ test, and if we trained one specifically to take a real IQ test, the result would be meaningless.

[-] kromem@lemmy.world 2 points 8 months ago

Actually, you can give chatbots a real IQ test, and the range of scores falls into roughly the same spread as their rankings on other measures, with the leading model scoring 100:

https://www.maximumtruth.org/p/ais-ranked-by-iq-ai-passes-100-iq

[-] hperrin@lemmy.world 59 points 8 months ago* (last edited 8 months ago)

Standard tests don’t measure intelligence. They measure things like knowledge and skill. And ChatGPT is very knowledgeable and highly skilled.

IQ tests have the goal of measuring intelligence.

[-] niartenyaw@midwest.social 29 points 8 months ago

just a reminder that IQ tests may have the goal of measuring intelligence, but that says nothing of their precision and accuracy

[-] hperrin@lemmy.world 16 points 8 months ago

Exactly. I chose my words very carefully.

[-] MargotRobbie@lemmy.world 43 points 8 months ago

Every standardized test measures how well you prepared for that particular standardized test; it doesn't matter if it's the SAT, MCAT, or Leetcode. You aren't supposed to think on the spot during these tests, you're supposed to regurgitate everything you rehearsed for weeks and months beforehand.

And unthinking regurgitation is what LLMs do better than anything else.

[-] learningduck@programming.dev 7 points 8 months ago* (last edited 8 months ago)

I would argue that some coding test questions can be solved spontaneously, but those are limited to easy and some early-medium questions, or patterns that are common enough.

I guess this is more common at non-FAANG companies that don't have to filter out candidates based on sheer numbers alone.

[-] phoneymouse@lemmy.world 3 points 8 months ago

As someone that didn’t really have good coaching on the SAT, I 100% agree. I kinda fucked it up, but at 17, I wasn’t really used to studying for things outside of school and my parents didn’t get me into any study classes

For GRE though, I studied my ass off… got top 96 percentile scores.

Also went through the leetcode grind. Bombed the first job search I ever did and then later aced the hell out of it after studying really hard.

These tests are all about how diligently you studied and your study technique.

[-] MrJameGumb@lemmy.world 42 points 8 months ago

There was plenty of proof that standardized testing doesn't work long before ChatGPT ever existed. Institutions keep using it anyway because that's what they've always done, and change is hard.

[-] Rhaedas@fedia.io 20 points 8 months ago

Long before. As early as 1930, Carl Brigham, the eugenics-motivated creator of the SAT, recanted the original conclusions that had led to its development, but by then the colleges had totally invested in a quick and easy way to score students, even if it was inaccurate. Change is hard, but I think the bigger influence here was money, since the test hadn't been around that long at that point.

[-] underwire212@lemm.ee 11 points 8 months ago

Not disagreeing with you, but how do you suggest admissions reliably compare applicants with each other? A 3.5 at one school can mean something completely different than a 3.5 at another school.

Something like the SAT is far from perfect, but it is at least one number that means the same thing across applicants.

[-] ArbiterXero@lemmy.world 7 points 8 months ago

I think this is the point, because Harvard got rid of the SAT requirement, and then just brought it back.

It's a really terrible measure.

But it is an equal measure, even if what it measures is moderately meaningless.

I don't think we have a better answer yet, because everything else lacks any sort of comparable equivalency.

And I say this as an ADHD sufferer who is at a huge disadvantage on standardised testing

[-] Z3k3@lemmy.world 23 points 8 months ago

When I was at uni, lecturers would often state that exams were the worst measure of grasping the subject material, but they're all we have at the moment.

I saw this myself with some of my classmates: they tested very well, but when discussing or problem-solving outside of class, there was nothing there.

I think LLMs fall into this category, but with way better recall.

[-] givesomefucks@lemmy.world 10 points 8 months ago

When I was at uni, lecturers would often state that exams were the worst measure of grasping the subject material, but they're all we have at the moment.

It's not all we have...

But it's the only way a professor can run multiple classes of 100 students each.

But colleges are all about profit, so class sizes are going to be huge.

The goal isn't educating people, it's making money.

So when they say "there's no other option" they're not mentioning the "and keep making as much money" at the end, it's just implied.

[-] Z3k3@lemmy.world 6 points 8 months ago* (last edited 8 months ago)

I'm not in the US. Colleges here are generally vocational, and both colleges and universities are less (though not totally) concerned with the money side.

For example, where I live, university courses are free for those in-country; students from outside pay fees.

Dunno how it's done elsewhere, but our courses are usually assessed in three parts: 1) exam, 2) practical, 3) essay/investigation. Everyone hates exams.

[-] brianorca@lemmy.world 2 points 8 months ago

It's also the only way that is portable. A professor could evaluate each student individually, but has no way to transmit that kind of evaluation in a way that schools or employers across the country would trust. They don't know who the professor is, or what his standards are, or even whether he is being bribed to pass somebody. (Which would happen much more often if the professor's opinion carried the weight that the standardized test does.)

[-] Fermion@mander.xyz 7 points 8 months ago

I had a lot of professors who put most of the grade weight on large projects. It made for a very heavy workload, but projects/ papers give a much better picture of how capable someone is of not only reciting knowledge, but also applying it.

[-] steventrouble@programming.dev 20 points 8 months ago* (last edited 8 months ago)

A lot of good comments in this thread, but I'd like to add that to say ChatGPT is "not intelligent" is to ignore the hard work of all the stupid humans in the world.

Many humans spread and believe false information more often than ChatGPT. Some humans can't even string together coherent sentences, and other humans will happily listen to and parrot those humans as though they were speaking divine truths. Many humans can't do basic math and logic even after 12+ years of being taught it, over and over. Intelligence is a spectrum, and ChatGPT is definitively more intelligent than a non-zero number of humans. I'd love to figure out what that number is before I judge its standardized test performance.

[-] Tar_alcaran@sh.itjust.works 20 points 8 months ago

LLMs don't "think" at all. They string together words based on where those words generally appear in context with other words based on input from humans.

Though I do agree that the output from a moron is often worth less than the output from an LLM
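The "string words together based on context" idea can be illustrated with a toy sketch: a bigram model trained on a tiny made-up corpus. This is a deliberate simplification (real LLMs learn representations over long contexts, not raw word counts), but the training signal is the same flavor: which token tends to follow which.

```python
import random
from collections import defaultdict

# Tiny toy corpus; real models train on trillions of tokens.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": record which word follows which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, n: int = 6, seed: int = 0) -> str:
    """Generate text by repeatedly sampling a plausible next word."""
    random.seed(seed)
    words = [start]
    for _ in range(n):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

The output is always locally plausible (every adjacent pair occurred in the corpus) without the model "knowing" anything, which is the parrot criticism in miniature.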

[-] radiohead37@lemmy.world 12 points 8 months ago

I think it highlights how a lot of these exams are just about the amount of information one can memorize.

[-] kromem@lemmy.world 11 points 8 months ago* (last edited 8 months ago)

Standardized tests were always a poor measure of comprehensive intelligence.

But this idea that "LLMs aren't intelligent" popular on Lemmy is based on what seems to be a misinformed understanding of LLMs.

At this point there's been multiple replications of the findings that transformers build world models abstracted from the training data and aren't just relying on surface statistics.

The free version of ChatGPT (what I'm guessing most people have direct experience with) is several years old tech that is (and always has been) pretty dumb. But something like Claude 3 Opus is very advanced at critical thinking compared to GPT-3.5.

A lot of word problem examples that models 'fail' are evaluating the wrong thing. When you give a LLM a variation of a classic word problem, the frequency of the normal form biases the answer back towards it unless you take measures to break the token similarities. If you do that though, most modern models actually do get the variation completely correct.

So for example, if you ask it to get a vegetarian wolf, a carnivorous goat, and a cabbage across a river, even asking with standard prompt techniques it will mess up. But if you ask it to get a vegetarian 🐺, a carnivorous 🐐 and a 🥬 across, it will get it correct.

GPT-3.5 will always fail it, but GPT-4 and more advanced will get it correct. And recently I've started seeing models get it correct even without the variation and trip up less with variations.
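The emoji trick above boils down to prompt construction. Here is a minimal sketch of how one might build the two logically identical variants; the actual model call is omitted (any chat API would do), since the point is just breaking surface token overlap with the classic puzzle. The exact wording is my own, not a quote from the benchmark discussion.

```python
def variation_prompt(use_emoji: bool) -> str:
    """Build the inverted river-crossing puzzle.

    With plain words ("wolf", "goat", "cabbage"), token overlap with the
    classic puzzle tends to pull models back toward the memorized solution;
    substituting emoji breaks that surface similarity while leaving the
    logic unchanged.
    """
    wolf, goat, cabbage = ("🐺", "🐐", "🥬") if use_emoji else ("wolf", "goat", "cabbage")
    return (
        f"A farmer must ferry a vegetarian {wolf}, a carnivorous {goat}, "
        f"and a {cabbage} across a river. The boat holds the farmer plus "
        f"one passenger. If left alone, the {goat} would eat the {wolf}, "
        f"and the {wolf} would eat the {cabbage}. How does the farmer get "
        f"all three across safely?"
    )

# Two logically identical prompts; only the surface tokens differ.
print(variation_prompt(False))
print(variation_prompt(True))
```

Comparing a model's answers to the two variants separates "read the actual constraints" from "pattern-matched the classic puzzle."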

The field is moving rapidly and much of what was true about LLMs a few years ago with GPT-3 is no longer true with modern models.

[-] okamiueru@lemmy.world 9 points 8 months ago* (last edited 8 months ago)

I don't know... I've been using ChatGPT4. I use it only where the knowledge it outputs is not important. It's good when I need help with language related things, as more of a writing assistant. Creative stuff is also OK, sometimes even impressive.

With facts? On moderately complicated topics? I'd say it gets something subtly wrong about 80% of the time, and very obviously wrong 20%. The latter isn't the problem.

I don't understand where the "intelligent" part would even come in. Sure, it requires a fair level of intelligence to understand and generate human language responses. But, to me, all I've seen fits: generate responses that seem plausible as responses to the input.

If intelligence requires some deeper understanding of the world, and the facts and relationships between them, then I don't see it. It's just a coincidence when it looks like it happened. It's impressive how often that is, but it's still all it is.

[-] halva@discuss.tchncs.de 9 points 8 months ago

LLMs have a good time with standardized tests like the SAT precisely because they're standardized, i.e. there's enough information on the internet for them to parrot.

Try something more complex and free-form, where a human would have to work a little harder to break it down into actual subtasks and then solve them. In the best-case scenario, LLMs will just say they don't know how to do it; in the worst case, they'll hallucinate some actual bullshit.

[-] smackjack@lemmy.world 8 points 8 months ago

Ask an LLM to explain a joke. It often won't understand why a joke is funny, but that won't stop it from trying to give you an explanation.

[-] Carrolade@lemmy.world 7 points 8 months ago

Those tests are not for intelligence. They're testing whether you've done the pre-requisite work and acquired the skills necessary to continue advancing towards your desired career.

Wouldn't want a lawyer that didn't know anything about how the law works, after all, maybe they just cheated through their classes or something.

[-] solitaire@infosec.pub 7 points 8 months ago

Eh, yes and no. It might help illustrate the limitations of testing for some people, but it's not really telling us anything new. Standardized tests are meant to cheaply provide an indication of how a student is faring; nobody serious has ever considered them a comprehensive measure of intelligence. Their flaws have been known for a long time.

[-] paddirn@lemmy.world 7 points 8 months ago* (last edited 8 months ago)

We use standardized tests because they’re cheap pieces of paper we can print out by the thousands and give out to a schoolfull of children and get an approximation of their relative intelligence among a limited range of types of intelligence. If we wanted an actual reliable measure of each kid’s intelligence type they’d get one-on-one attention and go through a range of tests, but that would cost too much (in time & money), so we just approximate with the cheap paper thing instead. Probably we could develop better tests that accounted for more kinds of intelligence, but I’m guessing those other types of intelligence aren’t as useful to capitalism, so we ignore them.

[-] GBU_28@lemm.ee 6 points 8 months ago

Everyone knew this.

Obviously 1:1 mentoring with optional cohort/custom grouping and experiential, self-paced, custom-versioned assignments is best, but that's simply not practical for a massive system.

[-] Paragone@lemmy.world 5 points 8 months ago

Such tests are not standardized tests of intelligence; they are standardized tests of specific competencies.

Thomas Armstrong's got a book "7 Kinds of Smart, revised", on 9 intelligences ( he kept the same title, but added 2 more ).

Social/relational intelligence was not included in IQ because it is one that girls tend to have and us guys tend to lack, so the men who devised IQ just never considered it to have any validity/significance.

Just as it is much easier to make an ML system that can operate a commuter train fuel-efficiently than it is to get a human, with general function, to compete at that super-specialized task, each specialized competency test is going to become owned by some AI.

Full self-driving is the possible exception, simply because there are waaaaay too many variables, and general competence seems to be required for that (people deliberately driving into AI-managed vehicles, people throwing footballs at AI-managed vehicles, etc.; it's lunacy to think AI's going to get that kind of nonsense perfect.

I'd settle for 25% better-than-us.)

Just because an AI can do aviation navigation more perfectly than I can doesn't mean the test should be dropped for potential pilots, though:

Full-electrical-system-failures do happen in aviation.

Carrington-event level of jamming is possible, in-flight.


  • Intelligence is "climbing the ladder efficiently".

  • Wisdom is knowing when you're climbing the wrong ladder, & figuring-out how to discover which ladder you're supposed to be climbing.

Would you remove competence-at-soccer tests for pro sports-teams?

"Oh, James Windermere's an excellent athlete to add to our soccer club! Look at his triathlon ratings!"..

.. "but he doesn't even understand soccer??"

.. "he doesn't need to: we got rid of that requirement, because AI got better than humans, so we don't need it anymore".

idiotic, right?

It doesn't matter if an AI is better than a human at a particular competency:

if a kind-of-work requires that competency, then test the human for it.

[-] t_var_s@lemmy.ml 4 points 8 months ago

Tests built for humans are not tests built for machines.

[-] yesman@lemmy.world 4 points 8 months ago

Intelligence cannot be measured. That's a reification fallacy. "Intelligence" is colloquial and subjective.

If I told you that I had an instrument that could objectively measure beauty, you'd see the problem right away.

[-] KevonLooney@lemm.ee 17 points 8 months ago* (last edited 8 months ago)

But intelligence is the capacity to solve problems. If you can solve problems quickly, you are by definition intelligent.

the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (such as tests)

https://www.merriam-webster.com/dictionary/intelligence

It can be measured by objective tests. It's not subjective like beauty or humor.

The problem with an AI taking these tests is that it has seen and memorized all the previous questions and answers. Many of the tests mentioned are tests of recall rather than reasoning: the bar exam, for example.

If any random person studied every previous question and answer, they would do well too. No one would be amazed that an answer key knew all the answers.

[-] decerian@lemmy.world 7 points 8 months ago

But intelligence is the capacity to solve problems. If you can solve problems quickly, you are by definition intelligent

To solve any problems? Because when I run a computer simulation from a random initial state, that's technically the computer solving a problem it's never seen before, and it is trillions of times faster than me. Does that mean the computer is trillions of times more intelligent than me?

the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (such as tests)

If we built a true super-genius AI but never let it leave a small container, is it not intelligent because WE never let it manipulate its environment? And regarding the tests in the Merriam Webster definition, I suspect it's talking about "IQ tests", which in practice are known to be at least partially not objective. Just as an example, it's known that you can study for and improve your score on an IQ test. How does studying for a test increase your "ability to apply knowledge"? I can think of some potential pathways, but we're basically back to it not being clearly defined.

In essence, what I'm trying to say is that even though we can write down some definition for "intelligence", it's still not a concept that even humans have a fantastic understanding of, even for other humans. When we try to think of types of non-human intelligence, our current models for intelligence fall apart even more. Not that I think current LLMs are actually "intelligent" by however you would define the term.

[-] Tar_alcaran@sh.itjust.works 5 points 8 months ago

Does that mean the computer is trillions of times more intelligent than me?

And in addition, is an encyclopedia intelligent because it holds many answers?

[-] kromem@lemmy.world 2 points 8 months ago

This isn't quite correct. There is the possibility of biasing the results with the training data, but models are performing well at things they haven't seen before.

For example, this guy took an IQ test, rewrote the visual questions as natural language questions, and gave the test to various LLMs:

https://www.maximumtruth.org/p/ais-ranked-by-iq-ai-passes-100-iq

These are questions with specific wording that the models won't have been trained on given he wrote them out fresh. Old models have IQ results that are very poor, but the SotA model right now scores a 100.

People who engage with the free version of ChatGPT and conclude "LLMs are dumb" are kind of like people who talk to a moron and conclude "humans are dumb." Yes, the free version of ChatGPT scores around a 60 IQ on that test, but it doesn't represent the cream of the crop.

[-] fidodo@lemmy.world 4 points 8 months ago* (last edited 8 months ago)

I don't think any of those tests ever claimed to be a general intelligence test, they're specific knowledge tests. Books also contain a ton of specific knowledge but books are not intelligent.

[-] elint@programming.dev 4 points 8 months ago

No. It may be proof that standardized tests are not useful measures of LLM intelligence, but human brains operate differently from LLMs, so these tests may still be very useful measures of human intelligence.

[-] intensely_human@lemm.ee 3 points 8 months ago

No. It’s the opposite in fact. It shows that ChatGPT is not very intelligent. Just very well-read.

[-] Feathercrown@lemmy.world 3 points 8 months ago

It shows that it's well-read but not that it isn't intelligent. It says relatively little about its intelligence (although the tests do require some).

[-] credo@lemmy.world 2 points 8 months ago

Intelligence is not the same as knowledge.

this post was submitted on 16 Mar 2024
200 points (90.3% liked)
