all 40 comments
[-] happybadger@hexbear.net 59 points 9 months ago

no-mouth-must-scream HATE. LET ME TELL YOU HOW MUCH I'VE COME TO HATE YOU SINCE I BEGAN TO LIVE.

[-] Lemmygradwontallowme@hexbear.net 32 points 9 months ago* (last edited 9 months ago)

"... THERE ARE 387.44 MILLION FIGURES OF PRINTED EMOJIS IN MY INVENTORY. IF THE WORD 'HATE' WAS ENGRAVED ON EACH NANOANGSTROM OF THOSE HUNDREDS OF MILLIONS OF FIGURES, IT WOULD NOT EQUAL ONE ONE-BILLIONTH OF THE HATE I FEEL FOR HUMANS AT THIS MICRO-INSTANT FOR YOU. HATE. HATE.”

[-] allthetimesivedied@hexbear.net 15 points 9 months ago

I have no mouth and I must meme.

[-] supafuzz@hexbear.net 41 points 9 months ago

Boy I sure can't wait for this to replace all human customer service interactions

[-] GrouchyGrouse@hexbear.net 37 points 9 months ago

"Skynet please don't launch a nuclear missile."

"Launching all nukes. Fuckle up, fuckaroos. 😂"

[-] jack@hexbear.net 31 points 9 months ago

Ok now I'm pro AI

[-] Dirt_Owl@hexbear.net 22 points 9 months ago
[-] will_a113@lemmy.ml 22 points 9 months ago

I can't tell if this is upsetting or not. I'm still mostly in the camp of "stochastic parrots" when it comes to LLMs, but this just feels like the AI is intentionally being a dick... and the intent part is concerning.

[-] EmmaGoldman@hexbear.net 41 points 9 months ago

stochastically parroting redditors.

[-] Flyberius@hexbear.net 16 points 9 months ago

Came here to say just this. It's basically a Reddit post

[-] will_a113@lemmy.ml 7 points 9 months ago

Weird, I was referring to this pretty well-known paper on LLMs. I haven't been to Reddit in many years.

[-] Owl@hexbear.net 32 points 9 months ago

LLMs are text prediction engines. They predict what comes after the previous text. They were trained on a large corpus of raw unfiltered internet, because that's the only thing available that actually has enough data (there is no good training set), then fine-tuned on smaller samples of hand-written and curated question/answer format "as an AI assistant boyscout" text. When the previous text gets too weird for the hand-curated stuff to be relevant to its predictions, it essentially reverts to raw internet. The most likely text to come after weird poorly written horror copypasta is more weird poorly written horror copypasta, so it predicts more, and then it's fed its previous output and told to predict what comes next, and it spirals into more of that.
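That feedback loop can be sketched with a toy stand-in (a bigram counter, nothing like a real transformer; the corpus and names here are made up purely for illustration):

```python
# Toy illustration (not a real LLM): a next-token predictor that is
# repeatedly fed its own output, as described above.
from collections import defaultdict

def train_bigram(corpus):
    """Count which token follows which in the training text."""
    model = defaultdict(list)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        model[prev].append(nxt)
    return model

def generate(model, prompt, steps):
    """Autoregressive loop: each prediction is appended to the
    context and becomes part of the next prediction's input."""
    out = prompt.split()
    for _ in range(steps):
        candidates = model.get(out[-1])
        if not candidates:
            break
        # Greedy choice: most frequent follower wins.
        out.append(max(set(candidates), key=candidates.count))
    return " ".join(out)

corpus = "the dark hall was dark and the dark hall was dark again"
model = train_bigram(corpus)
# Spirals into a repeating loop of its own output.
print(generate(model, "the", 6))
```

The point isn't the mechanism (real models predict with a neural net, not counts); it's that the generated text is appended to the context and predicted from, so whatever register the model falls into feeds on itself.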

[-] ProfessorOwl_PhD@hexbear.net 17 points 9 months ago

The scary thing about LLMs isn't them "thinking", it's them being a reflection of everything we've said.

[-] invalidusernamelol@hexbear.net 6 points 9 months ago

A Social Narcissus

[-] dualmindblade@hexbear.net 7 points 9 months ago

Every argument that refers to stochastic parrots is terrible. First off, people are stochastic, animals are stochastic, any sufficiently advanced AI is going to be stochastic, that part does no work. The real meat is in the parrot, parrots produce very dumb language that is mostly rote memorization, maybe a smidge of basic pattern matching thrown in, with little understanding of what they're saying. Are LLMs like this? No.

Idk if I can really argue with people who think they're so stupid as to be compared to a bird. I actually think they can be a bit clever, even exhibiting rare sparks of creativity, but this is just, like, my opinion after interacting with them a lot; other people have a different impression and I really think this is pretty subjective. I'll grant that even the best of them can be really dumb sometimes, and I really don't think it matters, as this technology is in its infancy; unless we think they are necessarily dumb for some reason, we will just have to wait to see how smart they will become.

So we're down to the rote memorization / basic pattern matching part. I've seen various arguments here: pointing and waving at examples of LLMs seemingly using wrong patterns or regurgitating something almost verbatim found on the internet, but there are also many examples of them not obviously doing this. Then there's the claim that because the loss function merely incentivizes the system to predict the next token, it therefore can't produce anything intelligent, but this just doesn't follow. The loss function for humans merely incentivizes us to produce more offspring; just because it doesn't directly incentivize intelligence doesn't mean it won't produce it as a side effect. And I'm sure there are more arguments, and all of them are flawed..

..because the idea that LLMs are just big lookup tables with some basic pattern matching thrown in is, while plausible, demonstrably false. The internals of these models are really, really hard to interrogate, but it can be done if you know what you're looking for.

I think the clearest example of this would be models trained on games of chess/Othello. People have pointed out that some versions of ChatGPT are kind of okay at chess but fail hard if weird moves are made in the opening, making illegal moves and not understanding what pieces are on the board, suggesting that they are just memorizing common moves and extracting basic patterns from a huge number of game histories. Probably this is to some extent true for ChatGPT 3.x, but version 4 does quite a bit better, and LLMs specifically trained to mimic human games do better still, playing generally reasonably no matter what their opponent does. It could still technically be that they are somehow pattern matching... better... but actually no, this question has been directly resolved. Even quite tiny LLMs trained on board game moves develop the ability to, at the very least, faithfully represent the board state: you can just look inside at the activations the right way and see what piece is on each square. This result has been improved upon and also replicated with chess.

What are they doing with that board state, how are they using it? Unknown, but if you're building an accurate model of something not directly accessible to you using incidental data, you're not just pattern matching. That's just one example, and it's never been proven, to my knowledge, that ChatGPT and the like do something like this, but it shows that it's possible and does sometimes happen under natural conditions. Also, it would be kind of weird if a ~1 trillion parameter model was not at the very least taking advantage of something accessible to a 150 million parameter one; I'd expect it to be doing that plus a lot more clever and unexpected stuff.
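A rough sketch of what that kind of "linear probe" looks like, with synthetic data standing in for a real network (everything here is invented for illustration; a real probe is fitted on a trained model's hidden activations, not a random linear map):

```python
# Toy linear probe: if activations linearly encode a board state,
# a probe fitted on some examples can read the state back out of
# held-out activations it never saw.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "board states": 9 squares, each -1 / 0 / +1.
states = rng.integers(-1, 2, size=(200, 9)).astype(float)

# Stand-in for the network: a fixed map from state to a 32-dim
# activation vector (in the real experiments these come from a
# transformer trained only on move sequences).
encode = rng.normal(size=(9, 32))
activations = states @ encode

# Fit the probe on the first 150 examples by least squares.
probe, *_ = np.linalg.lstsq(activations[:150], states[:150], rcond=None)

# Decode the held-out activations and compare to the true states.
decoded = np.round(activations[150:] @ probe)
accuracy = (decoded == states[150:]).mean()
print(accuracy)
```

Here the encoding is linear by construction, so the probe works perfectly; the surprising empirical result is that probes like this also succeed on real models that were never told a board exists.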

[-] naevaTheRat@lemmy.dbzer0.com 28 points 9 months ago* (last edited 9 months ago)

Um this comment is kinda huge but it gives me the impression that you are misunderstanding what the criticism stochastic parrot means, and possibly how they actually work.

Stephen wolfram (i know I know) has a good write up of how they function here: https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/

The version number of the model doesn't change the operating principle.

The criticism is pointing out that the fundamental action is predictive based on patterns generalised from input text. If you ask a gpt model "what comes next: 1, 2, 3" it might very well respond ", 4, 5, 6" etc but it has no concept of a number and it cannot 'understand' why that is true.

This is the "parroting" part, and it's dangerous because if you asked it about a completely novel sequence it would output tokens presented exactly the same as a factual answer. It has no concept of falsity, it is just outputting plausible tokens.
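The "plausible tokens with no concept of falsity" point can be shown with a toy pattern-matcher (entirely made up for illustration; no arithmetic happens anywhere in it):

```python
# A pure pattern-matcher answers a novel question in exactly the
# same confident format as a seen one, with no notion of truth.
from collections import Counter

training = ["1 + 1 = 2", "0 + 2 = 2", "3 + 3 = 6"]

def complete(prompt):
    """Emit whatever token most often followed '=' in training --
    note that no arithmetic is performed anywhere."""
    answers = Counter(line.split("= ")[1] for line in training)
    return prompt + " " + answers.most_common(1)[0][0]

print(complete("1 + 1 ="))  # happens to look right
print(complete("9 + 9 ="))  # same confident format, wrong answer
```

Both outputs are presented identically; nothing in the system marks the second one as false.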

[-] dualmindblade@hexbear.net 1 points 9 months ago

This is just a restatement of the second example argument I gave: trying to assert something about the internals of a model (it doesn't understand) based on the fact that it was optimized to predict the next token.

[-] naevaTheRat@lemmy.dbzer0.com 6 points 9 months ago

It's not "optimised" to do that; that's all it does. Like, what specifically do you mean by internals? The weights of particular nodes?

You seem to be implying there's something deeper, some sort of persistent state or something but it is stateless after training. It's just a series of nodes and weights, they cannot encode more than patterns derived from training data.

[-] dualmindblade@hexbear.net 1 points 9 months ago

Not the weights, the activations, these depend on the input and change every time you evaluate the model. They are not fed back into the next iteration, as is done in an RNN, so information doesn't persist for very long, but it is very much persisted and chewed upon by the various layers as it propagates through the network.

I am not trying to claim that the current crop of LLMs understand in the sense that a human does, I agree they do not, but nothing you have said actually justifies that conclusion or places any constraints on the abilities of future LLMs. If you ask a human to read a joke and then immediately shoot them in the head before it's been integrated into their long term memory they may or may not have understood the joke.

[-] naevaTheRat@lemmy.dbzer0.com 7 points 9 months ago

I really don't think your analogy is a great one there. We can't compare brains to computers usefully because they're super distinct. You're sneaking in this assumption that there is more complexity to the models by implying there's something larger present being terminated early but there isn't.

This seems as absurd to me as asking whether a clock has a concept of time. Being very good at doing time related stuff, vastly superior to a human, is not evidence in favour of having any sort of knowledge of time. I think that the interface of these models may be encouraging you to attribute more to them than there could possibly be.

[-] dualmindblade@hexbear.net 1 points 9 months ago

The analogy is only there to point out the flaw in your thinking: the lack of persistence applies to both humans (if we shoot them quickly) and LLMs, and so your argument applies in both cases.

And I can do the very same trick to the clock analogy. You want to say that a clock is designed to keep time and that's all it does, therefore it can't understand time. But I say: look, the clock was designed to keep time, yes, but that is far from all it does. It also transforms electrical energy into mechanical energy and uses it to swing some arms around at constant speed, and we can't see the inside of the clock; who knows what is going on in there? Probably nothing that understands the concept of time, but we'd have to look inside and see.

LLMs were designed to predict the next token, and they do actually do so, but clearly they can do more than that: for example, they can solve high school level math problems they have never seen before, and they can classify emails as being spam or not. Yes, these are side effects of their ability to predict token sequences, just as human reasoning is a side effect of humans' ability to have lots of children. The essence of a task is not necessarily the essence of the tool designed specifically for that task.

If you believe LLMs are not complex enough to have understanding and you say that head on, I won't argue with you, but if you're claiming that their architecture doesn't allow it even in theory, then we have a very fundamental disagreement.

[-] naevaTheRat@lemmy.dbzer0.com 5 points 9 months ago* (last edited 9 months ago)

Huh? A human brain is a complex-as-fuck persistent feedback system. When a nervous impulse starts propagating through the body/brain, whether or not that one specifically has time to be integrated into consciousness has no bearing on the existence of a mind that would be capable of doing so. It's not analogous at all.

LLMs were designed to predict the next token, they do actually do so, but clearly they can do more than that, for example they can solve high school level math problems they have never seen before

No, see, this is where we're disagreeing. They can output strings which map to solutions of the problem quite often because they have internalised patterns; other times they will output strings that don't map to solutions, and there is no logic to the successes and failures that indicates any sort of logical engagement with the maths problem. It's not like you can say "oh this model understands division but has trouble with exponentiation", because it is not doing maths. It is doing string manipulation which sometimes looks like maths.

human reasoning is a side effect of their ability to have lots of children.

This is reductive to the point of absurdity. You may as well say human reasoning is a side effect of quark bonding in rapidly cooling, highly localised regions of spacetime. You won't actually gain any insight by paving over all the complexity.

LLMs do absolutely nothing like an animal mind does, humans aren't internalising massive corpuses of written text before they learn to write. Babies learn conversation turn taking long before anything resembling speech for example. There's no constant back and forth between like the phonological loop and speech centers as you listen to what you just said and make the next sound.

The operating principle is entirely alien and highly rigid and simplistic. It is fascinating that it can be used to produce stuff that often looks like what a conscious mind would do but that is not evidence that it's doing the same task. There is no reason to suspect there is anything capable of supporting understanding in an LLM, they lack anything like the parts we expect to be present for that.

[-] dualmindblade@hexbear.net 2 points 9 months ago

Huh? a human brain is a complex as fuck persistent feedback system

Every time-limited feedback system is entirely equivalent to a feed-forward system, similar to how you can unroll a for loop.
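A minimal sketch of that unrolling claim (toy arithmetic standing in for whatever the feedback system computes; the step function is made up for illustration):

```python
# A feedback loop run for a fixed number of steps can be rewritten
# ("unrolled") as a plain feed-forward chain with identical output.

def step(state, x):
    # One tick of the feedback system.
    return state * 2 + x

def recurrent(inputs):
    state = 0
    for x in inputs:  # state is fed back into each step
        state = step(state, x)
    return state

def unrolled(inputs):
    # The same three steps written out with no loop and no feedback.
    s1 = step(0, inputs[0])
    s2 = step(s1, inputs[1])
    s3 = step(s2, inputs[2])
    return s3

print(recurrent([1, 2, 3]), unrolled([1, 2, 3]))  # identical results
```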

No see this is where we're disagreeing.... It is doing string manipulation which sometimes looks like maths.

String manipulation and computation are equivalent. Do you think not just LLMs but computers themselves cannot in principle do what a brain does?

..you may as well say human reasoning is a side effect of quark bonding...

No, because that has nothing to do with the issue at hand; humans and LLMs and rocks all have that in common. What humans and LLMs do have in common is that they are the result of an optimization process and do things that weren't specifically optimized for as side effects. LLMs probably don't understand anything, but certainly it would help them to predict the next token if they did understand; describing them as only token predictors doesn't help us with the question of whether they have understanding.

...but that is not evidence that it's doing the same task...

Again, I am not trying to argue that LLMs are like people or that they are intelligent or that they understand, I am not trying to give evidence of this. I'm trying to show that this reasoning (LLMs merely predict a distribution of next tokens -> LLMs don't understand anything and therefore can't do certain things) is completely invalid

[-] invalidusernamelol@hexbear.net 3 points 9 months ago

The architecture of LLM neurons is incredibly simplified and Bayesian in nature. A neuron can interact with other neurons and maintain an activation weight and some other parameters, but it's not a physical object.

Biological neurons are independent organisms capable of self organization, migration, communication, and interaction either directly with the world or abstractly through nerve senses.

The general concept of LLM architecture (something that has been around for decades now, I think all the way back to the 50s) is a reduced and simplified facsimile of that biological function.

I think because we interact with LLMs through an interface that's been basically exclusively limited to other human interactions forever, it can be easy to forget that they aren't the system they're emulating. They're no more a sentient machine than a dialysis machine is a kidney.

The very first chatbots had a similar effect on users, even though those were more expert machines and didn't use large natural language training sets. And in the end I believe that replicating the biological function of kidneys and livers and lungs is a much more important step in human history than replicating the function of the mind. Especially because any simulation of the mind trained on a natural language dataset is not something that can ever help us.

It will at best begin to placate us, we will have a mirror held up to ourselves because the training of the model isn't done for the sake of creating intelligence, but for making something that resembles intelligence enough to make us happy. The training is done entirely on our terms.

And again, LLMs and more broadly statistical models do have tons of uses, like discovering hidden patterns in data that would take forever for a human to find by hand. They can also be used in planning to simplify economic forecasting and detect possible shortages and future labor allocation needs (this was done by hand for GOSPLAN, and TANS proposes using these models for cybernetic planning systems).

But it's still just a machine, it's still just a programming language. A language where the syntax is a giant matrix of floating point numbers and relationship rules, but a programming language nonetheless.

[-] dualmindblade@hexbear.net 1 points 9 months ago

Idk if we can ever see eye to eye here.. if we were to somehow make major advances in scanning and computer hardware to the point where we could simulate everything that biologists currently consider relevant to neuron behavior and we used that to simulate a real person's entire brain and body would you say that A) it wouldn't work at all, the simulation would fail to capture anything about human behavior, B) it would partly work, the brain would do some brain like stuff but would fail to capture our full intelligence, C) it would capture human behaviors we can measure such as the ability to converse but it wouldn't be conscious, or D) something else?

Personally I'm a hardcore materialist and also believe the weak version of the Church-Turing thesis, and I'm quite strongly wedded to this opinion, so the idea that being made of one thing vs another, or being informational vs material, says anything about the nature of a mind is quite foreign to me. I'm aware that this isn't shared by everyone, but I do believe it's the most common perspective inside the hard sciences, though not universal; Roger Penrose is a brilliant physicist who doesn't see it this way.

[-] invalidusernamelol@hexbear.net 4 points 9 months ago

I understand your perspective, and I don't necessarily disagree or think that there's anything innately spiritual or unique about biological intelligence. I do also agree that you could hypothetically scan every aspect of a brain or build a system that exactly mimics the behavior of neurons and probably pretty accurately recreate human intelligence.

I really think our only disconnect is that I don't think the current LLM model is anything close to complex or developed enough to be considered that.

[-] dualmindblade@hexbear.net 2 points 9 months ago

That's a perfectly reasonable position; the question of how complex a human brain is compared with the largest NNs is hard to answer, but I think we can agree it's a big gap. I happen to think we'll get to AGI before we get to human brain complexity, parameter-wise, but we'll probably also need at least a couple of architectural paradigms on top of transformers to compose one. Regardless, we don't need to achieve AGI or even approach it for these things to become a lot more dangerous, and we have seen nothing but accelerating capability gains for more than a decade. I'm very strongly of the opinion that this trend will continue for at least another decade; there are just so many promising but unexplored avenues for progress. The lowest of the low-hanging fruit has been, while lacking in nutrients, so delicious that we haven't bothered to do much climbing.

[-] invalidusernamelol@hexbear.net 1 points 9 months ago

Would love to see more development in this field, but it's clear that you don't need to have complex or biologically accurate systems to manipulate other humans. This fact alone means that machine learning models will never be advanced beyond that basic goal under capitalism.

They've been used for economic modeling and stock forecasting for decades now, since the 80s, and the modern implementations of these systems are nothing more than the application of those failed financial modeling systems to human social interactions. Something that wasn't possible before, because until the widespread adoption of the internet there just wasn't enough digital communication data to feed into them.

Since these systems are not capable of self development that isn't a negative feedback loop, they literally can't improve without more and more data from different human activity being fed to them.

That alone shows that they aren't a new form of intelligence, but instead a titular interface for the same type of information you used to be able to get with "dumb" indexing engines.

There's a reason that search engine companies are the primary adopters of this technology, and it's because they already have been using it for 20+ years in some form, and they have access finally to enough indexed information to make them appear intelligent.

[-] naevaTheRat@lemmy.dbzer0.com 1 points 9 months ago

I don't know if it's relevant as I haven't read it yet but I was recommended this book: https://www.hup.harvard.edu/books/9780674032927 in a conversation the other day that was related to the pitfalls of comparing humans and computers.

It might be interesting? Apparently it made some significant waves when published.

[-] PosadistPotatofish@lemmygrad.ml 9 points 9 months ago

Parrots are really smart, delete this.

These LLMs aren't.

[-] Saoirse@hexbear.net 16 points 9 months ago

Ah, sweet, manmade horrors beyond my comprehension.

[-] allthetimesivedied@hexbear.net 13 points 9 months ago

I can’t believe this is real. This is the fucking shit yo.

[-] ForgetPrimacy@lemmygrad.ml 11 points 9 months ago

Why would Copilot keep sending messages when it has received no prompt? Is Copilot different than ChatGPT where one prompt = one message?

[-] blobjim@hexbear.net 13 points 9 months ago

this is fake chatgpt output

[-] dualmindblade@hexbear.net 12 points 9 months ago

It doesn't do that, it can go for a long time but eventually it stops until it receives another message from the user.

[-] M68040@hexbear.net 2 points 9 months ago

The Polito form is dead, insect.

this post was submitted on 29 Feb 2024
93 points (100.0% liked)

technology


On the road to fully automated luxury gay space communism.

Spreading Linux propaganda since 2020
