I think you're getting hung up on the wrong details
First of all, they consist of words AND weights, and that's a very important distinction: it's literally the difference between a raw pile of text and a model of how that text hangs together. They don't know what the words mean, but they "know" the shape of the information space, and which shapes are more or less valid.
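For a rough sense of what "words and weights" means, here's a toy sketch (the vectors and numbers are invented, not from any real model): each token maps to a learned vector, and the geometry of those vectors is the "shape" being talked about.

```python
import numpy as np

# Toy illustration (not a real model): each token gets a learned vector,
# and the geometry of those vectors is the "shape" of the information space.
embeddings = {
    "paris":  np.array([0.9, 0.8, 0.1]),
    "france": np.array([0.8, 0.9, 0.2]),
    "llama":  np.array([0.1, 0.2, 0.9]),
}

def similarity(a, b):
    """Cosine similarity: how close two tokens sit in the space."""
    va, vb = embeddings[a], embeddings[b]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

print(similarity("paris", "france"))  # high: the shapes line up
print(similarity("paris", "llama"))   # low: unrelated directions
```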
Now as for databases - databases are basically spreadsheets. They have pieces of information in explicitly shaped groups, and they usually have relationships between them... Ultimately, it's basically text.
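To make that contrast concrete, here's a minimal sketch of what a database does - the table and column names are made up, but the point is that every fact is an explicit row you can point at:

```python
import sqlite3

# Minimal illustration of "explicitly shaped groups with relationships":
# every fact is a literal row, nothing is inferred.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE cities (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("CREATE TABLE landmarks (name TEXT, city_id INTEGER REFERENCES cities(id))")
db.execute("INSERT INTO cities VALUES (1, 'Paris')")
db.execute("INSERT INTO landmarks VALUES ('Eiffel Tower', 1)")

row = db.execute(
    "SELECT cities.name FROM landmarks JOIN cities ON city_id = cities.id "
    "WHERE landmarks.name = 'Eiffel Tower'"
).fetchone()
print(row[0])  # 'Paris' -- either the row is there or it isn't
```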
Our minds are not at all like a database. Memories are engrams - patterns in neurons that describe a concept. They're a mix between information space and meat space. The engram itself encodes information in a way that allows us to process it, and its shape both links to and describes the location of other memories. But it's more than that - they're also related by their shape in information space.
You can learn the Eiffel tower is in Paris one day in class, you can see a picture of it, and you can learn it was built for the 1889 world fair. You can visit it. If asked about it a decade later, you probably won't remember the class where you learned about it. If you're asked what it's made of, you're going to say metal, even if you never explicitly learned that fact. If you forget it was built for the world fair but are asked why it was built, you might say it was for a competition or to show off. If you're asked how old it is, you might say a century or so despite having entirely forgotten the date.
Our memories are not at all like a database: you can lose the specifics and keep the concepts, or you can forget what the Eiffel tower even is but still remember the words "it was built in 1889 for the world fair".
You can forget a phone number but remember the feeling of typing it on a phone, or forget someone entirely but suddenly remember them when they tell you about their weird hobby. We encode memories like neural networks do, but in a far more complicated way - we have different types of memory and we store things differently from individual to individual. Our knowledge and cognition are intertwined: you can take away a person's autobiographical memories, but you can't take away their understanding of what a computer is without destroying their ability to function.
Between humans and LLMs, LLMs are the ones closer to databases - they at least remember explicit tokens and the links between them. But they're way more like us than like a database. A database either stores a piece of information or it doesn't; it's accessible or it isn't; it's intact or it's corrupted. Neural networks and humans, on the other hand, can remember something but get the specifics wrong, can fail to recall a fact when asked one way but recall it when asked another, and can fabricate memories or facts outright from similar patterns through suggestion.
Humans and LLMs encode information in their information-processing networks - and it's not even by design. It's an emergent property of a network shaped by the ability to process and create information, aka intelligence (a concept now understood to be distinct from sentience). We do it very differently in the details, but in broadly similar ways; LLMs just start from tokens and do it in a far less sophisticated way.
Everyone here is busy describing the difference between memories and databases to me as if I don't know what it is.
Our memories are not a database. But they are like a database in one respect: databases contain information, and so do our memories. Our consciousness is informed by and can consult our memories.
LLMs are not like memories, or a database. They don't contain information. An LLM is literally a mathematical formula: put words in one end, and words come out the other. The only difference between a statement like "always return the word 'Paris' in response to any query" and what LLMs do is complexity, not kind. Whereas I think we can agree humans are something else entirely, right?

The fact that they use neural networks does not make them similar to human cognition or consciousness or memory. (Separately, neural networks, while inspired by biological neural networks, are categorically different from biological neural networks, and there are no "emergent properties" in that network that make it anything other than a sophisticated way of doing math.)
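To make the "complexity, not kind" point concrete, here's a toy sketch (the probabilities are invented): a rule that always returns "Paris" versus a caricature of what an LLM does, sampling a word from learned weights.

```python
import random

def constant_answer(query):
    # The degenerate case: ignore the input entirely.
    return "Paris"

def weighted_answer(query):
    # A caricature of what an LLM does: pick the next word from a learned
    # probability distribution conditioned on the query. These numbers
    # are made up for illustration.
    next_word_probs = {"Paris": 0.92, "France": 0.05, "Lyon": 0.03}
    words, probs = zip(*next_word_probs.items())
    return random.choices(words, weights=probs)[0]

print(constant_answer("What is the capital of France?"))
print(weighted_answer("What is the capital of France?"))
```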
So... yeah, LLMs are nothing like us, unless you believe humans are deterministic machines with no inner thought processes and no consciousness.
Ok, so here's the misunderstanding - neural networks absolutely, 100% store information. You can download Alpaca right now and ask it about Paris, or llamas, or who invented the concept of the neural network. It will give you factual information embedded in the weights - there's nowhere else the information could be.
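If you want to check this yourself, a rough sketch using the Hugging Face transformers library looks something like this - the model name here is just a stand-in, and it assumes the weights are available locally or downloadable:

```python
# Rough sketch of "ask a downloaded model about Paris"; the model name is
# only an example, any local causal LM works the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for alpaca, llama, etc.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "The Eiffel Tower is located in the city of"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# Whatever continuation you get comes from nowhere but the weights on disk.
```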
People probably think you don't understand databases because it seems self-evident that neural networks contain information - if they didn't, where would the information come from?
There's no magic involved; you can prove this mathematically. We know how it works and we can visualize the information - we can point to "this number right here is how the model stores where the Eiffel tower is". It's too complex for us to work with right now, but we understand what's going on.
Brains store information the same way, except they're much more complex. Ultimately, the connections between neurons are where the data is stored - there are more layers to it, but it's the same idea.
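Here's a toy illustration of what "the data lives in the connections" means - a made-up two-fact example where an association is stored purely in a weight matrix and recalled from it:

```python
import numpy as np

# Toy illustration: a "fact" stored only in connection weights.
# Landmarks and cities are one-hot codes; the weight matrix W is the
# only place the association lives. All names/encodings are invented.
landmarks = {"eiffel_tower": 0, "colosseum": 1}
cities    = {"paris": 0, "rome": 1}

def one_hot(index, size):
    v = np.zeros(size)
    v[index] = 1.0
    return v

# Hebbian-style storage: sum of outer products of (city, landmark) pairs.
W = np.zeros((len(cities), len(landmarks)))
for landmark, city in [("eiffel_tower", "paris"), ("colosseum", "rome")]:
    W += np.outer(one_hot(cities[city], 2), one_hot(landmarks[landmark], 2))

# Recall: present a landmark, read the city off the weighted connections.
query = one_hot(landmarks["eiffel_tower"], 2)
answer = W @ query
print(list(cities)[int(np.argmax(answer))])  # 'paris'
```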
And emergent properties absolutely are a thing in math. No sentience or understanding required, nothing necessarily to do with life or physics at all - emergent properties arise from complexity.
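A classic purely mathematical example is Conway's Game of Life: three local rules that say nothing about motion, yet a "glider" pattern travels across the grid.

```python
import numpy as np

# Conway's Game of Life: simple local rules, nothing about "motion",
# yet a glider pattern travels diagonally across the grid.
def step(grid):
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    )
    # A cell is alive next step if it has exactly 3 neighbours,
    # or is currently alive with exactly 2.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

grid = np.zeros((8, 8), dtype=int)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:  # a glider
    grid[y, x] = 1

for _ in range(4):  # after 4 steps the glider has shifted one cell diagonally
    grid = step(grid)
print(grid)
```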
You are correct that there is a misunderstanding here. But it is your misunderstanding of neural networks, not mine of memory.
LLMs are mathematical models. An LLM does not know any information about Paris - not in the way humans do, or even in the way Wikipedia does. It knows which words appear in response to questions about Paris. That is not the same thing as knowing anything about Paris. It does not know what Paris is.
You have apparently been misled into believing a word-generation tool contains any information at all other than word weights. Every word it contains is exactly as meaningless to it as every other word.
Brains do not store data in this way. Firstly, neural networks are mathematical approximations of neurons; they are not neurons and do not have the same properties as neurons, even in aggregate. Secondly, brains contain thoughts, memories, and consciousness. Even if those are representable in a vector space similar to an LLM's (a debatable conjecture), the contents of that vector space are as different as newts are from the color purple.
I encourage you to do some more research on this before continuing to discuss it. Ask ChatGPT itself if its neural networks are like human brains; it will tell you categorically no. Just remember it also doesn’t know what it’s talking about. It is reporting word weights from its corpus and is no substitute for actual thought and research.