[-] KevonLooney@lemm.ee 3 points 3 months ago

It's not just a predictive text program; those have been around for decades. That's a common misconception.

As I understand it, it uses statistics from the whole text to create new text. It would be very rare for it to output "cats have feathers" because that phrase almost never appears in the training data: the words "have feathers" rarely follow "cats".

[-] skulblaka@sh.itjust.works 9 points 3 months ago

But the fact remains that it doesn't know what a cat or a feather is. All of this is still based purely on statistical frequency and not at all on actual meanings.

[-] vrighter@discuss.tchncs.de 3 points 3 months ago* (last edited 3 months ago)

And that is exactly how a predictive text algorithm works:

  • some tokens go in

  • they are processed by a deterministic, static statistical model, and a set of probabilities comes out (always the same for the same input: deterministic, remember?).

  • pick the word with the highest probability, append it to your initial string, and start over.

  • if you want variety, add some randomness and don't just always pick the most probable next token.

Coincidentally, this is exactly how LLMs work. An LLM is a big Markov chain, but with a novel lossy compression algorithm applied to its state transition table. The last point is also the reason why anyone who claims they can fix LLM hallucinations is lying.
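The loop in the bullets above can be sketched in a few lines of Python. The transition table and probabilities here are invented toy values standing in for what a real LLM computes with a neural network; the shape of the sampling loop is the point:

```python
import random

# Toy transition table mapping a context to next-token probabilities.
# A real LLM computes such a distribution with a neural network instead
# of storing it, but the generation loop looks the same.
TABLE = {
    ("cats",): {"have": 0.9, "purr": 0.1},
    ("cats", "have"): {"fur": 0.8, "claws": 0.2},
}

def next_probs(context):
    # Deterministic lookup: the same context always yields the same distribution.
    key = tuple(context[-2:])
    return TABLE.get(key) or TABLE[tuple(context[-1:])]

def generate(context, steps, greedy=True):
    out = list(context)
    for _ in range(steps):
        probs = next_probs(out)
        if greedy:
            # Always pick the most probable next token.
            token = max(probs, key=probs.get)
        else:
            # Add variety: sample according to the probabilities instead.
            token = random.choices(list(probs), weights=probs.values())[0]
        out.append(token)
    return out
```

With the greedy setting the output is fully deterministic; only the sampling branch introduces variation.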

[-] CeeBee_Eh@lemmy.world 1 points 3 months ago

> Coincidentally, this is exactly how llms work

Everyone who says this doesn't actually understand how LLMs work.

Multi-vector word embeddings create emergent relationships: new knowledge that doesn't exist in the training dataset.

Computerphile did a good video on this well before the LLM craze.

[-] vrighter@discuss.tchncs.de 1 points 3 months ago* (last edited 3 months ago)

1 - A Markov chain only takes previous tokens as input.

2 - It uses a function (in the mathematical sense, so same input results in same output, completely stateless) to generate a set of probabilities for what the next token might be.

3 - The most probable token is picked, or randomness (temperature) is introduced at this step so that a different token is occasionally chosen.

An LLM's internals, the part that's trained, are literally the function used in step 2. You could implement this function a number of ways, e.g. you could build a huge table and consult it, or you could generate it somehow. You could train a big neural network that takes previous tokens as input and outputs probabilities over tokens. You could then enumerate its outputs for every possible permutation of inputs, and there's your table. That would take too much time and space, so we just run the function on demand instead. Exact same result.

The network can be very smart and notice correlations, but ultimately it computes a (virtual) huge static table. This is a completely deterministic process: a trained NN is still a (huge) mathematical function. So the big network they spend resources training is basically the function used in step 2.

Step 3 is the cause of hallucinations. It's the only nondeterministic part, and it's not part of the LLM itself in any way. No matter how smart the neural network gets, the hallucinations are introduced mainly in step 3. So no, they won't be solving the LLM hallucination problem anytime soon.
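That separation between a deterministic step 2 and a stochastic step 3 can be sketched with the standard softmax-with-temperature formulation. The logits below are invented toy scores, not real model output:

```python
import math, random

def softmax(logits, temperature=1.0):
    # Temperature rescales the logits before normalising: higher values
    # flatten the distribution, and temperature -> 0 approaches greedy argmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Steps 1-2: a stateless function from context to logits.
# (Fixed toy scores here; an LLM computes these deterministically.)
logits = [2.0, 1.0, 0.1]                 # scores for "fur", "claws", "feathers"

# Step 3: the only nondeterministic part -- sampling from the distribution.
probs = softmax(logits, temperature=1.0)
token = random.choices(["fur", "claws", "feathers"], weights=probs)[0]
```

Running the first two steps twice with the same input always yields the same `probs`; only the final `random.choices` call varies between runs.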

[-] barsoap@lemm.ee 2 points 3 months ago* (last edited 3 months ago)

> because that phrase doesn’t ever appear in the training data.

Eh, but LLMs abstract. It has seen " have feathers" and " have fur" quite a lot of times. The problem isn't that LLMs can't reason at all; the problem is that they do employ techniques used in proper reasoning, in particular tracking context throughout the text (self-attention), but lack the techniques necessary for the whole thing, instead relying on confabulation to sound convincing regardless of the BS they spout. That suffices to emulate an Etonian, but that's not a high standard.

[-] FaceDeer@fedia.io 1 points 3 months ago

Workarounds for those sorts of limitations have been developed, though. Chain-of-thought prompting has been around for a while now, and I recall recently seeing an article about a model that had that built right into it; it had been trained to use tags to enclose invisible chunks of its output that would be hidden from the end user but would be used by the AI to work its way through a problem. So if you asked it whether cats had feathers it might respond "Feathers only grow on birds and dinosaurs. Cats are mammals. No, cats don't have feathers." And you'd only see the latter bit. It was a pretty neat approach to improving LLM reasoning.
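The hide-the-reasoning mechanism described above amounts to stripping a tagged span before display. A minimal sketch, assuming a hypothetical `<think>` delimiter (real models each define their own tags):

```python
import re

# Hypothetical raw model output: hidden reasoning inside <think> tags,
# followed by the answer shown to the user.
RAW = ("<think>Feathers only grow on birds and dinosaurs. "
       "Cats are mammals.</think>No, cats don't have feathers.")

def strip_reasoning(text):
    # Remove every hidden chain-of-thought span before showing the user.
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

print(strip_reasoning(RAW))  # -> No, cats don't have feathers.
```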

[-] WalnutLum@lemmy.ml 1 points 3 months ago* (last edited 3 months ago)

This isn't really accurate either. At the moment of generation, an LLM only has context for the input string and the network of text tokens it's been assigned. It pulls from a "pool" of these tokens based on what it's already output and the input context, nothing more.

Most LLMs expose sampling parameters like "top-k" and "top-p": top-k limits the pool to the k most probable next tokens, while top-p keeps the smallest set of tokens whose probabilities sum to at least p, given the previous tokens and the input context. The model then randomly chooses one from that pool based on the temperature setting.

It's why, if you turn these models' temperature settings really high, they output pure nonsense both conceptually and grammatically: the tenuous thread linking the previous tokens' context to the next token has been widened enough that the output loses any semblance of cohesiveness.
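The two truncation schemes can each be sketched in a few lines over a toy distribution (the probabilities below are invented for illustration):

```python
def top_k(probs, k):
    # Keep only the k most probable tokens, then renormalise.
    kept = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    total = sum(p for _, p in kept)
    return {t: p / total for t, p in kept}

def top_p(probs, p):
    # Keep the smallest set of tokens whose cumulative probability >= p,
    # taken in descending order of probability, then renormalise.
    kept, cum = {}, 0.0
    for token, prob in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[token] = prob
        cum += prob
        if cum >= p:
            break
    total = sum(kept.values())
    return {t: prob / total for t, prob in kept.items()}
```

The token actually emitted is then sampled from the truncated, renormalised distribution; raising temperature on top of a wide top-k/top-p pool is what lets very unlikely continuations through.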

[-] lvxferre@mander.xyz -2 points 3 months ago

Your "ackshyually" is missing the point.

this post was submitted on 24 Jul 2024
436 points (97.2% liked)

Technology
