[-] pglpm@lemmy.ca 4 points 1 year ago* (last edited 1 year ago)

I'd like to add one more layer to this great explanation.

Usually, predictions of this kind should be made in two steps:

  1. calculate the conditional probability of the next word (given the data), for all possible candidate words;

  2. choose one word among these candidates.

The choice in step 2. should be determined, in principle, by two factors: (a) the probability of each candidate, and (b) the cost or gain of making the wrong or right choice if that candidate is chosen. There's a trade-off between these two factors. For example, a candidate might have low probability but also be a safe choice, in the sense that no big problems arise if it turns out to be wrong – so it's the best choice. Or a candidate might have high probability, but terrible consequences if it were the wrong choice – so it's better to discard it in favour of something less likely but also less risky.
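To make the trade-off concrete, here's a toy sketch of the expected-utility rule from decision theory. All the candidate words, probabilities, gains and losses below are invented for the example – the point is just that the most probable candidate need not be the best choice:

```python
# Toy decision-theoretic word choice (all numbers invented for illustration).
# Each candidate has a probability of being the correct word, a gain if it
# turns out right, and a loss (negative utility) if it turns out wrong.

candidates = {
    # word: (probability, gain_if_right, loss_if_wrong)
    "aspirin":      (0.70, 1.0, -10.0),  # most likely, but risky if wrong
    "a painkiller": (0.25, 0.8, -0.5),   # less likely, but a safe choice
    "something":    (0.05, 0.1, -0.5),   # vague and unhelpful either way
}

def expected_utility(prob, gain, loss):
    """Expected utility = prob * gain + (1 - prob) * loss."""
    return prob * gain + (1 - prob) * loss

best = max(candidates, key=lambda w: expected_utility(*candidates[w]))
for word, (p, g, l) in candidates.items():
    print(f"{word!r}: expected utility = {expected_utility(p, g, l):.3f}")
print("best choice:", best)
```

Here the most probable word ("aspirin") loses to a less probable but safer one, exactly the situation described above.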

This is all common sense! But it's at the foundation of the theory behind this (Decision Theory).

The proper calculation of steps 1. and 2. together, according to fundamental rules (probability calculus & decision theory), would be enormously expensive. So expensive that something like chatGPT would be impossible: we'd have to wait for centuries (just a guess: could be decades or millennia) to train it, and then to get an answer. This is why Large Language Models make several approximations, which obviously can have serious drawbacks:

  • they use extremely simplified cost/gain figures – in fact, from what I gather, the researchers don't have any clear idea of what they are;

  • they directly combine the simplified cost/gain figures with probabilities;

  • they search for the candidate with the highest gain+probability combination, but stop as soon as they find a relatively high one – at the risk of missing the one that was actually the true maximum.

 

(Sorry if this comment has a lecturing tone – it's not meant to. But I think that the theory behind these algorithms can actually be explained in very common-sense terms, without too much technobabble, as @TheChurn's comment showed.)

[-] pglpm@lemmy.ca 4 points 1 year ago

Fantastic, thank you!

[-] pglpm@lemmy.ca 4 points 1 year ago

> Math requires insight that a language model cannot possess

Amen to that! Good maths & science teachers have struggled for decades (if not centuries) so that students understand what they're doing and don't simply give answers based on some words or symbols they see in the questions [there are also bad teachers who promote exactly that], because on closer inspection such answers always collapse. And now comes chatGPT, which does exactly that – and collapses in the same way – and gets glorified.

Amen to what you say on infographic content as well 😂

[-] pglpm@lemmy.ca 4 points 1 year ago

Funny – note that that website uses DRM-protected content. I have DRM disabled in Firefox, and when I visit that site I get two DRM warnings.

[-] pglpm@lemmy.ca 3 points 1 year ago

Glorious! 🤣 🤣 🤣

[-] pglpm@lemmy.ca 3 points 1 year ago* (last edited 1 year ago)

"One of the two baths is shown in the picture".

Turns out my house had two baths too, and I never realized.

[-] pglpm@lemmy.ca 4 points 1 year ago

Cheers, didn't know about this possibility!

[-] pglpm@lemmy.ca 4 points 1 year ago

Great community! Sorry must go – peeing myself after seeing the first posts there 🤣

[-] pglpm@lemmy.ca 4 points 1 year ago

Thank you for explaining what they mean by "base"! But then what's the difference from Kubuntu? In the FAQ they say "as there is vast overlap in the base offerings of both Kubuntu and KDE neon", but what do they mean by "base offerings"?

[-] pglpm@lemmy.ca 3 points 1 year ago

You had me at "final" 😂
