An Analysis of DeepMind's 'Language Modeling Is Compression' Paper
(codeconfessions.substack.com)
Firstly—maybe what we consider an “association” is actually an indicator that our brains are using the same internal tokens to store/compress the memories.
But what I was thinking of specifically is narrative memories: our brains don’t store them frame-by-frame like video, but rather, they probably store only key elements and use their predictive ability to extrapolate the omitted elements on demand.
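To make that concrete, here's a toy sketch of the "store key elements, predict the rest" idea (my own illustration, not the paper's actual arithmetic-coding scheme): keep only the spots where a simple predictor gets the next token wrong, and regenerate everything else by running the predictor.

```python
# Toy prediction-as-compression sketch (illustrative only).
# A deliberately dumb predictor guesses that each token repeats the previous one;
# we store just the first token plus the corrections where that guess fails.

def predict_next(prev_token):
    # Dumb predictor: assume the next token repeats the previous one.
    return prev_token

def compress(tokens):
    # Keep (position, actual_token) wherever the prediction is wrong.
    corrections = []
    for i in range(1, len(tokens)):
        if predict_next(tokens[i - 1]) != tokens[i]:
            corrections.append((i, tokens[i]))
    return tokens[0], len(tokens), corrections

def decompress(first, length, corrections):
    # Re-run the predictor and apply the stored corrections on demand.
    fixes = dict(corrections)
    out = [first]
    for i in range(1, length):
        out.append(fixes.get(i, predict_next(out[-1])))
    return out

seq = list("aaaabbbbbbccccccccdd")
compressed = compress(seq)
assert decompress(*compressed) == seq
print(len(compressed[2]), "corrections instead of", len(seq), "tokens")
```

The better the predictor, the fewer corrections you have to keep, which is basically the paper's point about a good language model doubling as a good compressor.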
This seems likely to me. The common saying is "you hear what you want to hear", but I think it's more accurately "you remember what has meaning to you". Recently there was a study showing that even visual memory is tightly integrated with spoken language: https://www.science.org/doi/10.1126/sciadv.adh0064
However, there's a lot of variation in memory among humans. See: The Mind of a Mnemonist.
Yes, that makes much more sense.
No, because our brains also use hierarchical activation for association, which is why, if we're talking about bugs and I say "I got a B", you assume it's a stinging insect, not a passing grade.
If it were simple word2vec we wouldn't have that additional means of noise suppression.
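Toy illustration of what I mean (hand-made vectors, nothing to do with real word2vec weights): a single static vector for "B" can't separate the two senses on its own, but scoring candidate senses against the surrounding context can.

```python
# Toy sense disambiguation via context (made-up 3-d "embeddings", illustrative only).
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Axes roughly = (school-ness, insect-ness, filler)
context_bugs = np.array([0.1, 0.9, 0.2])   # "we were talking about bugs"
context_exam = np.array([0.9, 0.1, 0.2])   # "we were talking about the exam"
sense_grade  = np.array([1.0, 0.0, 0.1])   # "B" as a passing grade
sense_bee    = np.array([0.0, 1.0, 0.1])   # "B" as the stinging insect

for name, ctx in [("bug talk", context_bugs), ("exam talk", context_exam)]:
    scores = {"grade": cosine(ctx, sense_grade), "bee": cosine(ctx, sense_bee)}
    print(name, "->", max(scores, key=scores.get), scores)
```

A single static vector for "B" would sit somewhere between the two senses regardless of topic; the context step is what suppresses the irrelevant one.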