the beautiful code
(programming.dev)
Welcome to Programmer Humor!
This is a place where you can post jokes, memes, humor, etc. related to programming!
For sharing awful code there's also Programming Horror.
And that's what happens when you spend a trillion dollars on an autocomplete: amazing at making things look like whatever it's imitating, but with zero understanding of why the original looked that way.
I mean, there's about a billion ways it's been shown to have actual coherent originality at this point, and so it must have understanding of some kind. That's how I know I and other humans have understanding, after all.
What it's not is aligned to care about anything other than making plausible-looking text.
Coherent originality does not point to the machine’s understanding; the human is the one capable of finding a result coherent and weighting their program to produce more results in that vein.
Your brain does not function in the same way as an artificial neural network, nor are they even in the same neighborhood of capability. John Carmack estimates the brain to be four orders of magnitude more efficient in its thinking; Andrej Karpathy says six.
And none of these tech companies even pretend that they’ve invented a caring machine that they just haven’t inspired yet. Don’t ascribe further moral and intellectual capabilities to server racks than do the people who advertise them.
You got the "originality" part there, right? I'm talking about tasks that never came close to being in the training data. Would you like me to link some of the research?
Given that both biological and computer neural nets vary by orders of magnitude in size, that means pretty little. It's true that one is based on continuous floats and the other on dynamic peaks, but the end result is often remarkably similar in function and behavior.
If you would like to link some abstracts you find in a DuckDuckGo search that’s fine.
I actually was going to link the same one I always do, which I think I heard about through a blog or talk. If that's not good enough, it's easy to devise your own test and put it to an LLM. The way you phrased that makes it sound like you're more interested in ignoring any empirical evidence, though.
Can you please explain what you’re trying to say here?
Both have neurons with synapses linking them to other neurons. In the artificial case, synapse activation can be any floating point number, and outgoing synapses are calculated from incoming synapses all at once (there's no notion of time; it's not dynamic). Biological neurons are binary: they either fire or they don't. During a firing cycle they ramp up to a peak potential and then drop down in a predictable fashion. But it's dynamic; they can peak at any time, and downstream neurons can begin to fire "early".
They do seem to be equivalent in some way, although AFAIK it's unclear how at this point, and the exact activation function of each brain neuron is a bit mysterious.
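To make the contrast concrete, here's a minimal sketch of the two kinds of "neuron" being described. All names and parameter values are illustrative, not taken from any real model: the artificial neuron sums all its inputs at once and outputs a continuous float, while a simple leaky integrate-and-fire model (a common stand-in for the biological case) is binary and dynamic, building up potential over time until it spikes.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """Artificial case: all incoming synapses are combined at once,
    the output is a continuous float, and there is no notion of time."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

def leaky_integrate_and_fire(input_current, steps, threshold=1.0, leak=0.9):
    """Biological-style case (simplified): membrane potential builds up
    over discrete time steps, leaking a bit each step; the neuron either
    fires (a binary spike) or it doesn't, then resets."""
    potential = 0.0
    spikes = []
    for t in range(steps):
        potential = potential * leak + input_current
        if potential >= threshold:
            spikes.append(t)   # fire
            potential = 0.0    # reset after spiking
    return spikes

# One continuous output vs. a train of discrete spike times:
print(artificial_neuron([0.5, -0.2], [0.8, 0.4], 0.1))
print(leaky_integrate_and_fire(0.3, 20))
```

The timing of the spike train is where the "downstream neurons can fire early" behavior lives; the artificial neuron has no analogue for that at all.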
Ok, thanks for that clarification. I guess I’m a bit confused as to why a comparison is being drawn between neurons in a neural network and neurons in a biological brain though.
In a neural network, a neuron receives inputs, applies a mathematical function to them, and returns an output, right?
Like you said we have no understanding of what exactly a neuron in the brain is actually doing when it’s fired, and that’s not considering the chemical component of the brain.
I understand why terminology was reused when experts were designing an architecture meant to replicate the architecture of the brain. Unfortunately, I feel like that reuse of terminology makes it harder for laypeople to understand what a neural network is and what it is not, now that those networks are part of the zeitgeist thanks to the explosion of LLMs and such.