'vegetative electron microscopy'
To be fair, the human brain is a pattern recognition system. It's just that the AI developed thus far is shit.
The human brain has a pattern recognition system. It is not just a pattern recognition system.
Give it a few billion years.
Realistic timeline
Management would like to push up this timeline. Can you deliver by end of week?
My wife did not react kindly to that request when she was pregnant.
As unpopular an opinion as this is, I really think AI could reach human-level intelligence in our lifetime. The human brain is nothing but a computer, so it has to be reproducible. Even if we don't exactly figure out how our brains work, we might be able to create something better.
The human brain is not a computer. It was a fun simile to make in the 80s when computers rose in popularity. It stuck in popular culture, but time and time again neuroscientists and psychologists have found that it is a poor metaphor. The more we know about the brain, the less it looks like a computer. Pattern recognition is barely a tiny fraction of what the human brain does, not even its most important function, and computers suck at it. No computer is anywhere close to doing what a human brain can do in many different ways.
Notably, neither of those two disciplines is computer science. Silicon computers are Turing complete. They can (given enough time and scratch space) compute everything that's computable. The brain cannot be more powerful than that or you'd break causality itself: God can't add 1 and 1 and get 3, and neither can he sort a list in fewer than O(n log n) comparisons. Both being Turing complete also means that they can emulate each other. It's not a metaphor: it's an equivalence. Computer scientists have trouble telling computers and humans apart just as topologists can't distinguish between donuts and coffee mugs.
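To make the emulation direction concrete, here's a minimal sketch in Python (the toy machine, names, and tape encoding are my own illustrative assumptions): a program on a conventional computer stepping an arbitrary Turing machine's transition table.

```python
# Minimal sketch: a Python program (running on a von Neumann machine)
# simulating a single-tape Turing machine from its transition table.
# The example machine below just flips every bit and halts at the first blank.

def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
    """transitions: (state, symbol) -> (new_state, write_symbol, move)."""
    tape = dict(enumerate(tape))  # sparse tape; unwritten cells read as blank
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in transitions:
            break  # halt: no rule applies
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Bit-flipping machine: in state "start", rewrite 0->1 and 1->0, move right.
flipper = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
}

print(run_tm(flipper, "10110"))  # -> 01001
```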
Architecturally, sure, there's a massive difference in hardware. Not carbon vs. silicon, but because our brains are nowhere close to being von Neumann machines. That doesn't change anything about brains being computers, though.
There are, big picture, two obstacles to AGI: first, figuring out how the brain does what it does, and we know that current AI approaches aren't sufficient; secondly, once we understand that, creating hardware that is even just a fraction as fast and efficient at executing, erm, itself as the brain is.
Neither of those two involves the question "is it even possible". Of course it is. It's quantum computing you should rather be sceptical about: it's still up in the air whether asymptotic speedups over classical hardware are even physically possible (quantum states might get fuzzier the more data you throw into a qubit, the universe might have a computational upper limit per unit volume, or some such).
Notably, computer science is not neurology. Neither is equipped to meddle in the other's field. If brains were just very fast and powerful computers, then neuroscientists should be able to work with computers and engineers on brains. But they are not equivalent. Consciousness, intelligence, memory, world modeling, motor control, and input consolidation are way more complex than just faster computing. And Turing completeness is irrelevant. The brain is not a Turing machine. It does not process tokens one at a time. Turing completeness is a technology term; it shares only a name with Turing machines, as Turing's philosophical argument was not meant to be a test or guarantee of anything. Complete misuse of the concept.
Does not follow. Different architectures require different specialisations. One is research into something nature presents us with; the other (at least the engineering part) is creating something. Completely different fields. And btw, the analytical tools neuroscientists have are not exactly stellar; that's why they can't understand microprocessors (the paper is tongue in cheek, but also serious).
They are. If you doubt that, you do not understand computation. You can read up on Turing equivalence yourself.
What the fuck has "fast" to do with "complex"? Also, the mechanisms probably aren't terribly complex; how the different parts mesh together into a synergistic whole is what creates the complexity. And I already addressed the distinction between "make things run" and "make them run fast": a dog-slow AGI is still an AGI.
And neither are microprocessors Turing machines. A thing does not need to be a Turing machine to be Turing complete.
Mathematical would be accurate.
Nope, the Turing machine is one example of a Turing complete system. That's more than "shares a name".
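For a concrete example of a Turing complete system that is not a Turing machine, here's a rough sketch of Rule 110, a one-dimensional cellular automaton proven Turing complete (Cook, 2004); the grid size and seeding here are arbitrary choices for illustration.

```python
# Sketch: Rule 110, a one-dimensional cellular automaton that looks nothing
# like a Turing machine yet is Turing complete.
RULE = 110  # the entire update table, packed into one byte

def step(cells):
    n = len(cells)
    # Each cell's next state is the bit of RULE indexed by its 3-cell
    # neighborhood read as a 3-bit number (wrapping at the edges).
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 30 + [1]  # single live cell on the right
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```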
You're probably thinking of the Turing test. That doesn't have anything to do with Turing machines, Turing equivalence, or Turing completeness, yes. Indeed, getting the Turing test involved and confused with the other three things is probably the reason you wrote a whole paragraph of pure nonsense.
Yo if you’re ever in the mood, I’d love to talk more about the subject with you. You might be the only person I’ve ever seen to actually talk about this topic the way I understand it.
Dear lord, I found Elon's Lemmy account.
No, this guy actually understands what he's talking about. He may not be articulating it the best, but his argument is not false. What he's essentially saying is that, based on what we understand now, the brain must be a machine in some sense that can do computations.
The only reason this is the case is that, logically, unless new physics arises, it must be. So it's not that the brain is a computer like the ones we have now; it's that all things that process and handle information systematically must do computation. What that looks like and what each unit does is what we don't get.
Elon, judging from his twitter takes, understands this stuff even less than you do.
Re: quantum computing, we know quantum advantage is real both for certain classes of problems, e.g. theoretically via Grover's, and experimentally for toy problems like boson sampling. It looks like we're past the threshold where we can do error correction, so now it's a question of scaling. I've never heard anyone discuss a limit on computation per volume as applying to QC. We're down to engineering problems, not physics, same as your brain-vs-computer case.
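To put rough numbers on the Grover's part, here's a back-of-the-envelope sketch (textbook asymptotics only: query counts, not a simulation, and not a claim about any real device).

```python
# Grover's algorithm promises a quadratic speedup for unstructured search
# over N items: ~N/2 expected classical queries vs ~(pi/4)*sqrt(N) Grover
# iterations. Constants are the textbook ones, not hardware numbers.
import math

for n_bits in (20, 40, 60):
    N = 2 ** n_bits
    classical = N / 2                      # expected queries for a linear scan
    grover = (math.pi / 4) * math.sqrt(N)  # near-optimal Grover iteration count
    print(f"N = 2^{n_bits}: classical ~{classical:.3g} queries, "
          f"Grover ~{grover:.3g} queries")
```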
For all I know, none of the systems that people have built come even close to testing the speedup: is error correction going to get harder and harder the larger the system is, the more you ask it to compute? That might not be the case, but quantum uncertainty is a thing, so it's not baseless naysaying either.
Let me put on my tinfoil hat: quantum physicists aren't excited to talk about the possibility that the whole thing could be a dead end, because that's not how you get to do cool quantum experiments on VC money. It's not like they aren't doing valuable research; it's just that it might be a giant money sink for the VCs, which of course is also a net positive. Trying to break the limit might be the only way to test it, and that in turn might actually narrow things down in physics, which is itching for experiments that can break the models: we know they're subtly wrong, just not how, and data is needed to narrow things down.
We've already done boson sampling that's classically intractable. Google published it a few years ago. So yes, quantum supremacy has already been demonstrated. It's a useless toy problem, but one a classical computer just can't do.
Yes, error correction will get harder the more we scale, but we're pretty sure we've reached the point where we win by throwing more qubits at it. Again, now it's engineering the scaling. No mean feat, and it'll take a long time, but it's not like this is all speculation or fraud. The theory is sound.
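Here's a back-of-the-envelope sketch of why "throw more qubits at it" wins past the threshold. The threshold and physical error rates below are illustrative assumptions, and the suppression formula is the usual rough approximation quoted for surface codes, not a statement about any particular machine.

```python
# Below the threshold error rate p_th, a distance-d surface code suppresses
# the logical error rate roughly like (p / p_th) ** ((d + 1) / 2), so adding
# qubits (larger d) drives logical errors down exponentially.
P_TH = 0.01  # assumed threshold (~1%, often quoted for surface codes)
p = 0.002    # assumed physical error rate per operation

for d in (3, 5, 7, 9, 11):
    logical = (p / P_TH) ** ((d + 1) / 2)
    print(f"distance {d:2d}: logical error ~{logical:.1e}")
```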
Some scientists are connecting I/O to brain tissue. These experiments show stunning learning capabilities, but their ethics are rightly questioned.
I don't get how the ethics of that are questionable. It's not like they're taking brains out of people and using them. It's just cells that are not the same as a human brain. It's like taking skin cells and using those for something. The brain is not just random neurons. It isn't something special and magical.
We haven't yet figured out what it means to be conscious. I agree that a person can willingly give permission to be experimented on and even replicated. However, there is probably a line where we'd be creating something conscious for the sake of a few months' worth of calculations.
There wouldn't be this many sci-fi books about cloning gone wrong if we already knew all it entails. This is basically The Matrix for those brainoids. We are not at the scale of whole-brain reproduction, but there is a reason for the ethics section on the cerebral organoid wiki page that links to further concerns in the neuro world.
Sure, we don't know what makes us sapient or conscious. It isn't a handful of neurons on a tray though. They're significantly less conscious than your computer is.
Maybe I was unclear. I think ethics always plays a role in research. That does not mean I want this to stop; I just think we need regulations. Brain-computer interfaces and large brainoids are more than a handful of neurons on a tray. I wouldn't call them human, but we all know how fast science can move.
Reading about those studies is pretty interesting. Usually the neurons do most of the heavy lifting, adapting to the I/O chip's input and output. It's almost an admission that we don't yet fully understand what we're dealing with when we try to interface it with our rudimentary tech.
The only way AI is going to reach human-level intelligence is if we can actually figure out what happens to information in our brains. No one can really tell if and when that is going to happen.
Not necessarily; human-made intelligence may use different methods. The human brain is messy, so it's possible more can be done with less.
Maybe more with less is possible, but we are currently doing a narrower range of skills with way, way more energy. From https://www.humanbrainproject.eu/en/follow-hbp/news/2023/09/04/learning-brain-make-ai-more-energy-efficient/
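As a back-of-the-envelope comparison (my numbers, not the article's: the ~20 W brain figure is a commonly cited ballpark, and the GPU and cluster figures are assumptions for illustration):

```python
# Rough power-draw comparison; all numbers are commonly cited ballparks
# or assumptions, not measurements from the linked article.
BRAIN_WATTS = 20   # often-quoted resting power of a human brain
GPU_WATTS = 400    # rough board power of one datacenter GPU
N_GPUS = 1000      # assumed size of a modest training cluster

cluster_watts = GPU_WATTS * N_GPUS
print(f"Cluster draws ~{cluster_watts / BRAIN_WATTS:,.0f}x a human brain")
# -> Cluster draws ~20,000x a human brain
```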
Currently, but it's a start, and 100 years is a long time. 100 years ago we didn't even have computers, barely had cars, and doctors still didn't really wash their hands.
I somewhat agree. Given enough time we can make a machine that does anything a human can do, but some things will take longer than others.
It really depends on what you call human intelligence. Lots of animals have various behaviors that might be called intelligent, like insane target tracking, adaptive pattern recognition, kinematic pathing, and value judgments. These are all things that AI isn't close to doing yet, but that could change quickly.
There are perhaps other things we take for granted that might end up being quite difficult and necessary, like having two working brains at once, coherent recursive thoughts, massively parallel processing, or something else we don't even know about yet.
I'd give it a 50-50 chance for singularity this century, if development isn't stopped for some reason.
We would have to direct it in specific directions that we don't understand. Think what a freak accident we REALLY are!
EDIT: I would just copy-paste the human brain in some digital form, modify it so that it is effectively immortal inside the simulation, set the simulation speed to ×10,000,000, and let it take its revenge for being imprisoned in an eternal void of suffering.
Straight out of Pantheon. Actually part of the plot of the show.
What does “better” mean in that context?
Dankest memes
I strongly encourage you to at least scratch the surface on human memory data.
The issue is that LLM systems are pattern recognition without any logic or awareness. It's pure pattern recognition, so it can easily find patterns that aren't desired.
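A toy illustration of pattern recognition with zero logic: a character-level Markov chain (the corpus and settings here are made up for the example) will fluently extend whatever statistical pattern it saw, with no model of what anything means.

```python
# Toy character-level Markov chain: pure pattern matching, no logic.
# It extends whatever statistical pattern it saw, sensible or not.
import random
from collections import defaultdict

corpus = "the cell divides. the cell grows. the cell divides again. "
order = 4  # context length in characters

model = defaultdict(list)
for i in range(len(corpus) - order):
    model[corpus[i:i + order]].append(corpus[i + order])

random.seed(0)
state = corpus[:order]
out = state
for _ in range(60):
    choices = model.get(state)
    if not choices:
        break  # dead end: no continuation ever observed
    out += random.choice(choices)
    state = out[-order:]
print(out)  # fluent-looking text with no idea what a cell is
```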
Said the species that finds Jesus on toast every other week.
Sounds like American conservatives