[-] thedeadwalking4242@lemmy.world 60 points 4 days ago

To be fair, the human brain is a pattern recognition system. It's just that the AI developed thus far is shit

[-] Cornelius_Wangenheim@lemmy.world 33 points 4 days ago

The human brain has a pattern recognition system. It is not just a pattern recognition system.

[-] lengau@midwest.social 51 points 4 days ago
[-] chuckleslord@lemmy.world 22 points 4 days ago
[-] ByteJunk@lemmy.world 8 points 4 days ago

Management would like to push up this timeline. Can you deliver by end of week?

[-] superkret@feddit.org 3 points 4 days ago

My wife did not react kindly to that request when she was pregnant.

[-] thedeadwalking4242@lemmy.world -1 points 4 days ago

As unpopular an opinion as this is, I really think AI could reach human-level intelligence in our lifetime. The human brain is nothing but a computer, so it has to be reproducible. Even if we don't exactly figure out how our brains work, we might be able to create something better.

[-] dustyData@lemmy.world 47 points 4 days ago

The human brain is not a computer. It was a fun simile to make in the 80s when computers rose in popularity. It stuck in popular culture, but time and time again neuroscientists and psychologists have found that it is a poor metaphor. The more we know about the brain the less it looks like a computer. Pattern recognition is barely a tiny fraction of what the human brain does, not even the most important function, and computers suck at it. No computer is anywhere close to doing what a human brain can do in many different ways.

[-] barsoap@lemm.ee 6 points 4 days ago* (last edited 4 days ago)

It stuck in popular culture, but time and time again neuroscientists and psychologists have found that it is a poor metaphor.

Notably, neither of those two disciplines are computer science. Silicon computers are Turing complete. They can (given enough time and scratch space) compute everything that's computable. The brain cannot be more powerful than that; otherwise you'd break causality itself: God can't add 1 and 1 and get 3, and neither can God sort a list in less than O(n log n) comparisons. Both being Turing complete also means that they can emulate each other. It's not a metaphor: it's an equivalence. Computer scientists have trouble telling computers and humans apart just as topologists can't distinguish between donuts and coffee mugs.
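(Side note: the n log n claim is easy to sanity-check. A minimal sketch of my own, not from anyone in the thread: merge sort with an instrumented comparison counter stays within the n·log₂(n) bound, and information theory says no comparison sort can beat that asymptotically.)

```python
import math

def merge_sort(xs, counter):
    # Recursively sort xs, counting every element-to-element
    # comparison in counter[0].
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid], counter)
    right = merge_sort(xs[mid:], counter)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counter[0] += 1  # one comparison
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

n = 1024
counter = [0]
result = merge_sort(list(range(n, 0, -1)), counter)
print(result == sorted(result))                    # sorted correctly
print(counter[0] <= math.ceil(n * math.log2(n)))   # within the n*log2(n) bound
```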

Architecturally, sure, there's a massive difference in hardware. Not carbon vs. silicon, but because our brains are nowhere close to being von Neumann machines. That doesn't change anything about brains being computers, though.

There's, big picture, two obstacles to AGI: first, figuring out how the brain does what it does, and we know that current AI approaches aren't sufficient; secondly, once we understand that, to create hardware that is even just a fraction as fast and efficient at executing, erm, itself as the brain is.

Neither of those two involve the question "is it even possible". Of course it is. It's quantum computing you should rather be sceptical about: it's still up in the air whether asymptotic speedups over classical hardware are even physically possible (quantum states might get more fuzzy the more data you throw into a qubit, the universe might have a computational upper limit per unit volume, or such).

[-] dustyData@lemmy.world 3 points 4 days ago

Notably, computer science is not neurology. Neither is equipped to meddle in the other's field. If brains were just very fast and powerful computers, then neuroscientists should be able to work with computers and engineers on brains. But they are not equivalent. Consciousness, intelligence, memory, world modeling, motor control and input consolidation are way more complex than just faster computing. And Turing completeness is irrelevant. The brain is not a Turing machine. It does not process tokens one at a time. Turing completeness is a technology term, it shares with Turing machines the name alone, as Turing's philosophical argument was not meant to be a test or guarantee of anything. Complete misuse of the concept.

[-] barsoap@lemm.ee 2 points 4 days ago* (last edited 4 days ago)

If brains were just very fast and powerful computers, then neuroscientists should be able to work with computers and engineers on brains.

Does not follow. Different architectures require different specialisations. One is research into something nature presents us, the other (at least the engineering part) is creating something. Completely different fields. And btw the analytical tools neuroscientists have are not exactly stellar; that's why they can't understand microprocessors (the paper is tongue in cheek, but also serious).

But they are not equivalent.

They are. If you doubt that, you do not understand computation. You can read up on Turing equivalence yourself.

Consciousness, intelligence, memory, world modeling, motor control and input consolidation are way more complex than just faster computing.

The fuck has "fast" to do with "complex"? Also, the mechanisms probably aren't terribly complex; it's how the different parts mesh together to give rise to a synergistic whole that creates the complexity. And I already addressed the distinction between "make things run" and "make them run fast". A dog-slow AGI is still an AGI.

The brain is not a Turing machine. It does not process tokens one at a time.

And neither are microprocessors Turing machines. A thing does not need to be a Turing machine to be Turing complete.

Turing completeness is a technology term

Mathematical would be accurate.

it shares with Turing machines the name alone,

Nope the Turing machine is one example of a Turing complete system. That's more than "shares a name".

Turing’s philosophical argument was not meant to be a test or guarantee of anything. Complete misuse of the concept.

You're probably thinking of the Turing test. That doesn't have to do anything with Turing machines, Turing equivalence, or Turing completeness, yes. Indeed, getting the Turing test involved and confused with the other three things is probably the reason why you wrote a whole paragraph of pure nonsense.

[-] TeryVeneno@lemmy.ml 1 points 4 days ago

Yo if you’re ever in the mood, I’d love to talk more about the subject with you. You might be the only person I’ve ever seen to actually talk about this topic the way I understand it.

[-] dustyData@lemmy.world -3 points 4 days ago

Dear lord, I found Elon's Lemmy account.

[-] TeryVeneno@lemmy.ml 3 points 4 days ago

No this guy actually understands what he’s talking about. He may not be articulating it the best, but his argument is not false. What he’s essentially saying is that based on what we understand now, the brain must be a machine in some sense that can do computations.

The only reason this is the case is that, logically, unless new physics arises, it must be. So it's not that the brain is a computer like we have now; it's that all things that process and handle information systematically must do computation. What that looks like and what each unit does is what we don't yet get.

[-] barsoap@lemm.ee 0 points 4 days ago

Elon, judging from his twitter takes, understands this stuff even less than you do.

[-] bigpEE@lemmy.world 2 points 4 days ago* (last edited 4 days ago)

Re: quantum computing, we know quantum advantage is real both for certain classes of problems, e.g. theoretically using Grover's, and experimentally for toy problems like boson sampling. It's looking like we're past the threshold where we can do error correction, so now it's a question of scaling. I've never heard anyone discuss a limit on computation per volume as applying to QC. We're down to engineering problems, not physics, same as your brain vs computer case.
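(For a sense of scale on Grover's: it gives a quadratic query speedup for unstructured search, roughly (π/4)·√N oracle calls versus about N/2 classically on average. A back-of-the-envelope sketch, with the problem size just an illustrative assumption:)

```python
import math

def classical_queries(n):
    # Classical unstructured search: expected ~n/2 oracle queries.
    return n / 2

def grover_queries(n):
    # Grover's algorithm: about (pi/4) * sqrt(n) oracle queries.
    return math.floor((math.pi / 4) * math.sqrt(n))

n = 2 ** 40  # an assumed ~trillion-item search space
print(classical_queries(n))  # ~5.5e11 queries
print(grover_queries(n))     # ~8.2e5 queries, a quadratic win
```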

[-] barsoap@lemm.ee 3 points 4 days ago

From all I know none of the systems that people have built come even close to testing the speedup: Is error correction going to get harder and harder the larger the system is, the more you ask it to compute? It might not be the case but quantum uncertainty is a thing so it's not baseless naysaying, either.

Let me put on my tinfoil hat: quantum physicists aren't excited to talk about the possibility that the whole thing could be a dead end, because that's not how you get to do cool quantum experiments on VC money. And it's not like they aren't doing valuable research; it's just that it might be a giant money sink for the VCs, which of course is also a net positive. Trying to break the limit might be the only way to test it, and that in turn might actually narrow things down in physics, which is itching for experiments that can break the models. We know the models are subtly wrong, just not how, and data is needed to narrow things down.

[-] bigpEE@lemmy.world 3 points 3 days ago

We've already done boson sampling that's classically intractable. Google published it a few years ago. So yes, quantum supremacy has already been proven. It's a useless toy problem, but one a classical computer just can't do.

Yes, error correction will get harder the more we scale, but we're pretty sure we've reached the point where we win by throwing more qubits at it. Again, now it's engineering the scaling. No mean feat, and it'll take a long time, but it's not like this is all speculation or fraud. The theory is sound.

[-] Akrenion@slrpnk.net 5 points 4 days ago

Some scientists are connecting I/O to brain tissue. These experiments show stunning learning capabilities, but their ethics are rightly questioned.

[-] Cethin@lemmy.zip 5 points 4 days ago

I don't get how the ethics of that are questionable. It's not like they're taking brains out of people and using them. It's just cells that are not the same as a human brain. It's like taking skin cells and using those for something. The brain is not just random neurons. It isn't something special and magical.

[-] Akrenion@slrpnk.net 6 points 4 days ago

We haven't yet figured out what it means to be conscious. I agree that a person can willingly give permission to be experimented on and even replicated. However, there is probably a line where we create something conscious for the sake of a few months' worth of calculations.

There wouldn't be this many sci-fi books about cloning gone wrong if we already knew all it entails. This is basically the matrix for those brainoids. We are not on the scale of whole brain reproduction but there is a reason for the ethics section on the cerebral organoid wiki page that links to further concerns in the neuro world.

[-] Cethin@lemmy.zip 1 points 4 days ago

Sure, we don't know what makes us sapient or conscious. It isn't a handful of neurons on a tray though. They're significantly less conscious than your computer is.

[-] Akrenion@slrpnk.net 5 points 4 days ago

Maybe I was unclear. I think ethics always play a role in research. That does not mean I want this to stop. I just think we need regulations. Computer-brain interfaces and large brainoids are more than a handful of neurons on a tray. I wouldn't call them human, but we all know how fast science can get.

[-] dustyData@lemmy.world 2 points 4 days ago

Reading about those studies is pretty interesting. Usually the neurons do most of the heavy lifting, adapting to the I/O chip's input and output. It's almost an admission that we don't yet fully understand what we are dealing with, when we try to interface with our rudimentary tech.

[-] fckreddit@lemmy.ml 5 points 4 days ago

The only way AI is going to reach human-level intelligence is if we can actually figure out what happens to information in our brains. No one can really tell if and when that is going to happen.

[-] thedeadwalking4242@lemmy.world 1 points 4 days ago

Not necessarily; human-made intelligence may use separate methods. The human brain is messy, so it's possible more can be done with less.

[-] Lyrl@lemm.ee 4 points 4 days ago

Maybe more with less is possible, but we are currently doing less variety of skill with way, way more energy. From https://www.humanbrainproject.eu/en/follow-hbp/news/2023/09/04/learning-brain-make-ai-more-energy-efficient/

It is estimated that a human brain uses roughly 20 Watts to work – that is equivalent to the energy consumption of your computer monitor alone, in sleep mode. On this shoe-string budget, 80–100 billion neurons are capable of performing trillions of operations that would require the power of a small hydroelectric plant if they were done artificially.
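(Turning those figures into a rough joules-per-operation comparison. All numbers here are my own illustrative assumptions, not from the article, and the operation counts aren't really comparable across substrates; it's order-of-magnitude hand-waving only:)

```python
# Rough energy-per-operation comparison, assumed order-of-magnitude figures.
brain_watts = 20            # from the quote above
brain_ops_per_sec = 1e15    # assumed synaptic events per second

gpu_watts = 700             # assumed high-end accelerator power draw
gpu_ops_per_sec = 1e15      # assumed ~1 PFLOP/s

brain_j_per_op = brain_watts / brain_ops_per_sec  # 2e-14 J per operation
gpu_j_per_op = gpu_watts / gpu_ops_per_sec        # 7e-13 J per operation

print(gpu_j_per_op / brain_j_per_op)  # brain ~35x more efficient, under these assumptions
```

Even with numbers this generous to the GPU, the brain comes out well ahead per joule.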

[-] thedeadwalking4242@lemmy.world 0 points 4 days ago

Currently but it’s a start and 100 years is a long time. 100 years ago we didn’t even have computers, barely cars, and doctors still didn’t really wash their hands.

[-] Tlaloc_Temporal@lemmy.ca 5 points 4 days ago

I somewhat agree. Given enough time we can make a machine that does anything a human can do, but some things will take longer than others.

It really depends on what you call human intelligence. Lots of animals have various behaviors that might be called intelligent, like insane target tracking, adaptive pattern recognition, kinematic pathing, and value judgments. These are all things that AI aren't close to doing yet, but that could change quickly.

There are perhaps other things that we take for granted that might end up being quite difficult and necessary, like having two working brains at once, coherent recursive thoughts, massively parallel processing, or something else we don't even know about yet.

I'd give it a 50-50 chance for singularity this century, if development isn't stopped for some reason.

[-] WorldsDumbestMan@lemmy.today 4 points 4 days ago* (last edited 4 days ago)

We would have to direct it in specific directions that we don't understand. Think what a freak accident we REALLY are!

EDIT: I would just copy-paste the human brain in some digital form, modify it so that it is effectively immortal inside the simulation, set simulation speed to * 10.000.000, and let it take its revenge for being imprisoned in an eternal void of suffering.

[-] RedBauble@sh.itjust.works 2 points 4 days ago

Straight out of Pantheon. That's actually part of the plot of the show.

[-] Belgdore@lemm.ee 4 points 4 days ago

What does “better” mean in that context?

[-] driving_crooner@lemmy.eco.br 4 points 4 days ago
[-] zephorah@lemm.ee 1 points 4 days ago

I strongly encourage you to at least scratch the surface on human memory data.

[-] Cethin@lemmy.zip 29 points 4 days ago

That LLM systems are pattern recognition without any logic or awareness is the issue. It's pure pattern recognition, so it can easily find patterns that aren't desired.

[-] Tja@programming.dev 4 points 4 days ago

Said the species that finds Jesus on toast every other week.

[-] prole@lemmy.blahaj.zone 4 points 4 days ago* (last edited 4 days ago)

pattern recognition without any logic or awareness is the issue.

Sounds like American conservatives

this post was submitted on 26 Mar 2025
1588 points (99.7% liked)

Science Memes
