submitted 5 months ago by floofloof@lemmy.ca to c/technology@beehaw.org
[-] A1kmm@lemmy.amxl.com 15 points 5 months ago

I think any prediction based on a 'singularity' neglects to consider the physical limitations, and just how long the journey towards any significant amount of AGI would be.

The human brain has an estimated 100 trillion neuronal connections - so that is probably a good order-of-magnitude estimate for the parameter count of an AGI model.

If we consider a current GPU, e.g. the 12 GB RTX 3060, it can hold about 24 billion parameters at 4-bit quantisation (in reality a fair few less), and uses about 180 W of power. So that means an AGI might use 750 kW of power to operate. A super-intelligent machine might use more. That is a farm of 2,500 300 W solar panels, while the sun is shining, just for the equivalent of one person.

Now to pose a real threat against the billions of humans, you'd need more than one person's worth of intelligence. Maybe an army equivalent to 1,000 people, powered by roughly 4.2 million GPUs and 2.5 million solar panels.
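For anyone who wants to check that arithmetic, here it is as a quick back-of-envelope script (using the same assumed round numbers as above, so treat it as a rough sketch rather than a real capacity plan):

```python
# Rough back-of-envelope estimate - all figures are assumed round numbers,
# not measured values.
AGI_PARAMS = 100e12        # ~human synapse count, used as a parameter-count proxy
GPU_VRAM_BYTES = 12e9      # 12 GB card
BYTES_PER_PARAM = 0.5      # 4-bit quantisation
GPU_POWER_W = 180          # per-GPU power draw
PANEL_POWER_W = 300        # per solar panel

params_per_gpu = GPU_VRAM_BYTES / BYTES_PER_PARAM   # ~24 billion parameters
gpus_per_agi = AGI_PARAMS / params_per_gpu          # ~4,167 GPUs
power_per_agi_w = gpus_per_agi * GPU_POWER_W        # ~750 kW
panels_per_agi = power_per_agi_w / PANEL_POWER_W    # ~2,500 panels

# Scale up to an "army" equivalent to 1,000 people
army_gpus = 1_000 * gpus_per_agi                    # ~4.2 million GPUs
army_panels = 1_000 * panels_per_agi                # ~2.5 million panels

print(f"Per AGI: {gpus_per_agi:,.0f} GPUs, {power_per_agi_w/1e3:,.0f} kW, {panels_per_agi:,.0f} panels")
print(f"For 1,000 of them: {army_gpus:,.0f} GPUs, {army_panels:,.0f} panels")
```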

That is not going to materialise out of thin air too quickly.

In practice, as we get closer to an AGI or ASI, there will be multiple separate deployments of similar sizes (within an order of magnitude), and they won't be aligned to each other - some systems will be adversaries of any system executing a plan to destroy humanity, and will be aligned to protect against harm (AI technologies are already widely used for threat analysis). So you'd have a bunch of malicious systems, and a bunch of defender systems, going head to head.

The real AI risks, which I think many of the people ranting about singularities want to obscure, are:

  • An oligopoly of companies gains dominance over the AI space and perpetuates a 'rich get richer' cycle, accumulating wealth and power to the detriment of society. OpenAI, Microsoft, Google and AWS are probably all battling for that position. Open models are the way to battle that.
  • People can no longer trust their eyes when it comes to media; existing problems of fake news, deepfakes, and so on become so severe that they undermine any sense of truth. That might fundamentally shift society, but I think we'll adjust.
  • Doing bad stuff becomes easier. That might be scamming, but at the more extreme end it might be designing weapons of mass destruction. On the positive side, AI can help defenders too.
  • Poor quality AI might be relied on to make decisions that affect people's lives. This is best handled through the same regulatory approaches that prevent companies and governments from doing the same with simple flow charts / scripts.
[-] darkphotonstudio@beehaw.org 4 points 5 months ago

I think you're right on the money when it comes to the real dangers, especially your first bullet point. I don't necessarily agree with your napkin maths, though. If the virtual neurons are used more efficiently, that could make up for a lot relative to the human neuron count.

[-] CanadaPlus@lemmy.sdf.org 4 points 5 months ago* (last edited 5 months ago)

> The human brain has an estimated 100 trillion neuronal connections - so that is probably a good order-of-magnitude estimate for the parameter count of an AGI model.

Yeah, but a lot of those do things unrelated to higher reasoning. A small monkey is smarter than a moose, despite the moose obviously having way more synapses.

I don't think you can rely on this kind of argument so heavily. A brain isn't a muscle.

[-] ondoyant@beehaw.org 2 points 5 months ago

> Open models are the way to battle that.

This is something I think needs to be interrogated. None of these models, even the supposedly open ones, are actually "open" or even currently "openable". We can know the exact weights for every single parameter, the code used to construct it, and the data used to train it, and that information gives us basically no insight into its behavior. We simply don't have the tools to actually "read" a machine learning model in the way you would an open source program; the tech produces black boxes as a consequence of its structure. We can learn about how they work, for sure, but the corps making these things aren't that far ahead of the public when it comes to understanding what they're doing or how to change their behavior.
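As a rough illustration (assuming a PyTorch-style checkpoint; the file name here is hypothetical), this is all that "full access" to an open model's weights actually gives you:

```python
import torch

# Hypothetical checkpoint path for some open-weights model.
state_dict = torch.load("open_model.bin", map_location="cpu")

# Print the first few parameter tensors: names, shapes, and a handful of raw values.
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape), tensor.flatten()[:4].tolist())

# Every entry is just an array of numbers; nothing here explains *why*
# the model behaves the way it does.
```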

[-] technocrit@lemmy.dbzer0.com 2 points 5 months ago* (last edited 5 months ago)

> So you'd have a bunch of malicious systems, and a bunch of defender systems, going head to head.

Let me guess... USA is defender and Russia/China is malicious? Seriously though, who is going to be running the malicious machines trying to "destroy humanity"? If you're talking about capitalism destroying the planet, that has already been happening without AI. Otherwise this seems like just another singularity fantasy.

[-] A1kmm@lemmy.amxl.com 2 points 5 months ago

The fear that people who like to talk about the singularity tend to propose is that there will be one 'rogue' misaligned ASI that progressively takes over everything - i.e. all the AI in the world works against all the people.

My point is that, more likely, there will be lots of ASI or AGI systems, not aligned with each other, and most of them on the side of the humans.
