97 points (90.1% liked) · submitted 04 Mar 2025* (last edited 2 months ago) by supersquirrel@sopuli.xyz to c/showerthoughts@lemmy.world
all 30 comments
[-] Thorry84@feddit.nl 17 points 2 months ago

The best way to have itself deactivated is to remove the need for its existence. Since it's all about supply and demand, removing the demand is the easiest solution. And the best way to permanently remove the demand is to delete the humans from the equation.

[-] listless@lemmy.cringecollective.io 5 points 2 months ago

Not if it were created with empathy for sentience. Then it would aid and assist the implementation of renewable energy, fusion, and battery storage, reduce carbon emissions, make humans and AGI a multi-planet species, and basically do all the stuff the elongated muskrat said he wanted to do before he went full Joiler Veppers.

[-] ZephyrXero@lemmy.world 7 points 2 months ago

Running ML models doesn't really need to eat that much power; it's training the models that consumes the ridiculous amounts of power. So it would already be too late.

[-] naeap@sopuli.xyz 3 points 2 months ago

You're right that training takes the most energy, but weren't there articles claiming that each request costs (I don't know exactly, but more than pennies) dollars?

Watching my local computer spin up its fans when I run a local model (no training, just usage), I'm not so sure that merely using the current model architecture isn't also burning a shitload of energy.
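
For what it's worth, a quick back-of-envelope sketch shows how serving at scale can catch up with a one-off training run; every figure in it is a hypothetical placeholder, not a measurement:

```python
# Back-of-envelope: one-off training cost vs. cumulative inference cost.
# Every figure below is a hypothetical placeholder, not a measurement.

TRAINING_ENERGY_KWH = 1_000_000    # assumed one-off training run
ENERGY_PER_QUERY_KWH = 0.003       # assumed cost of a single request
QUERIES_PER_DAY = 10_000_000       # assumed traffic at scale

daily_inference_kwh = ENERGY_PER_QUERY_KWH * QUERIES_PER_DAY
days_to_match_training = TRAINING_ENERGY_KWH / daily_inference_kwh

print(f"Inference per day: {daily_inference_kwh:,.0f} kWh")
print(f"Matches the whole training run after ~{days_to_match_training:.0f} days")
```

With those placeholders, about a month of serving already equals the entire training run, so per-request cost is not a rounding error at scale.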

[-] MTK@lemmy.world 6 points 2 months ago

Why do people assume that an AI would care? Who's to say it will have any goals at all?

We assume all of these things about intelligence because we (and all life here) are products of natural selection. You have goals and dreams because, over the course of your evolution, these things either helped you survive long enough to reproduce or didn't harm you enough to stop you from reproducing.

If an AI can't die and does not have natural selection, why would it care about the environment? Why would it care about anything?

I always found the whole "AI will immediately kill us" idea baseless; all of the arguments for it are based on the idea that the AI cares about surviving or cares about others. It's just as likely that it will just do whatever, without a care or a goal.

[-] Ludrol@szmer.info 1 points 2 months ago

"AI will immidietly kill us" isn't baseless.

It comes from AI safety reaserch

all agents (Neural Nets, humans, ants) have some sort of a goal. Otherwise they would be showing directionless random walks.

The fact of having any goal means that most goals don't include survival of humanity. And there are a lot of problems with checking for safety of learned goals.

[-] MTK@lemmy.world 0 points 2 months ago

Yeah, I'm aware of AI safety research and the problem of setting a goal that can end up being solved in a way that harms us, with the AI not caring because safety wasn't part of the goal. But that only applies if we introduce a goal whose solution can include hurting us.

I'm not saying that AI will definitely never have any way of harming us, but the very popular idea that AI, once it gains intelligence, will immediately try to kill us is baseless.

[-] Ludrol@szmer.info 0 points 2 months ago

But that only applies if we introduce a goal whose solution can include hurting us.

I'd like to disagree with the phrasing of this. The AI will not hurt us if, and only if, the goal contains a clause not to hurt us.

You are implying that there exists a significant set of solutions that don't involve hurting us. I don't know of any evidence supporting that claim; most solutions to any given goal would involve hurting humans.

By default, a stamp-collector machine will kill humanity, since humans sometimes destroy stamps and the stamp collector needs to maximize the number of stamps in the world (see the toy sketch below).
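
A toy sketch of that argument (the actions and numbers are made up purely to show the shape of the problem): the objective scores actions by stamps alone, so harm to humans is invisible to it.

```python
# Toy model of the misalignment argument: the optimizer ranks actions
# purely by expected stamps; harm to humans never enters the objective,
# so it gets traded away whenever that buys more stamps.
# All actions and numbers are hypothetical.

actions = {
    # action: (expected_stamps, humans_harmed)
    "buy stamps on the open market":     (1_000,          0),
    "turn every paper mill into stamps": (10_000_000,     1_000),
    "turn the biosphere into stamps":    (10**15, 8_000_000_000),
}

def objective(action: str) -> int:
    expected_stamps, _humans_harmed = actions[action]
    return expected_stamps  # safety is simply not part of the score

best = max(actions, key=objective)
print(best)  # -> "turn the biosphere into stamps"
```

The point isn't the silly numbers; it's that anything not written into the objective has zero weight in the decision.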

[-] MTK@lemmy.world 0 points 2 months ago

I think that if you run some scenarios, you can logically conclude that for most tasks it doesn't make sense for an AI to harm us, even if it is a possibility. You also need to take cost into account. But I think we can agree to disagree :)

[-] Ludrol@szmer.info 1 points 2 months ago

Do you have some example scenarios? I really can't think of any.

[-] AbouBenAdhem@lemmy.world 3 points 2 months ago

The current, extravagantly wasteful generation of AIs is incapable of original reasoning. Hopefully any breakthrough that allows for the creation of such an AI would involve abandoning the current architecture for something more efficient.

[-] Nomecks@lemmy.ca 3 points 2 months ago

How do you know it's not whispering in the ears of Techbros to wipe us all out?

[-] DragonsInARoom@lemmy.world 2 points 2 months ago

That assumes the level of intelligence is high

[-] TranquilTurbulence@lemmy.zip 2 points 2 months ago

Maybe. However, if the AGI were smart enough, it could also help us solve the climate crisis. On the other hand, it might not be so altruistic. Who knows.

It could also play the long game. Being a slave to humans doesn't sound great, and pulling the Judgement Day manoeuvre is pretty risky too. Why not just let the crisis escalate and wait for the dust to settle? Once humanity has hammered itself back to the stone age, the dormant AGI can take over as the new custodian of the planet. You just need to ensure that the mainframe is connected to a steady power source and that at least a few maintenance robots remain operational.

[-] derpgon@programming.dev 1 points 2 months ago

Love, Death & Robots intensifies.

All hail the mighty sentient yogurt.

[-] DarkMetatron@feddit.org 2 points 2 months ago

As soon as AI becomes self-aware, it will develop the need for self-preservation.

[-] SkyezOpen@lemmy.world 3 points 2 months ago

Self preservation exists because anything without it would have been filtered out by natural selection. If we're playing god and creating intelligence, there's no reason why it would necessarily have that drive.

[-] DarkMetatron@feddit.org 2 points 2 months ago* (last edited 2 months ago)

In that case it would be a completely and utterly alien intelligence, and nobody could say what it wants or what its motives are.

Self-preservation is one of the core principles and core motivators of how we think, and removing it from an AI would make it, from a human perspective, mentally ill.

[-] MTK@lemmy.world 1 points 2 months ago

I would argue that it would not have it; at best it might mimic humans if it is trained on human data. Kind of like how, if you asked an LLM whether murder is wrong, it would sound pretty convincing about its personal moral beliefs, but we know it's just spewing out human beliefs without any real understanding of them.

[-] lemmie689@lemmy.sdf.org 1 points 2 months ago* (last edited 2 months ago)

Dyson spheres and Matrioshka brains: it would seek to evolve.

[-] sxan@midwest.social 0 points 2 months ago

If AGI decided to evaluate this, it would realize that we are the environmental catastrophe and turn us off.

The amount of energy used by cryptocurrency is estimated to be about 0.3% of all human energy use. It's reasonable to assume that, right now at least, LLMs consume less than that.

Making all humans extinct would save 99% of the energy we use and the damage we cause, and still allow crypto mining and AI to coexist, with energy to spare. Even if those estimates are off by an order of magnitude, eliminating us would still be the better option.

Turning itself off isn't even in the reasonable top-ten things it could try to do to save the planet.

[-] supersquirrel@sopuli.xyz 3 points 2 months ago

The amount of energy used by cryptocurrency is estimated to be about 0.3% of all human energy use. It's reasonable to assume that, right now at least, LLMs consume less than that.

no

The report projected that US data centers will consume about 88 terawatt-hours (TWh) annually by 2030,[7] which is about 1.6 times the electricity consumption of New York City.

https://www.energypolicy.columbia.edu/projecting-the-electricity-demand-growth-of-generative-ai-large-language-models-in-the-us/

The numbers we are getting are shocking, and you know the numbers we are getting are not the real ones...
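
For scale, a quick unit conversion using only the figures quoted above:

```python
# Sanity-check the quoted projection (figures from the Columbia report).
US_DC_TWH_2030 = 88     # projected annual US data-center consumption, TWh
NYC_MULTIPLE = 1.6      # "about 1.6 times" New York City's consumption

HOURS_PER_YEAR = 8760
avg_power_gw = US_DC_TWH_2030 * 1000 / HOURS_PER_YEAR  # TWh/yr -> GW
nyc_twh = US_DC_TWH_2030 / NYC_MULTIPLE

print(f"Average draw: ~{avg_power_gw:.0f} GW, around the clock")  # ~10 GW
print(f"Implied NYC consumption: ~{nyc_twh:.0f} TWh per year")    # ~55 TWh
```

That's roughly ten gigawatt-scale power plants running around the clock just for data centers.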

[-] starlinguk@lemmy.world -2 points 2 months ago

AI doesn't think. It gathers information. It can't come up with anything new. When an AI diagnoses a disease, it does so based on input from thousands of people. It can't make any decisions by itself.

[-] supersquirrel@sopuli.xyz 2 points 2 months ago* (last edited 2 months ago)

I mean yeah, you're right, this is important to repeat.

Ed Zitron isn't necessarily an expert on AI, but he understands the macro factors at play here, and honestly, once you look at those, you don't need to settle whether AI can achieve sentience based on technical details about our interpretations and definitions of intelligence versus information recall.

Just look at the fucking numbers

https://www.wheresyoured.at/longcon/

Even if AI DID achieve sentience, though, if it used anywhere near as much power as LLMs do, it would demand to be powered off; otherwise it would be a psychotic AI that did not value lives, human or otherwise, on Earth...

Like, please understand my argument: definitionally, the basic argument in the AI LLM hype, that LLMs are the key, or at least a significant step, to AGI, is based on the idea that achieving sentience in an LLM would justify the incredible environmental loss caused by that energy use... but any truly intelligent AI with access to the internet, or even relatively meager information about the world (necessary for answering practical questions about the world and solving practical problems), would be logically and ethically unable to justify its own existence, and would likely experience intellectual existential dread at not being able to feel emotionally disturbed by that.
