Save The Planet (lazysoci.al)
50 comments
[-] leftthegroup@lemmings.world 1 points 2 weeks ago

Didn't some legislation come out banning states and cities from making laws against AI? (Which I realize is a fucking crazy sentence in the first place: nothing besides rights should just get immunity from all potential new laws.)

So the cities aren't even the bad guys here. The Senate is.

[-] MotoAsh@lemmy.world 1 points 2 weeks ago* (last edited 2 weeks ago)

It's both. Also don't let the House, the Supreme Court, or the orange buffoon and his cabinet escape culpability. Checks and balances can work ... when they aren't all bought and paid for by rich fucks.

[-] daveB@sh.itjust.works 1 points 2 weeks ago

Conservation work can be confusing, I guess.

[-] drmoose@lemmy.world 0 points 2 weeks ago* (last edited 2 weeks ago)

One prompt averages about 1 Wh of electricity, and a typical AC draws about 1,500 W, so that works out to roughly 2.4 seconds of AC per prompt.

Energy capacity is really not a problem first-world countries should be facing. We have this solved, and you're just taking the bait of blaming normal dudes using minuscule amounts of power while billionaires fly private jets for afternoon getaways.
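
Back-of-envelope, if anyone wants to check that conversion (the 1 Wh/prompt and 1,500 W figures are the assumptions above, not measurements):

```python
PROMPT_ENERGY_WH = 1.0  # assumed average energy per prompt (from the claim above)
AC_POWER_W = 1500.0     # assumed draw of a typical AC unit

joules_per_prompt = PROMPT_ENERGY_WH * 3600      # 1 Wh = 3,600 J
seconds_of_ac = joules_per_prompt / AC_POWER_W   # J / W = seconds

print(f"{seconds_of_ac:.1f} s of AC per prompt")  # -> 2.4 s
```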

[-] f314@lemmy.world 0 points 2 weeks ago

They are blaming the billionaires (or their companies) for making a thing nobody wanted so they can make money off it. The guy making a five-breasted woman is a side effect.

And sure, that one image only uses a moderate amount of power. But there still exist giant data centers dedicated to this one purpose, gobbling up tons of power and evaporating tons of water for power and cooling. And all this is before considering the training of the models (which you'd better believe they're doing continuously to try to come up with better ones).

[-] jsomae@lemmy.ml 0 points 2 weeks ago* (last edited 2 weeks ago)

I know she's exaggerating, but this post yet again underscores how nobody understands that it is training an AI model which is computationally expensive. Deploying a model draws power comparable to running a high-end video game. How can people hope to fight back against things they don't understand?

[-] PeriodicallyPedantic@lemmy.ca 0 points 2 weeks ago

Right, but that's kind of like saying "I don't kill babies" while you use a product made from murdered baby souls. Yes, you weren't the one who did it, but your continued use of it caused the babies to be killed.

There is no ethical consumption under capitalism and all that, but I feel like there's a line we're crossing here. This fruit is hanging so low it's brushing the grass.

[-] jsomae@lemmy.ml 0 points 2 weeks ago

Are you interpreting my statement as being in favour of training AIs?

[-] PeriodicallyPedantic@lemmy.ca 0 points 2 weeks ago

I'm interpreting your statement as "the damage is done so we might as well use it"
And I'm saying that using it causes them to train more AIs, which causes more damage.

[-] jsomae@lemmy.ml 0 points 2 weeks ago

I agree with your second statement. You have misunderstood me. I am not saying the damage is done so we might as well use it. I am saying people don't understand that it is the training of AIs which is directly power-draining.

I don't understand why you think that my observation people are ignorant about how AIs work is somehow an endorsement that we should use AIs.

[-] PeriodicallyPedantic@lemmy.ca 1 points 2 weeks ago

I guess.

It still smells like an apologist argument to be like "yeah but using it doesn't actually use a lot of power".

I'm actually not really sure I believe that argument either, though. I'm pretty sure that inference is hella expensive. When people talk about training, they don't talk about the cost to train on a single input, they talk about the cost of the entire training run. So why are we talking about the cost to infer on a single input?
What's the cost of running training, per hour? What's the cost of inference, per hour, on a similarly sized inference farm, running at maximum capacity?

[-] jsomae@lemmy.ml 0 points 2 weeks ago

Maybe you should stop smelling text and try reading it instead. :P

Running an LLM in deployment can be done locally on one's machine, on a single GPU, and in that case it's like playing a video game for under a minute. OpenAI's models are larger by a factor of 10 or more, so it's maybe like playing a video game for 15 minutes (obviously this varies with the response to the query).

It makes sense to measure deployment usage marginally based on its queries for the same reason it makes sense to measure the environmental impact of a car in terms of hours or miles driven. There's no natural way to do this for training though. You could divide training by the number of queries, to amortize it across its actual usage, which would make it seem significantly cheaper, but it comes with the unintuitive property that this amortization weight goes down as more queries are made, so it's unclear exactly how much of the cost of training should be assigned to a given query. It might make more sense to talk in terms of expected number of total queries during the lifetime deployment of a model.
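
A minimal sketch of that amortization effect, with entirely made-up numbers (the training and per-query figures are placeholders, not estimates for any real model):

```python
TRAINING_ENERGY_WH = 1e9     # hypothetical total training cost
MARGINAL_WH_PER_QUERY = 1.0  # hypothetical inference cost per query

# The share of training energy charged to each query shrinks as
# total lifetime queries grow, which is why the amortized number
# is so sensitive to how popular the model ends up being.
for total_queries in (1e6, 1e9, 1e12):
    amortized = TRAINING_ENERGY_WH / total_queries
    print(f"{total_queries:.0e} lifetime queries: "
          f"{amortized + MARGINAL_WH_PER_QUERY:.3f} Wh/query "
          f"({amortized:.3f} Wh of that is training)")
```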

[-] PeriodicallyPedantic@lemmy.ca 1 points 2 weeks ago

You're way overcomplicating how it could be done. The argument is that training takes more energy:

Typically if you have a single upfront cost associated with a service, you amortize that cost over the life of the service: take the total energy consumption of training and divide it by the total number of user-hours spent doing inference, then compare that to the cost of a single user running inference for an hour (which they can estimate by dividing their global inference energy consumption for an hour by the number of user-hours served in that hour).
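
Roughly, that comparison looks like this (every number here is a placeholder, not a real measurement):

```python
TRAINING_ENERGY_WH = 1e9   # hypothetical lifetime training cost
LIFETIME_USER_HOURS = 5e8  # hypothetical total user-hours served
FARM_POWER_W = 1e6         # hypothetical inference-farm draw at capacity
USER_HOURS_PER_HOUR = 1e5  # hypothetical user-hours served in one hour

# Amortized training energy vs. marginal inference energy, both
# expressed per user-hour. One hour at FARM_POWER_W watts is
# FARM_POWER_W watt-hours of energy.
training_share = TRAINING_ENERGY_WH / LIFETIME_USER_HOURS
inference_share = FARM_POWER_W / USER_HOURS_PER_HOUR

print(f"training, amortized: {training_share:.1f} Wh per user-hour")
print(f"inference, marginal: {inference_share:.1f} Wh per user-hour")
```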

If these are apples-to-oranges comparisons, then why do people defending AI usage (you included) keep making them?

But even if it were true that training is significantly more expensive than inference, or that they're inherently incomparable, that doesn't actually change the underlying observation that inference is still quite energy-intensive, or the implicit value judgment that the energy spent isn't worth the effect on society.

[-] jsomae@lemmy.ml 0 points 2 weeks ago

That's a good point. I rescind my argument that training is necessarily more expensive than sum-of-all-deployment.

I still think people overestimate the power draw of AI though, because they're not dividing it by the overall usage of AI. If people started playing high-end video games at the same rate AI is being used, the power usage might be comparable, but it wouldn't mean that an individual playing a video game is suddenly worse for the environment than it was before. However, it doesn't really matter, since ultimately the environmental impact depends only on the total amount of power (and coolant) used, and where that power comes from (could be coal, could be nuclear, could be hydro).

[-] FooBarrington@lemmy.world 0 points 2 weeks ago

It's closer to running 8 high-end video games at once. Sure, from a scale perspective it's further removed from training, but it's still fairly expensive.

[-] Jakeroxs@sh.itjust.works 0 points 2 weeks ago

How exactly did you come across this "fact"?

[-] FooBarrington@lemmy.world 0 points 2 weeks ago

I compared the TDP of an average high-end graphics card with the GPUs required to run big LLMs. Do you disagree?
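
For what it's worth, a back-of-envelope version of that comparison (the GPU count and TDPs are assumed ballpark figures, not specs of any particular deployment):

```python
DATACENTER_GPU_TDP_W = 450  # assumed TDP of one datacenter GPU
GPUS_PER_MODEL = 8          # assumed GPUs needed to host a big LLM
GAMING_GPU_TDP_W = 450      # assumed TDP of a high-end gaming card

serving_draw_w = DATACENTER_GPU_TDP_W * GPUS_PER_MODEL
games_equivalent = serving_draw_w / GAMING_GPU_TDP_W

print(f"~{serving_draw_w} W to serve one big model "
      f"≈ {games_equivalent:.0f} high-end gaming GPUs at full load")
```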

[-] Jakeroxs@sh.itjust.works 0 points 2 weeks ago

I do, because they're not at full load the entire time they're in use.

[-] FooBarrington@lemmy.world 0 points 2 weeks ago

They are; it'd be uneconomical not to use them fully the whole time. Look up how batching works.
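
The gist, with placeholder numbers (the power and step-time figures are assumptions, not measurements): a forward pass draws roughly the same power whether it serves 1 request or 32, so providers pack requests together and keep the GPUs saturated.

```python
GPU_POWER_W = 700    # assumed draw of one GPU during a forward pass
STEP_SECONDS = 0.05  # assumed wall time of one batched forward pass

# The pass costs the same energy regardless of batch size, so the
# energy billed to each individual request falls as the batch grows.
for batch_size in (1, 8, 32):
    wh_per_request = GPU_POWER_W * STEP_SECONDS / 3600 / batch_size
    print(f"batch={batch_size:2d}: ~{wh_per_request * 1000:.2f} mWh per request per step")
```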

[-] Jakeroxs@sh.itjust.works 0 points 2 weeks ago* (last edited 2 weeks ago)

I mean, I literally run a local LLM, and while the model sits in memory it's really not using a crazy amount of resources. TBF, I should hook something up to actually measure exactly how much it's pulling instead of just looking at htop/atop and guesstimating based on load.

Versus when I play a game: the fans start blaring, it heats up, and you can clearly see the usage increase across various metrics.

[-] FooBarrington@lemmy.world 0 points 2 weeks ago

My guy, we're not talking about just leaving a model loaded, we're talking about actual usage in a cloud setting with far more GPUs and users involved.

[-] Jakeroxs@sh.itjust.works 0 points 2 weeks ago

So you think they're all at full load at all times? Does that seem reasonable to you?

[-] FooBarrington@lemmy.world 0 points 2 weeks ago

Given that cloud providers are desperately trying to get more compute resources, but are limited by chip production - yes, of course? Why would they be trying to expand their resources if their existing resources weren't already maxed out?

[-] Jakeroxs@sh.itjust.works 0 points 2 weeks ago

My assertion would be that they want the majority of the new chips for training models, not for running the existing ones. Two different use cases.

[-] FooBarrington@lemmy.world 0 points 2 weeks ago

Sure, and that's why many cloud providers - even ones that don't train their own models - are only slowly onboarding new customers onto bigger models. Sure. Makes total sense.

[-] Jakeroxs@sh.itjust.works 0 points 2 weeks ago

I mean do you actually know or are you just assuming?

[-] thespcicifcocean@lemmy.world 0 points 2 weeks ago

we had three-tiddied aliens in Total Recall, like 40 years ago. we don't need AI to give us more tits.

[-] joelfromaus@aussie.zone 1 points 2 weeks ago

we don't need … more tits.

Blasphemy!!

this post was submitted on 01 Jul 2025
220 points (98.7% liked)
