And the really maddening part is that search engines have been so enshittified to make way for AI that's wrong like 9 times out of 10, so you're forced to rely on it for answers, because if you try Google, the snake wraps around and eats its own tail and gives you an AI answer! stalin-stressed

[-] Le_Wokisme@hexbear.net 8 points 3 weeks ago

though idk if the actual energy usage etc. is actually saving you time and money without free money existing.

llm end-user energy consumption is pretty low. probably depends on the provider rates and your dev salaries.

[-] neo@hexbear.net 7 points 3 weeks ago

Yeah but inference cannot exist without the prohibitively expensive up-front cost of training. And of course the larger the model the more costly the inference. That's why you read stories like "new trend in SV: pay in tokens." Opus 4.6 is gonna mop the floor with a 2B param model designed to run on an edge PC, but the cost of getting to the point that it can be used, and actually using it, is still very high.
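The claim that inference cost grows with model size can be sketched with the common rule of thumb that a decoder forward pass costs roughly 2 × parameters FLOPs per generated token. The model sizes below are illustrative assumptions (the "1T-param frontier model" is hypothetical, not a published figure for any named model):

```python
# Back-of-envelope: decoder inference costs roughly
# 2 * params FLOPs per generated token (a common rule of thumb;
# ignores attention/KV-cache overhead). Sizes are assumptions.

def inference_flops(params: float, tokens: int) -> float:
    """Approximate forward-pass FLOPs to generate `tokens` tokens."""
    return 2 * params * tokens

edge_model = 2e9        # 2B-param model for an edge PC
frontier_model = 1e12   # hypothetical ~1T-param frontier model
tokens = 1000           # a medium-length response

small = inference_flops(edge_model, tokens)
large = inference_flops(frontier_model, tokens)

print(f"2B model: {small:.1e} FLOPs")   # 4.0e+12
print(f"1T model: {large:.1e} FLOPs")   # 2.0e+15
print(f"ratio: {large / small:.0f}x")   # cost scales linearly: 500x
```

Under this approximation, per-token compute (and thus energy per query, all else equal) scales linearly with parameter count, which is why serving a frontier model is far more expensive than serving a 2B edge model.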

[-] ProletarianDictator@hexbear.net 2 points 3 weeks ago

Inference is not that cheap; it's only cheap compared with training. Try running LLMs on a laptop and watch how quickly your battery drains. That's still the case even when you have a GPU.

[-] Le_Wokisme@hexbear.net 1 point 3 weeks ago

i'm probably using more power to microwave my pasta dinner
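The microwave comparison checks out as rough arithmetic. All numbers below are assumptions, not measurements: a ~1100 W microwave running 3 minutes, and a frequently cited ballpark of ~0.3 Wh per LLM query (actual per-query energy varies widely by model, prompt length, and provider):

```python
# Back-of-envelope sanity check on the microwave comparison.
# All inputs are rough assumptions, not measured figures.

MICROWAVE_WATTS = 1100     # assumed typical microwave draw
MICROWAVE_MINUTES = 3      # assumed reheat time
WH_PER_LLM_QUERY = 0.3     # assumed ballpark per-query energy

microwave_wh = MICROWAVE_WATTS * MICROWAVE_MINUTES / 60  # watt-hours
queries_per_dinner = microwave_wh / WH_PER_LLM_QUERY

print(f"microwaving dinner: {microwave_wh:.0f} Wh")            # 55 Wh
print(f"~{queries_per_dinner:.0f} LLM queries at that energy")  # ~183
```

Under those assumptions, one reheated dinner costs about as much electricity at the point of use as a couple hundred queries, which is the commenter's point; it says nothing about the amortized training cost raised upthread.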

this post was submitted on 04 Apr 2026
112 points (99.1% liked)

technology
