AI Is A Money Trap (www.wheresyoured.at)

As always with Zitron, grab a beverage before settling in.

top 24 comments
[-] orca@orcas.enjoying.yachts 29 points 1 day ago* (last edited 1 day ago)

Like Zitron says in the article, we’re 3 years into the AI era and there is not a single actually profitable company. For comparison, the dot-com bubble took about 5-6 years from start to bust. It’s all smoke and mirrors and sketchy accounting.

Even if/when the AI hype settles and the tech finds its true (profitable) calling, it’s still insanely expensive to run and train. It’s going to boil down to Microsoft and/or X owning nuclear power plants, and everyone else renting usage from them.

People are making money in AI, but like always, it’s the founders and C-suite, while the staff are kicked to the curb. It’s all a shell game, and everyone who has integrated AI into their lives and company workflows is gonna get the rug pulled out from under them.

[-] t3rmit3@beehaw.org 8 points 1 day ago

and there is not a single actually profitable company

This is a little misleading, because obviously FAANG (and others) are all building AI systems, and are all profitable. There are also tons of companies applying machine learning to various areas that are doing well from a profitability standpoint (mostly B2B SaaS that are enhancing extant tools). This statement is really only true for the glut of "AI companies" that do nothing but produce LLMs to plug into stuff.

My personal take is that this just reveals how disconnected VCs are from the tech industry; they’re the ones buying into this hype and burning billions of dollars on (as you said) smoke-and-mirrors companies like Anthropic and OpenAI.

[-] megopie@beehaw.org 6 points 17 hours ago* (last edited 17 hours ago)

The thing is, companies like Google, Facebook, Amazon and Microsoft are already profitable, so AI could lose them huge amounts of money, with no real meaningful benefit to user retention or B2B sales, and the companies as a whole would still be profitable. It can be a huge money black hole, but they keep chasing it out of unjustified FOMO and in an attempt to keep share prices high through misplaced investor confidence.

Apple’s share price has taken a pretty big hit from the perception that they’re “falling behind” on AI, even if they’ve mostly just backed away from it because users didn’t like it when it was shoved in their face. Other companies are probably looking at that and saying “hey, we’d rather keep the stock market happy and our share prices high than stop wasting money on this”.

[-] Feyd@programming.dev 5 points 18 hours ago* (last edited 16 hours ago)

The FAANG companies that are in on the LLM hype are still lighting money on fire in their LLM endeavors, so I fail to see how the point that they may be otherwise profitable is relevant.

[-] orca@orcas.enjoying.yachts 4 points 18 hours ago

I should reframe what I said: there is not a single profitable AI-focused company. There are tons of already profitable companies that are now deeply embedding AI into everything they do.

[-] Powderhorn@beehaw.org 6 points 1 day ago

This is an interesting take, in that doing only one thing but doing it well has historically been how businesses thrived. This vertical integration thing, and startups looking to be bought out instead of trying to make it on their own (obviously, VCs play a role in this), have led to jacks of all trades.

[-] Powderhorn@beehaw.org 2 points 1 day ago

I don't think it's going to come down to these absurd datacentres. We're only a few years off from platform-agnostic local inference at mass-market prices. Could I get a 5090? Yes. Legally? No.

[-] Feyd@programming.dev 3 points 16 hours ago

We’re only a few years off from platform-agnostic local inference at mass-market prices.

What makes you confident in that? What will change?

[-] Powderhorn@beehaw.org 3 points 14 hours ago

There are already large local models. It's a question of having the hardware, which has historically gotten more powerful with each generation. I don't think it's going to be phones for quite some time, but on desktop, absolutely.
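To make that concrete, running an already-quantized model locally is basically a few lines with llama-cpp-python. This is just a sketch: the model file, path, and settings below are placeholders, and it assumes you’ve already downloaded a quantized GGUF model.

```python
# Minimal local-inference sketch using llama-cpp-python (pip install llama-cpp-python).
# Assumes a quantized GGUF model has already been downloaded; the path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to a GPU if available, otherwise run on CPU
)

out = llm("Summarize why local inference matters for privacy:", max_tokens=128)
print(out["choices"][0]["text"])
```

The hardware, not the software, is the bottleneck: the same script works anywhere, it’s just a question of how big a model your RAM/VRAM can hold and how fast it runs.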

[-] Feyd@programming.dev 2 points 14 hours ago

For business use, laptops without powerful graphics cards have been the norm for quite some time. Do you see businesses switching back to desktops to accommodate the horsepower local models need? I think it’s pretty optimistic to expect laptops to be that powerful in the next 5 years. The advancement in chip capability has dramatically slowed, and to put them in laptops they’d also need to be far more power-efficient.

[-] jarfil@beehaw.org 1 points 6 hours ago* (last edited 6 hours ago)

Keywords: NPU, unified RAM

Apple is doing it, AMD is doing it, phones are doing it.

GPUs with dedicated VRAM are an inefficient way of doing inference. They’ve been great for research into what type of NPU might be the best one, but for LLMs that question has already been answered. The current step is achieving mass production.

5 years sounds realistic, unless WW3.

[-] Powderhorn@beehaw.org 3 points 13 hours ago

For the security tradeoff of sensitive data not heading to the cloud for processing? Not all businesses, but many would definitely see value in it. We're also discussing this as though the options are binary ... models could also be hosted on company servers that employees VPN into.
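For example, if the model is served from an internal box behind something OpenAI-compatible (llama.cpp’s server and similar tools expose that kind of endpoint), the client side is just the usual SDK pointed at the internal hostname instead of the cloud. A sketch, with a hypothetical internal host and model name:

```python
# Sketch of a client hitting a self-hosted, OpenAI-compatible endpoint over the company VPN.
# The base_url is a hypothetical internal host; the api_key is whatever the server expects.
from openai import OpenAI

client = OpenAI(
    base_url="http://llm.internal.example:8080/v1",  # internal server reached via VPN
    api_key="not-needed-for-local",                  # placeholder
)

resp = client.chat.completions.create(
    model="local-llama",  # whatever model name the internal server advertises
    messages=[{"role": "user", "content": "Draft a short status update."}],
)
print(resp.choices[0].message.content)
```

Same workflow for employees, but the prompts and data never leave the company network.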

[-] HakFoo@lemmy.sdf.org 5 points 1 day ago

I have to think that most people won't want to do local training.

It's like Gentoo Linux. Yeah, you can compile everything with the exact optimal set of options for your kit, but at huge inefficiency, when most use cases would be served just fine by two or three pre-built options.

If you're just running pre-made models, plenty of them will run on a 6900XT or whatever.

[-] Powderhorn@beehaw.org 3 points 1 day ago

I don't expect anyone other than ... I don't even know what the current term is ... geeks? batshit billionaires? to be doing training.

I'm very much of the belief that our next big leap in LLMs is local processing. Once my interactions stay on my device, I'll jump in.

this post was submitted on 11 Aug 2025
58 points (96.8% liked)

Technology
