The US dictionary Merriam-Webster’s word of the year for 2025 was “slop”, which it defines as “digital content of low quality that is produced, usually in quantity, by means of artificial intelligence”. The choice underlined the fact that while AI is being widely embraced, not least by corporate bosses keen to cut payroll costs, its downsides are also becoming obvious. In 2026, a reckoning with reality for AI represents a growing economic risk.

Ed Zitron, the foul-mouthed figurehead of AI scepticism, argues pretty convincingly that, as things stand, the “unit economics” of the entire industry – the cost of servicing the requests of a single customer against the price companies are able to charge them – just don’t add up. In typically colourful language, he calls them “dogshit”.

Revenues from AI are rising rapidly as more paying clients sign up but so far not by enough to cover the wild levels of investment under way: $400bn (£297bn) in 2025, with much more forecast in the next 12 months.

Another vehement sceptic, Cory Doctorow, argues: “These companies are not profitable. They can’t be profitable. They keep the lights on by soaking up hundreds of billions of dollars in other people’s money and then lighting it on fire.”

[-] FudgyMcTubbs@lemmy.world 5 points 3 months ago

I wouldn't pay money for access to AI. The convenience is not worth a single cent to me. But am I the average person? Is the average person sold on this nonsense enough to subscribe to it? The first hit is free to get you hooked. So if the plan is to get the average person dependent on it while it's free and then eventually charge for it, I'm not buying, and I wonder how many people will. AI output is fucking garbage.

[-] Kaiserschmarrn@feddit.org 4 points 3 months ago

Two of my friends are paying for it. One works as a developer and one in DevOps. Currently, both of them have a ChatGPT subscription. The first one now shares a lot of DALL-E images of his dog, and the other one recently proudly showed us how he could tell ChatGPT about our DnD session so it would generate a summary for us. The latter took nearly forever and had a lot of funny errors in it.

I really don't get why people are paying over 20€/month for this shit.

[-] ScoffingLizard@lemmy.dbzer0.com 1 points 3 months ago

If you work in tech, it's very useful. It's either get on board or get left behind. I absolutely hate how it's used in a lot of cases. It's really gross to pay attention to imagery on the Internet now. People post AI slop for advertisements and don't realize a few people's faces are bashed in and disfigured. It's disgusting.

[-] Kaiserschmarrn@feddit.org 1 points 3 months ago* (last edited 3 months ago)

I am a software engineer, and nearly every time I used a model for something, it made shit up that didn't work that way or didn't even exist. I always ended up reading the documentation and fixing the problem myself.
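As a generic illustration (my example, not one of the commenter's actual cases) of the failure mode being described, models sometimes suggest a method that looks plausible but simply doesn't exist, like calling `reverse()` on a Python string:

```python
# Hypothetical illustration of a hallucinated API: str has no reverse()
# method, even though lists do, so the suggestion looks plausible.

text = "hello"

# What a model might suggest -- this raises AttributeError:
try:
    text.reverse()
except AttributeError:
    print("hallucinated API: str.reverse() does not exist")

# What the documentation actually supports -- slicing:
print(text[::-1])  # olleh
```

The fix is exactly what the commenter describes: going back to the docs and doing it yourself.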

The only thing AI is somewhat decent at in the context of software development is code completion. JetBrains' models do an OK job in that regard.

[-] prodigalsorcerer@lemmy.ca -1 points 3 months ago

20 bucks a month is basically nothing for a developer who's making $100 an hour.

My employer pays for copilot, and yeah, it makes mistakes, but if you pretend it's a junior developer and double check its code, it can easily save time on a lot of tedious work, and will turn hours of typing into fewer hours of reading.

[-] CeeBee_Eh@lemmy.world 6 points 3 months ago

You have no idea of the long-term impact such a tool has on a codebase. The more it generates, the less you understand, regardless of how much you "check" the output.

I work as a senior dev, and I've tested just about all the foundational models (and many local ones through Ollama) for both professional and personal projects. In 90% of the cases I've tested, it has come back to "if I had just done the work myself from the beginning, I would have had a working result that's cleaner and functions better, in less time".

Generated code can work for a few lines, for some boilerplate, or for some refactoring, but anything beyond that is just asking for trouble.
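As a generic sketch (my example, not from the thread) of the "few lines of boilerplate" case where generated code tends to be safe, because it's easy to verify at a glance:

```python
# A small value class: the kind of repetitive boilerplate that's easy to
# review line by line, so letting a tool generate it is low-risk.
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __repr__(self):
        return f"Point(x={self.x}, y={self.y})"

    def __eq__(self, other):
        return isinstance(other, Point) and (self.x, self.y) == (other.x, other.y)

print(Point(1, 2))                 # Point(x=1, y=2)
print(Point(1, 2) == Point(1, 2))  # True
```

Anything with real design decisions in it is a different story, which is the point being made here.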

[-] shalafi@lemmy.world 2 points 3 months ago

can work for a few lines, for some boilerplate, or for some refactoring

I highly doubt the person you're replying to meant anything else. We're all kinda on the same page here.

[-] CeeBee_Eh@lemmy.world 1 points 3 months ago

I hope so, but you'd be surprised. I know some devs who basically think LLMs can do their work for them, and treat them as such. They get them to do multi-hundred-line edits with a single prompt.

[-] Aceticon@lemmy.dbzer0.com 1 points 3 months ago* (last edited 3 months ago)

In my experience, you need to be a senior developer, with at least some experience of your own code going through a full project lifecycle (most importantly including the support, maintenance and even expansion stages), to really feel in your bones, not just know intellectually, how much the practices that reduce lifetime maintenance costs matter. Those are exactly the practices that LLMs don't clone, even with code reviews and fixes: even when cloning only "good" code, they can't manage things like consistency, especially at the design level.

  • Inexperienced devs only count the time cost of LLM generation and think AI really speeds up coding.
  • Somewhat experienced devs count that plus code review costs and think it can sometimes make coding a bit faster.
  • Very experienced devs look at the inconsistent, multiple-style, disconnected mess (even after code review) when all those generated snippets get integrated, add the costs of maintaining and expanding that codebase to the rest, and conclude that "even in the best case, in six months this shit will already have cost me more time overall, even if I refactor it, than doing it properly myself in the first place would have".

It's very much the same problem as having junior developers do part of the coding, only worse. Junior devs are at least consistent, and hence predictable, in how they fuck up, so you know what to look for, and once you find one mistake you know to look for more of the same. You can also actually teach junior developers so they get better over time, especially by focusing on teaching them not to make their worst mistakes, whilst LLMs are unteachable and will never get better, plus their mistakes are pretty much randomly distributed across the error space.

You give coding tasks to junior devs in a controlled way whilst handling the impact of their mistakes because you're investing in them, whilst doing the same with an LLM has a higher chance of returning high-impact mistakes and yields no such "investment" returns at all.

[-] ashughes@feddit.uk 1 points 3 months ago

but if you pretend it’s a junior developer

Where do these geniuses think they’ll get senior developers from when the current cohort retires? How does someone become a senior developer? Surely not through years of experience as a junior developer under the mentorship of a senior.

This mentality is like burning down an apple tree after one harvest. Fucking idiots, the whole lot of them. I can’t wait for the day all these people wake up and start wandering around confused about why their new talent pool is empty.

[GIF: A man in a black jacket and white shirt, a brown coat draped over his arm, looks around confused]

[-] SGG@lemmy.world 4 points 3 months ago

I set up a local Ollama instance to look for ways to integrate it into my regular work. I do IT stuff, from basic helpdesk to Office 365 configs and almost anything in between.

At best I just use it as a sounding board, basically rubber duck debugging.

I prefer the rubber duck.

[-] OpenStars@piefed.social 2 points 3 months ago

Things that used to be free: Google searches, YouTube, Android, Reddit - all have enshittified in different ways (e.g. Reddit is still free of direct monetary charge, but now restrictive rather than "free").

AI is simply following this well-trodden path, or rather people are claiming that is what is happening.

[-] morto@piefed.social 1 points 3 months ago

At my university, people are paying for it and raving about it... that's terrifying.

They even argue daily about which model is best, just like children discussing which superhero is stronger.

[-] WamGams@lemmy.ca 4 points 3 months ago

My management has fallen in love with it, but is considering dropping it because the commercial Copilot fees haven't produced a return on investment.

Out of the three people at my company who pay for ChatGPT or Grok, the two ChatGPT users are too reliant on it, while the Grok user believes he is talking to a living superintelligence.

[-] Piatro@programming.dev 1 points 3 months ago

I know a few people who subscribe who I never would have expected to do so, but I also know people who have started asking "why does Google show me an AI summary all the time when I don't need it?" I think any sheen it had is diminishing, slowly but surely.

[-] expatriado@lemmy.world 2 points 3 months ago* (last edited 3 months ago)

I have come to the point of asking myself before every search: "could this info be found on Wikipedia?" It has saved me a good chunk of AI slop interaction. For most other things I use ~~cagi.org~~ kagi.org

[-] ScoffingLizard@lemmy.dbzer0.com 1 points 3 months ago

That link took me to a site with what I think was Chinese on it.

[-] GeriatricGambino@lemmy.world 1 points 3 months ago

Think you meant kagi with a K dude

[-] gandalf_der_12te@discuss.tchncs.de 1 points 3 months ago

My dad never had the patience to write a single Python program. Last year (2025), with the help of ChatGPT, he wrote an entire Android app in about a week that displays values from some hardware sensor graphically (with a neat animation). It does help people. It makes mistakes, but so do humans. The question is: is it more productive to work with it than without? And I'd say, for some use cases, it is.

[-] mrgoosmoos@lemmy.ca 1 points 3 months ago

you have to remember how dumb the "average" person is, someone who absolutely thinks AI chatbots give good answers and doesn't notice or think about their accuracy

this post was submitted on 04 Jan 2026
45 points (100.0% liked)

World News
