I wouldn't pay money for access to AI. The convenience is not worth a single cent to me. But am I the average person? Is the average person sold on this nonsense enough to subscribe to it? The first hit is free to get you hooked. So if the plan is to get the average person dependent on it while it's free and then eventually charge for it, I'm not buying, and I wonder how many people will. AI output is fucking garbage.
Two of my friends are paying for it. One works as a developer and one in DevOps. Currently, both of them have a ChatGPT subscription. The first one now shares a lot of DALL-E images picturing his dog, and the other one recently showed us, proudly, how he could tell ChatGPT about our DnD session so that it generates a summary for us. The latter took nearly forever and had a lot of funny errors in it.
I really don't get why people are paying over 20€/month for this shit.
If you work in tech, it's very useful. It's either get on board or get left behind. I absolutely hate how it's used in a lot of cases. It's really gross to pay attention to imagery on the Internet now. People post AI slop for advertisements and don't realize a few people's faces are bashed in and disfigured. It's disgusting.
I am a software engineer, and nearly every time I used a model for something, it made shit up that didn't work that way or didn't even exist. I always ended up reading the documentation and fixing the problem myself.
The only thing AI is somewhat decent at in the context of software development is code completion. JetBrains' models do an OK job in that regard.
20 bucks a month is basically nothing for a developer who's making $100 an hour.
My employer pays for copilot, and yeah, it makes mistakes, but if you pretend it's a junior developer and double check its code, it can easily save time on a lot of tedious work, and will turn hours of typing into fewer hours of reading.
You have no idea the long term impact such a tool has on a codebase. The more it generates the less you understand, regardless of how much you "check" the output.
I work as a senior dev, and I've tested just about all the foundational models (and many local ones through Ollama) for both professional and personal projects. In 90% of the cases I've tested, the conclusion was the same: if I had just done the work myself from the beginning, I would have had a working result that's cleaner and functions better, in less time.
Generated code can work for a few lines, for some boilerplate, or for some refactoring, but anything beyond that is just asking for trouble.
can work for a few lines, for some boilerplate, or for some refactoring
I highly doubt the person you're replying to meant anything else. We're all kinda on the same page here.
I hope so, but you'd be surprised. I know some devs who basically think LLMs can do their work for them, and treat them as such. They get them to do multi-hundred-line edits with a single prompt.
In my experience, you need to be a senior developer who has watched their own code go through a full project lifecycle (most importantly, including the support, maintenance and even expansion stages) to really feel in your bones, not just know intellectually, the massive importance of the very coding practices that reduce lifetime maintenance costs. Those are exactly the practices LLMs don't clone (even with code reviews and fixes): even when cloning only "good" code, they can't manage things like consistency, especially at the design level.
- Inexperienced devs just count the time cost of LLM generation and think AI really speeds up coding.
- Somewhat experienced devs count that plus code-review costs and think it can sometimes make coding a bit faster.
- Very experienced devs look at the inconsistent, multiple-style, disconnected mess (even after code review) that results when all those generated snippets get integrated, add the costs of maintaining and expanding that codebase to the rest, and conclude: "even in the best case, in six months this shit will already have cost me more time overall, even if I refactor it, than doing it properly myself in the first place would have."
It's very much the same problem as having junior developers do part of the coding, only worse. At least junior devs are consistent, and hence predictable, in how they fuck up: you know what to look for, and once you find one mistake you know to look for more of the same. You can also actually teach junior developers so they get better over time, especially by focusing on the worst mistakes they make, whereas LLMs are unteachable and will never get better, plus their mistakes are pretty much randomly distributed across the error space.
You give coding tasks to junior devs in a controlled way, handling the impact of their mistakes, because you're investing in them. Doing the same with an LLM has a higher chance of returning high-impact mistakes and yields no such "investment" returns at all.
but if you pretend it’s a junior developer
Where do these geniuses think they’ll get senior developers from when the current cohort retires? How does someone become a senior developer? Surely not through years of experience as a junior developer under the mentorship of a senior.
This mentality is like burning down an apple tree after one harvest. Fucking idiots, the whole lot of them. I can’t wait for the day all these people wake up and start wandering around confused about why their new talent pool is empty.

I set up a local Ollama instance, trying to find ways to integrate it into my regular work. I do IT stuff, from basic helpdesk to Office 365 configs, and almost anything in between.
At best I just use it as a sounding board, basically rubber duck debugging.
I prefer the rubber duck.
Things that used to be free: Google searches, YouTube, Android, Reddit - all have enshittified in different ways (e.g. Reddit is still free of direct monetary charge, but now restrictive rather than "free").
AI is simply following this well-trodden path, or rather people are claiming that is what is happening.
I know a few people who subscribe who I never would have expected to do so, but I also know people who have started asking "why does Google show me an AI summary all the time when I don't need it?" I think any sheen it had is diminishing, slowly but surely.
I've come to the point of asking myself before every search: "could this info be found on Wikipedia?" That has saved me a good chunk of AI-slop interaction. For most other things I use ~~cagi.org~~ kagi.org
That link took me to a site with what I think was Chinese on it.
Think you meant kagi with a K dude
My dad never had the patience to write a single Python program. Last year (2025), with the help of ChatGPT, he wrote an entire Android app that displays values from some hardware sensor graphically (with a neat animation) in about a week. It does help people. It makes mistakes, but so do humans. The question is: is it more productive to work with it than without? And I'd say, for some use cases, it is.
At my university, people are paying for it and raving about it... that's terrifying.
They even talk daily about which model is best, just like children arguing over which superhero is stronger.
My management has fallen in love with it, but is considering dropping because the commercial fees to use copilot haven't seen a return on investment.
Out of the three people at my company who pay for ChatGPT or Grok, the two ChatGPT users are too reliant on it, while the Grok user believes he is talking to a living superintelligence.
You have to remember how dumb the "average" person is: someone who absolutely thinks AI chatbots give good answers and never notices or thinks about their accuracy.
That's all we hear while CEOs, managers, techbros, and AI "artists" are still trying to hype genAI . . .

The article the Guardian author used as a primary source (and referenced) is amazing, probably the best article I've read in the past year.
Honestly, I'd love to see how these companies figure they'll make a profit in any given future. There are not enough humans with the money or the inclination to pay even a minimal subscription. This is why they're jamming it up our ass. An AI subscription will have to become the next internet or phone bill for this thing to even think about making a profit.
But what about commercial uses? There are plenty, but not enough to make a profit. Companies are already cautiously rolling back subscriptions.
An AI subscription will have to be the next internet or phone bill for this thing to even think about making a profit.
Not really, since even the paying subscribers are costing the companies money. If they were bakers, they'd be selling $1.00 loaves of bread that cost $2.50 to knead and bake, and that's not even counting the fact that you need to buy the flour first.
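To put the bakery analogy in plain numbers (these figures come from the analogy above, not from any real OpenAI accounting):

```python
# Hypothetical unit economics from the bakery analogy:
# sell a loaf for $1.00 that costs $2.50 just to knead and bake.
price_per_loaf = 1.00
baking_cost_per_loaf = 2.50  # marginal cost only; "flour" (up-front training) excluded

loss_per_loaf = baking_cost_per_loaf - price_per_loaf
print(f"Loss per loaf sold: ${loss_per_loaf:.2f}")
```

Selling more loaves only deepens the hole: with negative unit margins, growth makes the losses bigger, not smaller.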
It's not a 'product' in the conventional sense. It's a gateway to an intelligent astroturf machine. Buy a ton of fake accounts on every social media platform, make them appear 'legit', then have bots comb for anything they can shoehorn a message into, and have your AI bot army manipulate public perception. That's the only use case I can see companies actually being willing to pay that kind of money for.
What was it again? OpenAI now covers almost a third of its running costs with subscriptions?
Looks like consumer subscriptions are 1/3rd of total revenue, which doesn't cover nearly 1/3rd of operating costs. Yikes! Worse than I thought.
They've gone into deep, deep debt, and the pay-off is looking more like vaporware every day. They dun fuckt up bad.
World News
A community for discussing events around the World