For those who know

Now see, I like the idea of AI.
What I don't like are the implications, and the current reality of AI.
I see businesses embracing AI without fully understanding its limits: halting the hiring of junior developers and often firing large numbers of seniors because they think AI, a group of cheap post-grad vibe programmers, and a handful of seasoned seniors will equal the workforce they got rid of, when AI, while very good, is not ready to sustain this. It is destroying career progression for the industry, and even if/when they realise it was a mistake, it might already have devastated the industry by then.
I see the large tech companies tearing through the web, illegally sucking up anything they can access to pull into their ever more costly models, with zero regard for the effects on the economy, the cost to the servers they are hitting, or the environment, given the huge power draw that creating these models requires.
It's a nice idea, but private business cannot be trusted to do this right; we're seeing how to do it wrong, live before our eyes.
And the whole AI industry is holding up the stock market, while AI has historically always run the hype cycle and crashed into an AI winter. Stock markets do crash after billions pumped into a sector suddenly turn out to be not worth as much. Almost none of these AI companies run a profit, and they don't have any prospect of becoming profitable. It's when everybody starts yelling that this time it's different that things really become dangerous.
and don't have any prospect of becoming profitable
There's a real twist here in regards to OpenAI.
They have some kind of weird corporate structure where OpenAI is a non-profit that owns a for-profit arm. But the deal they have with Softbank is that they have to transition to a for-profit by the end of the year or they lose out on the $40 billion Softbank invested. If they don't manage to do that, Softbank can withhold something like $20B of the $40B, which would be catastrophic for OpenAI. Transitioning to a for-profit is not something that can realistically be done by the end of the year, even if everybody agreed on that transition, and key people don't agree on it.
The whole bubble is going to pop soon, IMO.
Yep, exactly.
They knew the housing/real estate bubble would pop, as it currently is...
... So, they made one last gambit on AI as the final bubble, the one that would magically become superintelligent and solve literally all problems.
This was never going to work, and is not working, because the underlying tech of LLMs has no actual mechanism by which it would or could develop complex, critical, logical analysis / theorization / metacognition that isn't just a schizophrenic manic episode.
LLMs are fancy, inefficient autocomplete algos.
That's it.
They achieve a simulation of knowledge via consensus, not analytic review.
They can never be more intelligent than an average human with access to all the data they've ... mostly illegally stolen.
The entire bet was 'maybe superintelligence will somehow be an emergent property, just give it more data and compute power'.
And then they did that, and it didn't work.
I agree with everything you said, but that doesn't mean it can't be very useful in many fields.
I mean, it is objectively bad for life. Throwing away millions to billions of gallons of water all so you can get some dubious coding advice.
The problem isn't AI. The problem is Capitalism.
The problem is always Capitalism.
AI, Climate Change, rising fascism, all our problems are because of capitalism.
Not all AI is bad. But there’s enough widespread AI that’s helping cut jobs, spreading misinformation (or in some cases, actual propaganda), creating deepfakes, etc, that in many people’s eyes, it paints a bad picture of AI overall. I also don’t trust AI because it’s almost exclusively owned by far right billionaires.
Machines replacing people is not a bad thing if they can actually perform the same or better; the solution to unemployment would be Universal Basic Income.
For labor people don't like doing, sure. I can't imagine replacing a friend of mine with a conversation machine that performs the same or better, though.
Do you really need to have a list of why people are sick of LLM and AI slop?
AI is literally making people dumber:
https://www.theregister.com/2025/06/18/is_ai_changing_our_brains/
They are a massive privacy risk:
https://www.youtube.com/watch?v=AyH7zoP-JOg&t=3015s
Are being used to push fascist ideologies into every aspect of the internet:
https://newsocialist.org.uk/transmissions/ai-the-new-aesthetics-of-fascism/
And they are a massive environmental disaster:
https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
Stop being a corporate apologist and stop wrecking the environment with this shit technology.
Edit: thank you to every AI apologist outing themselves in the comments. Thank you for making blocking you easy.
AI is literally making people dumber:
And books destroyed everyone's memory. People used to have fantastic memories.
They are a massive privacy risk:
No different than the rest of cloud tech. Run your AI local like your other self hosting.
Are being used to push fascist ideologies into every aspect of the internet:
Hitler used radio to push fascism into every home. It's not the medium, it's the message.
And they are a massive environmental disaster:
AI uses a GPU just like gaming uses a GPU. Building a new AI model uses the same energy that Rockstar spent developing GTA5. But it's easier to point at a centralized data center polluting the environment than thousands of game developers spread across multiple offices creating even more pollution.
Stop being a corporate apologist
Run your own AI! Complaining about "corporate AI" is like complaining about corporate email. Host it yourself.
Do you really need to have a list of why people are sick of LLM and AI slop?
With the number of times that refrain is regurgitated here ad nauseam, "need" is an odd way to put it. "Sick of it" might fit sentiments better. "Done with this and not giving a shit" is another.
If you ever take a flight for a holiday, or even drive long distance, and cry about AI being bad for the environment, then you're a hypocrite.
Same goes if you eat beef, or have a really powerful gaming rig that you use a lot.
There are plenty of valid reasons AI is bad, but the argument for the environment seems weak, and most people using it are probably hypocrites. It's barely a drop in the bucket compared to other things
This echo chamber isn't ready for this logical discussion yet unfortunately lol
Texas has just asked residents to take fewer showers while datacenters built specifically for LLM training continue operating.
This is more like feeling bad for not using a paper straw while the local factory dumps all its used oil into the community river.
AI is literally making people dumber: https://www.microsoft.com/en-us/research/wp-content/uploads/2025/01/lee_2025_ai_critical_thinking_survey.pdf
We surveyed 319 knowledge workers who use GenAI tools (e.g., ChatGPT, Copilot) at work at least once per week, to model how they enact critical thinking when using GenAI tools, and how GenAI affects their perceived effort of thinking critically. Analysing 936 real-world GenAI tool use examples our participants shared, we find that knowledge workers engage in critical thinking primarily to ensure the quality of their work, e.g. by verifying outputs against external sources. Moreover, while GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving. Higher confidence in GenAI’s ability to perform a task is related to less critical thinking effort. When using GenAI tools, the effort invested in critical thinking shifts from information gathering to information verification; from problem-solving to AI response integration; and from task execution to task stewardship. Knowledge workers face new challenges in critical thinking as they incorporate GenAI into their knowledge workflows. To that end, our work suggests that GenAI tools need to be designed to support knowledge workers’ critical thinking by addressing their awareness, motivation, and ability barriers.
I would not say "can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving" equates to "literally making people dumber". A sample size of 319 isn't really representative anyway, and the sample was mainly one specific type of person. People switch from searching to verifying, which doesn't sound too bad if done correctly. They associate critical thinking with verifying everything ("Higher confidence in GenAI’s ability to perform a task is related to less critical thinking effort"), and I'm not sure I agree with that.
This study is also aimed only at people at work, not at regular use. I personally discovered so many things with GenAI, and I know to always question what the model says when it comes to specific topics or questions, because they tend to hallucinate. You could also say the internet made people dumber, but those who know how to use it will be smarter.
https://www.theregister.com/2025/06/18/is_ai_changing_our_brains/
They had to write an essay in 20 minutes... obviously most people would just generate the whole thing and fix little problems here and there, but if you have to think less because you're just fixing stuff instead of inventing... well, yeah, you use your brain less. Does that make you dumb? It's a bit like saying paying by card makes you dumber because you use less of your brain than when paying in cash, where you have to count how much to give and how much to get back.
Yes, if you get helped by a tool or someone, it will be less intensive for your brain. Who would have thought?!
Lots of AI is technologically interesting and has tons of potential, but this kind of chatbot and image/video generation stuff we got now is just dumb.
I firmly believe we won't get most of the interesting, "good" AI until after this current AI bubble bursts and goes down in flames. Once AI hardware is cheap, interesting people will use it to make cool things. But right now, the big players in the space are drowning out anyone who might do real AI work with potential, by throwing more and more hardware and money at LLMs and generative AI models, because they don't understand the technology and see it as a way to get rich and powerful quickly.
It's true. We can have a nuanced view. I'm just so fucking sick of the paid-off media hyping this shit, and normies thinking it's the best thing ever when they know NOTHING about it. And the absolute blind trust and corpo worship make me physically ill.

I personally think of AI as a tool, what matters is how you use it. I like to think of it like a hammer. You could use a hammer to build a house, or you could smash someone's skull in with it. But no one's putting the hammer in jail.
The currently hot LLM technology is very interesting, and I believe it has legitimate use cases if we develop it into tools that assist work. (For example, I'm very intrigued by the stuff that's happening in the accessibility field.)
I mostly have a problem with the AI business. Ludicrous use cases (shoving AI into places where it has no business being). Sheer arrogance about the sociopolitics in general. Environmental impact. LLMs aren't good enough for "real" work, but snake oil salesmen keep saying they can do it, and uncritical people keep falling for it.
And of course, the social impact was just not what we were ready for. "Move fast and break things" may be a good mantra for developing tech, but not for releasing stuff that has vast social impact.
I believe the AI business and the tech hype cycle are ultimately harming the field. Previously, AI technologies were gradually developed and integrated into software where they served a purpose. Now the field is marred with controversy for decades to come.
I'm a lot more sick of the word 'slop' than I am of AI. Please, when you criticize AI, form an original thought next time.
Welcome to Lemmy Shitpost. Here you can shitpost to your hearts content.
Anything and everything goes. Memes, Jokes, Vents and Banter. Though we still have to comply with lemmy.world instance rules. So behave!
1. Be Respectful
Refrain from using harmful language pertaining to a protected characteristic: e.g. race, gender, sexuality, disability or religion.
Refrain from being argumentative when responding or commenting to posts/replies. Personal attacks are not welcome here.
...
2. No Illegal Content
Content that violates the law. Any post/comment found to be in breach of common law will be removed and given to the authorities if required.
That means:
-No promoting violence/threats against any individuals
-No CSA content or Revenge Porn
-No sharing private/personal information (Doxxing)
...
3. No Spam
Posting the same post, no matter the intent, is against the rules.
-If you have posted content, please refrain from re-posting said content within this community.
-Do not spam posts with intent to harass, annoy, bully, advertise, scam or harm this community.
-No posting Scams/Advertisements/Phishing Links/IP Grabbers
-No Bots, Bots will be banned from the community.
...
4. No Porn/Explicit Content
-Do not post explicit content. Lemmy.World is not the instance for NSFW content.
-Do not post Gore or Shock Content.
...
5. No Inciting Harassment, Brigading, Doxxing or Witch Hunts
-Do not Brigade other Communities
-No calls to action against other communities/users within Lemmy or outside of Lemmy.
-No Witch Hunts against users/communities.
-No content that harasses members within or outside of the community.
...
6. NSFW should be behind NSFW tags.
-Content that is NSFW should be behind NSFW tags.
-Content that might be distressing should be kept behind NSFW tags.
...
If you see content that is a breach of the rules, please flag and report the comment and a moderator will take action where they can.
Also check out:
Partnered Communities:
1. Memes
10. LinuxMemes (Linux themed memes)
Reach out to Striker.
All communities included on the sidebar are to be made in compliance with the instance rules.