How can I say this? Enormous power in unskilled and greedy hands only leads to collapse. And AI is a control tool, not an assistant as you think. I'm not even getting into how it kills the living soul and makes life empty and dead. For me personally, it is a serious threat. I advise you not to be too optimistic; we are not in some kind of utopia, you know?
It's a glorified parakeet
"All of your issues with ai go away if capitalism goes away"
Word. Clearly, capitalism drives the world economy, so...
current AI is absolutely not better than sci-fi AI, not by a long shot.
I do think LLMs are interesting and have neat potential for highly specific applications, though I also have many ethical concerns regarding the data they are trained on. AI is a corporate buzzword designed to attract investment, and nothing more.
What you're calling AI is a mass-marketing Ponzi scheme. LLMs are not even actual AI. Beyond that issue, its development is in the hands of capital exclusively, and it will only exist to serve capital interests, which come at the expense of the lower and working classes by necessity given what corporations (which are essentially unregulated in the current climate) are designed to do. What you're calling AI will only be used to hurt human lives and worsen living conditions for all of us (before you nitpick, I think enabling the 0.1% and their hoarding pathology hurts them too). I personally believe you're already aware of that and are cynically trolling, and despite that I'm giving you the honest truth and factual reality of this subject, because there is nothing good about being a techno-fetishist sociopath who thinks the answer to humanity's problems is to make humanity itself obsolete, even if it's 'cool'. You clearly got the wrong fucking message from Terminator.
This is why when actual AI emerges I can only hope it'll be in the hands of a public or collective development process and designed with an intent of progression and cooperation in mind.
Oh man you're damn right.
I will preface this with my usual disclaimer on such topics: I do not believe in intellectual property (that is, the likening of thought to physical possessions). I do not think remixing is a sin and I largely agree with the Electronic Frontier Foundation's take that "AI training" may largely be fair use. So, I don't think so-called "generative AI" is inherently evil, however in practice I think it is very often used for evil today.
The most obvious example is, of course, the threat to the work force. "AI" is pitched as a tool that can replace human workers and "wipe out entire categories of human jobs." Ethical issues aside, "AI" as it exists today is not capable of doing what its evangelists sell it as. "AI chat bots" do not know, but they can give off a very convincing impression of knowledge.
"AI" is also used as a tool to pollute the web with worse-than-worthless garbage. At best it is meaningless and at worst it is actively harmful. I would actually say machine generated text is worse than imagery here, because it feels almost impossible to do a web search without running into some LLM generated blog spam.
Creators of "AI" systems use scraper bots to collect data for training. I do not necessarily believe this is evil per se, but again - these bots are not well behaved. They cause real problems for real human users, far beyond "stealing jpegs." There is a sense of Silicon Valley entitlement here - we can do whatever we want and deal with the consequences later, or never.
I have long held that a tool, like any human creation, is imbued with the values and will of its creators, and thus must serve both the creator and the user as its masters (The software freedom movement is largely an attempt at reconciling these interests, by empowering users with the ability to change their tools to do their bidding). In the case of "Generative AI" it is very often the case that both the creators and users of these tools intend them for evil. We often make the mistake of attributing agency to these computer programs, so as to minimize the human element (perhaps, in order to create a "man vs machine" narrative). We speak of "AI" as if it just woke up one day, a la Skynet, in order to steal our jpegs and put us out of work and generate mountains of webslurry. Make no mistake, however - the problems with "AI" are human problems. Humans created these systems in order for other humans to use, in order to inflict harm to other humans. "AI slop" was created specifically for an environment in which human-generated slop already ran amok, because the web as it existed then (as it exists today) rewards the generation of slop.
Oh, I'm afraid this is just the beginning. It will only get worse, because as you know, we live in the last stage of capitalism. And that means maximizing profits at any cost. At first I still hoped that everything would not be so bad, but 2023-2024 opened my eyes and I realized that AI is more of a threat than a useful tool.
There was a lawyer recently who used a chatbot to write a motion filed in court. He got all sorts of case-law citations from it. The problem? None of the cases were real.
ignoring the hate-brigade, lemmy users are probably a bit more tech savvy on average.
and i think many people who know how "AI" works under the hood are frustrated because, unlike most of its loud proponents, they have a real-world understanding of what it actually is.
and they're tired of being told they "don't get it", by people who actually don't get it. but instead they're the ones being drowned out by the hype train.
and the thing fueling the hype train are dishonest greedy people, eager to over-extend the grift at the expense of responsible and well engineered “AI”.
but, and this is the real crux of it, keeping the amazing true potential of “AI” technology in the hands of the rich & powerful. rather than using it to liberate society.
lemmy users are probably a bit more tech savvy on average.
Second this.
but, and this is the real crux of it, keeping the amazing true potential of “AI” technology in the hands of the rich & powerful. rather than using it to liberate society.
Leaving public interests (data and everything around data) to the hands of top 1% is a recipe for disaster.
It's not smart. It's a theft engine that averages information and confidently speaks hallucinations, insisting they're fact. AI sucks. It won't ever be AGI because it doesn't "think"; it runs models and averages. It's autocomplete at huge scale. It burns the earth and produces absolute garbage.
The only LLMs doing anything good are the ones where averaging over a large data model happened to suit a specific case, like scanning millions of cancer images and looking for patterns.
This does not work for deterministic answers. The "AI" we have now is corporate bullshit they are desperate to have make money and is a massive investor hype machine.
Stop believing the shitty CEOs.
Valid question on a community for questions. Tons of legitimate responses from people mostly hyped for the opportunity to shed light on why they think AI is bad. Which seems to be what OP wanted to figure out. Currently negative 25 for the votes on the post. Seems off.
OK remember like 70 years ago when they started saying we were burning the planet? And then like 50 years ago they were like "no guys we're really burning the planet"? And then 30 years ago they were like "seriously we're close to our last chance to not burn the planet"? and then in the past few years they've been like "the planet is currently burning, species are going extinct, and we are beginning to experience permanent effects that might not snowball into an extinction event if we act right now?"
But sure, AI is really cool and can trick you, personally into thinking it's conscious. It's just using nearly as much power as the whole of Japan, but you're giggling and clapping along, so how bad can it really be? It's just poisoning the air and water to serve you nearly accurate information, when you could have had accurate information by googling it for a fraction of the energy cost.
I hate AI because I'm a responsible adult.
Lemmy loves artists, who have their income threatened by AI because AI can make what they make at a substantially lower cost, with acceptable quality, in a fraction of the time.
AI depends on being trained on the artistic works of others, essentially intellectual and artistic property theft, so that you can make an image of a fat anime JD Vance. Calling it plagiarism is a bit far, but it edges so hard that it leaks onto the balls and could cum with a soft breeze.
AI consumes massive amounts of energy, which is supplied through climate hostile means.
AI threatens to take countless office jobs, which are some of the better paying jobs in metropolitan areas where most people can't afford to live.
AI is a party trick; it is not comparable to a human or to an advanced AI. It is unimaginative and not creative like an actual AI would be. Calling the current state of AI anything like advanced AI is like calling paint-by-numbers the result of artistry. It can rhyme, it can mimic, but it can never be original.
I think that about sums it up.
The less tech-savvy of lemmy
Acceptable quality is a bit of a stretch in many cases... Especially with the hallucinations everywhere in generated text.
also because it's just a way for big tech to harvest your data while stealing content from creators and destroying the planet
also because instead of actually innovating anymore, tech companies just jam ai slop into everything
Regarding the destruction of the planet, I think the world of Blade Runner is a great example of where we're headed. Or is there a better one?
It is the coolest invention since the Internet, and it is remarkable how closely it can resemble actual consciousness.
No. It isn't. First and foremost, it produces a randomised output that it has learned to make look like other stuff on the Internet. It has as much to do with consciousness as a set of dice, and the fact that you think it's more than that already shows how little you understand what it is and what it does.
AI doesn't produce anything new. It doesn't reason; it isn't creative. As it has no understanding or experience, it doesn't develop or change. Using it to produce art shows a lack of understanding of what art is supposed to be or accomplish. AI only chews up what's being thrown at it to vomit it onto the Web, without any hint of something new. It also lacks understanding about the world, so asking it about decisions to be made is like asking an encyclopedia that comes up with answers on the fly based on whether they sound nice, regardless of whether those answers are correct, applicable or even possible.
And on top of all of this, on top of people using a bunch of statistical dice rolls to rob themselves of experiences and progress that they'd have made had they made their own decisions or learned painting themselves, it's an example of "rules for thee, not for me". An industry that has lobbied against free information exchange for decades, that sent lawyers after people who downloaded decades-old books or movies for a few hours of private enjoyment, suddenly thinks that there might be the possibility of profits around the corner, so they break all the laws they helped create without even the slightest bit of self-awareness. Their technology is just a hollow shell that makes the Internet unusable for all the shit it produces, but at least it isn't anything else. Their business model, however, openly declares that people are only second-class citizens.
There you are. That's why I hate it. What's not to hate?
If you think that AI closely resembles a conscious flesh-and-blood human being, you need to go outside more. That is a dangerous line of thinking, and people are forming relationships with a shoddy simulacrum of humanity because of it. AI is still in its infancy, and it's only a matter of time before someone's grok waifu convinces them to shoot up a school.
Lots of good points in the replies here, but I want to make the perhaps secondary point that the automation of thought is generally just bad for you. Don't get me wrong, AI (even LLMs) has its uses, but we're already seeing the atrophying effects on some people, and in my experience as a teacher I have seen a number of people for whom chatbot dependency has become a serious problem on a par with drug addiction. I dread to think what's going to happen to these people when we enter the 'jack up the prices' phase of the grift, let alone the 'have you considered product/voting X may solve your problems' phase, which is currently only being held back by engineering difficulties.
Scientists have estimated that the power requirements of data centers in North America increased from 2,688 megawatts at the end of 2022 to 5,341 megawatts at the end of 2023, partly driven by the demands of generative AI. Globally, the electricity consumption of data centers rose to 460 terawatt-hours in 2022. This would have made data centers the 11th largest electricity consumer in the world, between the nations of Saudi Arabia (371 terawatt-hours) and France (463 terawatt-hours), according to the Organization for Economic Co-operation and Development.
By 2026, the electricity consumption of data centers is expected to approach 1,050 terawatt-hours (which would bump data centers up to fifth place on the global list, between Japan and Russia).
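A quick sanity check on those quoted figures (a sketch using only the numbers stated above, nothing new):

```python
# Figures as quoted above (approximate, as reported)
na_power_2022_mw = 2688   # North American data centers, end of 2022
na_power_2023_mw = 5341   # end of 2023

growth = na_power_2023_mw / na_power_2022_mw - 1
print(f"North American data-center power grew ~{growth:.0%} in one year")  # ~99%

# Global 2022 electricity use (TWh): data centers vs. the nations around them
consumers = {"France": 463, "data centers": 460, "Saudi Arabia": 371}
ranking = sorted(consumers, key=consumers.get, reverse=True)
print(ranking)  # ['France', 'data centers', 'Saudi Arabia']
```

In other words, the quoted numbers amount to a near doubling of North American data-center power demand in a single year, with global data-center consumption slotting in just below an entire industrialized nation.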
If your phone could do the ai trash, it would still be morally bankrupt and devoid of any humanity. But it's done on five GPUs that eat as much energy as your entire household.
The real usefulness of ai technology is probably limited to 1% of what it is now. Signal processing, protein folding, translation and transcription are all fine and well. But it is 99% spam and so I'll judge the technology on that.
I myself despise capitalism, and would not like to see the current global ecological disaster worsen because of some stupid-ass techbros forcing their shit on everyone.
AI isn't inherently a bad thing. My issues are primarily with how it is used, how it is trained, and the resources it consumes. I also have concerns about it being a speculative bubble.
LLMs and image generators are incredible inventions to be sure, but my main opposition to them is related to the very real negative outcomes of flooding our society with computer generated drivel.
- Small-time artists are fucked. Anyone and everyone who could make money from small commissions is now out of a job, period. Even though generators will never be as good as real artists, the fact is that most people don't care and the generation is good enough. Oh yeah, and real artists who are continuing to do real work regardless now have to live in a world where they can and will be accused of using the bots even when they aren't.
- Internet search is fucked. Search for an image and you'll have to sift through AI sites for the real thing, search on a topic and you'll be inundated with language model slop. Search music on sites like Spotify and certain genres are now swamped by "artists" who make an album a week of generated trash, making the already difficult problem of discoverability that much worse.
- People with certain kinds of susceptibility to addiction are fucked. There are now countless people who feel that they are in love with a chatbot, because they suffer from modern loneliness and have tricked themselves into seeing a Mechanical Turk as a real person. There are also people who have turned a chatbot into an abusive cult figure, people who've amplified delusions with them, and other terrible mental-health outcomes that will only keep getting more common.
- The fact that these text generators are so easily confused for thinking machines means that a genuinely alarming number of people are now offloading their ability to think critically to the bots. An entire generation of students are graduating high school and college right now having learned literally nothing. Those systems weren't perfect before but this is definitely worse.
There's more stuff, but I'll end this by saying that I've used an LLM to help me write code, and it's pretty good at doing repetitive writing that has to strictly follow a certain format. You still need to understand code in order to read and troubleshoot its output, though, which is why everything the so-called "vibe coders" make is so sloppy.
What's cool about it? How does it actually resemble consciousness?
I think AI is cool, but how people use it can be problematic.
- Fraud. It's easy to over-represent the capabilities and sell a bullshit tool to people.
- Spyware. They require shedloads of data to train, so AI companies are doing whatever they can to get data on people.
- Taking jobs. This is an existential threat to entire professions.
- Spam. LLMs are a bullshit factory, so spamming and astroturfing are easier than ever.
AI isn't the solution to everything, despite what some tech companies might want you to believe. Many companies are pushing AI into products where it's not particularly helpful, leading to frustration among users, and that's the sentiment you're picking up.
Specifically, the backlash is usually directed at LLMs and image-generating AIs. You don't hear people complaining about useful AI applications, like background blurring in Teams meetings. This feature uses AI to determine which parts of the image to blur and which to keep sharp, and it's a great example of AI being used correctly.
Signal processing is another area where AI excels. Cleaning audio signals with AI can yield amazing results, though I haven't heard people complain about this use. In fact, many might not even realize that AI is being used to enhance their audio experience.
AI is just a tool, and like any tool, it needs to be used appropriately. You just need to know when and how to use AI—and when to opt for other methods.
BTW, even this text went through some AI modifications. The early draft was a bit messy, so I used an LLM to clean it up. As usual, the LLM went too far in some aspects, so I fixed the remaining issues manually.
ai is basically just a fancier google search that uses a shitload of energy and has no guarantee of accuracy. ai in movies is way more impressive and fantastical.
look at what happens when ai tries to actually do anything, for example self-driving. it just sucks. it's not actually intelligent at all.
So many places I could start when answering this question. I guess I'll just pick one.
It's a bubble. The hype is ridiculous. There's plenty of that hype in your post. The claims are that it'll revolutionize... well basically everything, really. Obsolete human coders. Be your personal secretary. Do your job for you.
Make no mistake. These narratives are being pushed for the personal benefit of a very few people at the expense of you and virtually everyone else. Nvidia and OpenAI and Google and IBM and so on are using this to make a quick buck. Just like they capitalized on (and encouraged) a bubble back around the turn of the millennium that we now look back on with embarrassment.
In reality, the only thing AI is really effective as is a gimmicky "toy" that entertains as long as the novelty hasn't worn thin. There's very little real-world application. LLMs are too unreliable at getting facts straight and not making up BS to be trusted for any real-world use case. Image-generating "AIs" like Stable Diffusion produce output (and by "produce output" I mean rip off artists) that all has a similar, fakey appearance, with major, obvious errors that generally instantly identify it as low-effort "slop". Any big company that claims to be using AI in any serious capacity is lying either to you or to themselves. (Possibly both.)
And there's no reason to think it's going to get better at anything, "AI industry" hype notwithstanding. ChatGPT is not a step in the direction of general AI. It's a distraction from any real progress in that direction.
There's a word for selling something based on false promises. "Scam." It's all to hoodwink people into giving them money.
And it's convincing dumbass bosses who don't know any better. Our jobs are at risk. Not because AI can do your job just as well or better. But because your company's CEO is too stupid not to fall for the scam. By the time the CEO gets removed by the board for gross incompetence, it'll be too late for you. You will have already lost your job by then.
Or maybe your CEO knows full well AI can't replace people and is using "AI" as a pretense to lay you off and replace you with someone they don't have to pay as much.
Now before you come back with all kinds of claims about all the really real real-world applications of AI, understand that that's probably self-deception and/or hype you've gotten from AI grifters.
Finally, let me back up a bit. I took a course in college, probably back in 2006 or so, called "introduction to artificial intelligence". In that course, I learned about, among other things, the "A* algorithm". If you've ever played a video game where an NPC or enemy followed your character, the A* algorithm or some slight variation on it was probably at play. The A* algorithm is completely unlike LLMs, "generative AI", and whatever other buzzwords the AI grifting industry has come up with lately. It doesn't involve training anything on large data sets. It doesn't require a powerful GPU. When it gives you a particular output, you can examine the algorithm to understand exactly why it did what it did, unlike LLMs, whose answers can't be traced back to the particular training data that produced them. The A* algorithm has been known and well-understood since 1968.
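To illustrate the point, here's a minimal textbook-style A* sketch in Python (my own toy example, not anything from that course): grid pathfinding with a Manhattan-distance heuristic, where every step of the search can be traced by hand.

```python
import heapq

def a_star(grid, start, goal):
    """A* pathfinding on a 2D grid (0 = open, 1 = wall).

    Manhattan distance is an admissible heuristic here, so the
    returned path is provably shortest. No training data, no GPU.
    """
    def h(p):  # heuristic: Manhattan distance to the goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    # Priority queue of (f = g + h, g = cost so far, node, path taken)
    open_set = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                ng = g + 1
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(
                        open_set,
                        (ng + h((r, c)), ng, (r, c), path + [(r, c)]),
                    )
    return None  # no path exists

grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (0, 2)))  # shortest route around the wall
```

Every expansion is driven by the explicit `f = g + h` ordering in the priority queue, which is exactly why you can audit its decisions in a way you can't with an LLM.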
That kind of "AI" is fine. It's provably correct and has utility. Basically, it's not a scam. It's the shit that people pretend is the next step on the path to making a Commander Data -- or the shit that people trust blindly when its output shows up at the top of their Google search results -- that needs to die in a fire. And the sooner the better.
But then again, blockchain is still plaguing us after like 16 years. So I don't really have a lot of hope that enough average people are going to wise up and see the AI scam for what it really is any time soon.
The future is bleak.
- How Generative AI is framed by the industry is an affront to humanistic values.
- It's not even close to consciousness. For those of us who understand how these things work it's almost an insult to make the comparison.
https://addxorrol.blogspot.com/2025/07/a-non-anthropomorphized-view-of-llms.html
- All the other unethical considerations mentioned elsewhere by others (stealing art, water usage, politics).
But simply disliking even a privacy-conscious experience or use of AI at all? That I don't get.
I have heard zero people object to this extremely narrow definition of AI. This is an extremely fragile straw man that has no relationship with reality.
Don't get me wrong, I am absolutely anti "AI baked into my operating system and cell phone so that it can monitor me and sell me crap"
Well I’ve got bad news for you, that’s the only thing that the AI industry gives a shit about. That and pretending that they can replace human labor to justify wage depression and layoffs.
Also, you didn’t mention the excessive environmental impacts, or the fact that the industry is hemorrhaging money with no clear path to viability.
If you can’t think of at least a few reasons to be speculative about the current state of the AI industry then I’d go take a closer look at what’s actually going on.
The short answer is copyright theft, energy consumption, and job displacement. While all those issues are 100% correct, there is also a huge unspoken factor of "you must be against it otherwise you are a brainwashed idiot" because, let's be honest, the hive mind is real.
Most of the modern no-AI luddites fail to understand that AI has been around for decades in various forms and this is just the last, most visible incarnation. It is here to stay, and it will grow as well. At this rate of adoption, in a few years it will be as normal as having a mobile phone (they weren't around only 20 years ago).
My humble prediction is that all the concerns around AI will be addressed with time: by better hardware, better cooling mechanisms, better energy production, and new jobs that leverage AI instead of competing with it; and surely copyright will find a new balance as well (just like MP3, Napster, and Spotify did not kill the music industry).
By the way, AI doesn't spy on you. An AI model is immutable once it's trained. The software using the model is spying on you, but that's true for any scumbag-driven software you use. It's essentially the same whether you type your secrets into Google Docs or into ChatGPT.
At this rate of adoption, in a few years it will be as normal as having a mobile phone (they weren't around only 20 years ago)
First, mobile phones were extremely common in 2005 (20 years ago), even I had one, and I was literally a child.
Second, and this is the part I'm actually curious about: I wonder if there were people in the 80s and 90s (when mobile phones were actually rare, but becoming more common) who felt the same pure, visceral disgust for them that I feel for LLMs. I sort of suspect not, but I could be wrong, and I'd be curious to read anti-cell phone writing from that era, to see what people were worried about and whether those worries are in any way the same as the current worries I (and many others) have about LLMs.
Asklemmy
A loosely moderated place to ask open-ended questions