submitted 1 month ago* (last edited 1 month ago) by venusaur@lemmy.world to c/asklemmy@lemmy.world

Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

(page 2) 50 comments
[-] detun3d@lemm.ee 11 points 4 weeks ago

Gen AI should be an optional tool to help us improve our work and life, not an unavoidable subscription service that makes it all worse and makes us dumber in the process.

[-] HakFoo@lemmy.sdf.org 11 points 4 weeks ago

Stop selling it at a loss.

When each ugly picture costs $1.75, and every needless summary or expansion costs 59 cents, nobody's going to want it.

[-] Goldholz@lemmy.blahaj.zone 11 points 4 weeks ago

Shutting these "AI"s down. The ones out for the public don't help anyone. They do more damage than they are worth.

[-] drmoose@lemmy.world 11 points 4 weeks ago

I'm generally pro AI but agree with the argument that having big tech hoard this technology is the real problem.

The solution is easy and right there in front of everyone's eyes. Force open source on everything. All datasets, models, model weights and so on have to be fully transparent. Maybe even hardware firmware should be open source.

This will literally solve every single problem people have other than energy use which is a fake problem to begin with.

[-] sweemoof@lemmy.world 10 points 4 weeks ago

The most popular models used online need to include citations for everything. It can be used to automate some white collar/knowledge work but needs to be scrutinized heavily by independent thinkers when using it to try to predict trends and future events.

As always, schools need to be better at teaching critical thinking, epistemology, and emotional intelligence way earlier than we currently do, and AI shows that rote subject matter is a dated way to learn.

When artists create art, there should be some standardized seal, signature, or verification that the artist did not use AI or used it only supplementally on the side. This would work on the honor system and just constitute a scandal if the artist is eventually outed as having faked their craft. (Think finding out the handmade furniture you bought was actually made in a Vietnamese factory. The seller should merely have their reputation tarnished.)

Overall I see AI as the next step in search engine synthesis, info just needs to be properly credited to the original researchers and verified against other sources by the user. No different than Google or Wikipedia.

[-] helpImTrappedOnline@lemmy.world 10 points 4 weeks ago

(Ignoring all the stolen work to train the models for a minute)

It's got its uses and potential, things like translations, writing prompts, or a research tool.

But all these products force it into places that clearly do not need it, solving problems that could be solved with two or three steps of logic.

The failed attempts at replacing jobs, screening resumes, or monitoring employees are terrible.

Lastly, the AI relationships are not good.

[-] Pulptastic@midwest.social 10 points 4 weeks ago

Reduce global resource consumption with the goal of eliminating fossil fuel use. Burning nat gas to make fake pictures that everyone hates is just the worst.

[-] Gradually_Adjusting@lemmy.world 10 points 4 weeks ago

Part of what makes me so annoyed is that there's no realistic scenario I can think of that would feel like a good outcome.

Emphasis on realistic, before anyone describes some insane turn of events.

[-] OTINOKTYAH@feddit.org 9 points 4 weeks ago

Not destroying but being real about it.

It's flawed as hell and feels like a hype to save big tech companies, while the end user gets a shitty product. But companies keep shoving it into apps and everything, even if it degrades the user experience (like Duolingo).

Also, yes, there need to be laws for that. I mean, if I download something illegally, I'll be put behind bars and can kiss my life goodbye. But if a megacorp does the same to train their LLM, "it's for the greater good". That's bullshit.

[-] chonkyninja@lemmy.world 9 points 4 weeks ago

I’d like for it to be forgotten, because it’s not AI.

[-] SkaveRat@discuss.tchncs.de 10 points 4 weeks ago

It is. Just not AGI.

[-] naught101@lemmy.world 7 points 4 weeks ago

It's AI in so far as any ML is AI.

[-] endeavor@sopuli.xyz 8 points 4 weeks ago* (last edited 4 weeks ago)

More regulation, supervised development, laws limiting training data to be consensual.

[-] SoftestSapphic@lemmy.world 8 points 4 weeks ago

I want the companies that run LLMs to be forced to pay for the copyrighted training data they stole to train their auto complete bots.

I want us to keep chipping away at actually creating REAL ARTIFICIAL INTELLIGENCE that can reason, understand itself, and function autonomously, like living things. Marketing teams are calling everything AI, but none of it is actually intelligent; it's just okay at sounding intelligent.

I want people to stop gaslighting themselves into thinking this autocomplete web-searching bot is comparable to a human in any way. The difference between ChatGPT and Google's search aggregation ML algorithm is the LLM on top that makes it sound like a person. But it only sounds like a person; it's nowhere close. Yet we have people falling in love with and worshipping chatbots like gods.

Also the insane energy consumption makes it totally unsustainable.

TL;DR: AI needs to be actually intelligent, not marketing teams gaslighting us. People need to be taught that these things are nowhere close to human and won't be for a very long time, despite their parroting of human speech. And they are rapidly destroying the planet.

[-] calcopiritus@lemmy.world 8 points 4 weeks ago

Energy consumption limit. Every AI product has a consumption limit of X GJ. After that, the server just shuts off.

The limit should be high enough to not discourage research that would make generative AI more energy efficient, but it should be low enough that commercial users would be paying a heavy price for their waste of energy usage.

Additionally, data usage consent for generative AI should be opt-in. Not opt-out.

[-] Tehdastehdas@lemmy.world 8 points 4 weeks ago* (last edited 4 weeks ago)

We're making the same mistake with AI as we did with cars: not planning for humanity's future.

Cars were designed to atrophy muscles, and polluted urban planning and the air.
AI is being designed to atrophy brains, and pollutes the air, the internet, public discourse, and more to come.

We should change course towards AI that makes people smarter, not dumber: AI-aided collaborative thinking.
https://www.quora.com/Why-is-it-better-to-work-on-intelligence-augmentation-rather-than-artificial-intelligence/answer/Harri-K-Hiltunen

[-] HeartyOfGlass@lemm.ee 8 points 4 weeks ago

My fantasy is for "everyone" to realize there's absolutely nothing "intelligent" about current AI. There is no rationalization. It is incapable of understanding & learning.

ChatGPT et al are search engines. That's it. It's just a better Google. Useful in certain situations, but pretending it's "intelligent" is outright harmful. It's harmful to people who don't understand that & take its answers at face value. It's harmful to business owners who buy into the smoke & mirrors. It's harmful to the future of real AI.

It's a fad. Like NFTs and Bitcoin. It'll have its die-hard fans, but we're already seeing the cracks - it's absorbed everything humanity's published online & it still can't write a list of real book recommendations. Kids using it to "vibe code" are learning how useless it is for real projects.

Legislation

[-] mad_djinn@lemmy.world 7 points 4 weeks ago

force companies to pay for the data they scraped from copyrighted works. break up the largest tech conglomerates so they cannot leverage their monopolistic market positions to further their goals, which includes the investment in A.I. products.

ultimately, replace the free market (cringe) with a centralized computer system to manage resource needs of a socialist state

also force Elon Musk to receive a neuralink implant and force him to hallucinate the ghostly impressions of spongebob squarepants laughing for the rest of his life (in prison)

[-] DomeGuy@lemmy.world 7 points 4 weeks ago

Honestly, at this point I'd settle for just "AI cannot be bundled with anything else."

Neither my cell phone nor TV nor thermostat should ever have a built-in LLM "feature" that sends data to an unknown black box on somebody else's server.

(I'm all down for killing with fire and debt any model built on stolen inputs, too. OpenAI should be put in a hole so deep that they're neighbors with Napster.)

[-] SuperNovaStar@lemmy.blahaj.zone 7 points 4 weeks ago

AI overall? Generally pro. LLMs and generative AI, though, I'm "against", mostly meaning that I think it's misused.

Not sure what the answer is, tbh. Reining in corporations would be good.

I do think we as a society need to radically alter our relationship to IP law. Right now we 'enforce' IP law in a way that benefits corporations but not individuals. We should either get rid of IP law altogether (which would protect people from corporations abusing the laws) or we should enforce it more strictly, and actually hold corporations accountable for breaking it.

If we fixed that, I think gen AI would be fine. But we aren't doing that.

[-] some_guy@lemmy.sdf.org 6 points 4 weeks ago

I want OpenAI to collapse.

[-] Retro_unlimited@lemmy.world 6 points 4 weeks ago

I was pro AI in the past, but seeing the evil ways these companies use AI just disgusts me.

They steal their training data, and they manipulate the algorithm to manipulate the users. It’s all around evil how the big companies use AI.

[-] BackgrndNoize@lemmy.world 6 points 4 weeks ago

Make it unprofitable for the companies peddling it, by passing laws that curtail its use, by suing them for copyright infringement, by social shaming and shitting on AI generated anything on social media and in person and by voting with your money to avoid anything that is related to it

[-] Zwuzelmaus@feddit.org 6 points 4 weeks ago

I want lawmakers to require proof that an AI is adhering to all laws, putting the burden of proof on the AI's makers and users, and to require the ability to analyze all of an AI's actions regarding this question in court cases.

This would hopefully lead to the development of better AIs that are more transparent, and that are able to adhere to laws at all, because the current ones lack this ability.

[-] Soapbox1858@lemm.ee 6 points 4 weeks ago

I think many comments have already nailed it.

I would add that while I hate the use of LLMs to completely generate artwork, I don't have a problem with AI-enhanced editing tools. For example, AI-powered noise reduction for high-ISO photography is very useful. It's not creating the content, just helping fix a problem. Same with AI-enhanced retouching, to an extent. If the tech can improve and simplify the process of removing an errant power line, dust speck, or pimple in a photograph, then it's great. These use cases help streamline otherwise tedious bullshit work that photographers usually don't want to do.

I also think it's great hearing how the tech is improving scientific endeavors, helping to spot cancers, etc. As long as it is done ethically, these are great uses for it.

this post was submitted on 18 May 2025
208 points (94.4% liked)

Ask Lemmy

32563 readers
1047 users here now

A Fediverse community for open-ended, thought provoking questions


Rules:


1) Be nice and have fun. Doxxing, trolling, sealioning, racism, and toxicity are not welcome in AskLemmy. Remember what your mother said: if you can't say something nice, don't say anything at all. In addition, the site-wide Lemmy.world terms of service also apply here. Please familiarize yourself with them.


2) All posts must end with a '?'. This is sort of like Jeopardy. Please phrase all post titles in the form of a proper question ending with '?'.


3) No spam. Please do not flood the community with nonsense. Actual suspected spammers will be banned on sight. No astroturfing.


4) NSFW is okay, within reason. Just remember to tag posts with either a content warning or a [NSFW] tag. Overtly sexual posts are not allowed; please direct them to either !asklemmyafterdark@lemmy.world or !asklemmynsfw@lemmynsfw.com. NSFW comments should be restricted to posts tagged [NSFW].


5) This is not a support community.
It is not a place for 'how do I?' type questions. If you have any questions regarding the site itself or would like to report a community, please direct them to Lemmy.world Support or email info@lemmy.world. For other questions, check our partnered communities list, or use the search function.


6) No US Politics.
Please don't post about current US politics. If you need to do this, try !politicaldiscussion@lemmy.world or !askusa@discuss.online.


Reminder: The terms of service apply here too.

Partnered Communities:

Tech Support

No Stupid Questions

You Should Know

Reddit

Jokes

Ask Ouija


Logo design credit goes to: tubbadu


founded 2 years ago
MODERATORS