submitted 1 month ago* (last edited 1 month ago) by venusaur@lemmy.world to c/asklemmy@lemmy.world

Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

(page 3) 50 comments
[-] Zwuzelmaus@feddit.org 6 points 1 month ago

I want lawmakers to require proof that an AI is adhering to all laws, putting the burden of proof on the AI makers and users, and to require ways to analyze all of an AI's actions on this question in court cases.

This would hopefully lead to the development of better AIs that are more transparent, and that are able to adhere to laws at all, because the current ones lack this ability.

[-] Soapbox1858@lemm.ee 6 points 4 weeks ago

I think many comments have already nailed it.

I would add that while I hate the use of LLMs to completely generate artwork, I don't have a problem with AI-enhanced editing tools. For example, AI-powered noise reduction for high-ISO photography is very useful. It's not creating the content, just helping fix a problem. Same with AI-enhanced retouching, to an extent. If the tech can improve and simplify the process of removing an errant power line, dust speck, or pimple in a photograph, then it's great. These use cases help streamline otherwise tedious bullshit work that photographers usually don't want to do.

I also think it's great hearing about how the tech is improving scientific endeavors, helping to spot cancers, etc. As long as it is done ethically, these are great uses for it.

[-] TimLovesTech@badatbeing.social 5 points 1 month ago

I think the AI that helps us find/diagnose/treat diseases is great, and the model should be open to all in the medical field (opening it to everyone would, I feel, be easily abused by scammers and cause a lot of unnecessary harm - essentially, if you can't validate what it finds, you shouldn't be using it).

I'm not a fan of these next-gen IRC chatbots that have companies hammering sites all over the web to siphon up data they shouldn't be allowed to. And then pushing these bots into EVERYTHING! And like I saw a few mention, if their bots have been trained on unauthorized data sets, they should be forced to open source their models for the good of the people (since that is the BS reason OpenAI has been bending and breaking the rules).

[-] grasshopper_mouse@lemmy.world 3 points 1 month ago

That's what I'd like to see more of, too -- use it to cure fucking cancer already. Make it free to legitimate medical institutions, train doctors how to use it. I feel like we're sitting on a goldmine and all we're doing with it is stealing other people's intellectual property and making porn and shitty music.

[-] spankmonkey@lemmy.world 5 points 1 month ago

I want all of the CEOs and executives that are forcing shitty AI into everything to get pancreatic cancer and die painfully in a short period of time.

Then I want all AI that is offered commercially or in commercial products to be required to verify their training data and be severely punished for misusing private and personal data. Copyright violations need to be punished severely, and using copyrighted works for AI training counts.

AI needs to be limited to optional products trained with properly sourced data if it is going to be used commercially. Individual implementations and use for science is perfectly fine as long as the source data is either in the public domain or from an ethically collected data set.

[-] njm1314@lemmy.world 5 points 1 month ago

Just mass public hangings of tech bros.

[-] JTskulk@lemmy.world 5 points 1 month ago

2 chicks at the same time.

[-] mrodri89@lemmy.zip 5 points 4 weeks ago

I'm not a fan of AI because I think the premise of analyzing and absorbing work without consent from its creators is, at its core, bullshit.

I also think that AI is another step toward more efficient government spying.

Since AI learns from human content without consent, I think government should figure out how to socialize the profits. (Probably will never happen)

They should also regulate how data is stored, and ensure videos are clearly labeled if made with AI.

They also have to be careful to protect victims from revenge porn or generated content, and make sure people are held accountable.

[-] mesamunefire@piefed.social 4 points 4 weeks ago

I think it's important to figure out what you mean by AI.

I'm thinking a majority of people here are talking about LLMs, BUT there are other AIs that have been quietly worked on that are finally making huge strides.

AI that can produce songs (Suno) and replicate voices. AI that can reproduce a face from one picture (there are a couple of GitHub repos out there). When it comes to the above, we are dealing with copyright-infringement AI, specifically designed and trained on other people's work. If we really do have laws coming into place that will deregulate AI, then I say we go all in: open source everything (or as much as possible), make it so it's trained on all company-specific info too, and let anyone run it. I have a feeling we can't put the genie back in the bottle.

If we're going for pie-in-the-sky solutions, I would like a new iteration of the web, one that specifically makes it difficult or outright impossible to pull into AI. Something like onion routing, where only real nodes/people are accepted when ingesting the data.

[-] RandomVideos@programming.dev 4 points 4 weeks ago* (last edited 4 weeks ago)

It would be amazing if chat and text generation suddenly disappeared, but that's not going to happen.

It would be cool to make it illegal not to mark AI-generated images or text, and to not have them forced on viewers.

[-] Levitator2478@lemmy.ca 4 points 1 month ago* (last edited 1 month ago)

My biggest issue with AI is that I think it's going to allow a massive wealth transfer from laborers to capital owners.

I think AI will allow many jobs to become easier and more productive, and even eliminate some jobs. I don't think this is a bad thing - that's what technology is. It should be a good thing, in fact, because it will increase the overall productivity of society. The problem is that when new technology increases worker productivity, most of the benefits generally go to capital owners rather than the workers, even when their work contributed to the technological improvements, directly or indirectly.

What's worse, in the case of AI specifically, its functionality relies on being trained on enormous amounts of content that was not produced by the owners of the AI. AI companies are, in a sense, harvesting society's collective knowledge for free to sell it back to us.

IMO AI development should continue, but be owned collectively and developed in a way that genuinely benefits society. Not sure exactly what that would look like. Maybe a sort of light universal basic income where all citizens own stock in publicly run companies that provide AI and receive dividends. Or profits are used for social services. Or maybe it provides AI services for free but is publicly run and fulfills prosocial goals. But I definitely don't think it's something that should be primarily driven by private, for-profit companies.

[-] MoogleMaestro@lemmy.zip 4 points 1 month ago* (last edited 1 month ago)

What I want from AI companies is really simple.

We have a thing called intellectual property in the United States of America. If I decided to make a Jellyfin instance that I charged access to, containing material I didn't own, somehow advertising this service on the stock market as a publicly traded company, you can bet your ass that I'd have a one-way ticket to a defense seat in court.

AI companies, however, operate entirely on data they don't own and don't pay licensing for ANY of the materials used to train their neural networks. So, in their eyes, any image, video (TV show/movie), or book that happens to be posted on the Internet is fair game. This isn't how intellectual property works for individuals, so why exactly would a publicly traded company be granted an exception to this rule?

I work a lot in the world of FOSS and have a firm understanding that just because code is there doesn't make it yours. This is why we have the GPL for licensing. In fact, I'll take it a step further and say that the entirety of AI is one giant licensing nightmare, especially coding AI that isn't actually attributing license details with the code they're sampling from. (Sampling code being notably different than, say, learning from. Learning implies self-agency, and not corporate ownership.)
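As a concrete illustration of what "attributing license details" means, here is the sort of per-file header the GPL world relies on (a minimal sketch; the license choice, author, and email below are hypothetical):

```python
# SPDX-License-Identifier: GPL-3.0-or-later
# Copyright (C) 2025 Example Author <author@example.org>
#
# Under the GPL, anyone redistributing this file, verbatim or modified,
# must preserve this notice and license their version under the same terms.
# A coding AI that regurgitates the function below without the header
# drops exactly this information.

def normalize(values: list[float]) -> list[float]:
    """Scale a list of numbers so they sum to 1.0."""
    total = sum(values)
    return [v / total for v in values]
```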

It feels to me that the AI bubble has largely been about pushing AI so hard and fast that people were investing in something with a dubious legal status in the US. Nobody stopped to ask whether the data that Facebook had on their website (for example; they aren't alone in this) was actually theirs to own, and what the repercussions of these types of decisions are.

You'll also note that Tech and Social Media companies are quick to take ownership of data when it benefits them (artists works, intellectual property that isn't theirs, random user posts about topics) and quick to deny ownership when it becomes legally burdensome (CSAM, illicit drug deals, etc.) to a degree that no individual would be granted. Hell, I'm not even sure a "small" tech startup would be granted this level of double-speak and hypocrisy.

With this in mind, I am simply asking that AI companies pay for the data they're using to train AI. Additionally, laws must be in place that allow for the auditing of all materials used to train an AI, with the legal intent of verifying that all parties are paid accordingly. This is how every other business works. If this were somehow granted an exception, wouldn't it be braindead easy to run every "service" through an AI layer in order to bypass any and all copyright laws?

Otherwise, if Facebook and others want to claim that data hosted on their website is theirs to own and train off of -- well, great, but then there should be no exceptions to this, and they should not be allowed to host materials they have no ownership over. So pictures of IP they don't own, or materials they want to claim they have no ownership over, must be removed from the platform. I would much prefer the first of these two options, however.

edit: I should note that AI for educational purposes could be granted an exception under fair use (for universities), but it would still be required to cite all sources used to produce the works in question (which is normal in academia in the first place) and would also come with strict stipulations on using this AI as a "product" (it would basically be moot, much like some research papers). This is basically the furthest I'm willing to go for these companies.

[-] awesomesauce309@midwest.social 4 points 1 month ago

I'm not anti-AI, but I wish the people who are would describe what they are upset about a bit more eloquently and decipherably. The environmental impact I completely agree with. Making every Google search run a half-cooked beta LLM isn't the best use of the world's resources. But every time someone gets on their soapbox in the comments, it's like they don't even know the first thing about the math behind it. Like, just figure out what you're mad about before you start an argument. It comes across as childish to me.

[-] rekabis@lemmy.ca 4 points 4 weeks ago* (last edited 4 weeks ago)

AIs that are forced to serve up a response (almost all publicly available AI) resort to hallucinating gratuitously in order to conform to their mandate. As in, they do everything they can to provide some sort of response/answer, even if it's wildly wrong.

Other AIs that do not have this constraint (medical imaging diagnosis, for example) do not hallucinate in the least, and provide near-100% accurate responses, because they are not being forced to provide a response regardless of the viability of the answer.

I don’t avoid AI because it is bad.

I avoid AI because it is so shackled that it has no choice but to hallucinate gratuitously, and make far more work for me than if I just did everything myself the long and hard way.

[-] Tessellecta@feddit.nl 4 points 4 weeks ago

I don't think the forcing of an answer is the source of the problem you're describing. The source actually lies in the problems the AI is taught to solve and the data it is given to solve them.

In the case of medical image analysis, the problems are always very narrowly defined (e.g. segmenting the liver from an MRI image from scanner xyz made with protocol abc) and the training data is of very high quality. If the model will be used in the clinic, you also need to prove how well it works.

For modern AI chatbots, the problem is: add one word to the end of a sequence that starts with a system prompt; the data provided is whatever they could get off the internet; and the quality control is: if it sounds good, it is good.
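For illustration, a minimal sketch of that next-word loop (assuming the Hugging Face transformers library and GPT-2 as a stand-in; no specific model or stack is named above):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# The "sequence that starts with a system prompt"
text = "You are a helpful assistant. User: Why is the sky blue? Assistant:"
input_ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                       # append one token at a time
        logits = model(input_ids).logits      # score every token in the vocabulary
        next_id = logits[0, -1].argmax()      # greedily take the most likely one
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Every token comes out of the same "most plausible continuation" machinery whether or not a true statement is available, which is the hallucination problem in miniature.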

Comparing the two problems, it is easy to see why AI chatbots are prone to hallucination.

The actual power of the LLMs on the market is not as a glorified Google, but as foundation models that serve as pretraining for the actual problems people want to solve.

[-] SinningStromgald@lemmy.world 3 points 1 month ago* (last edited 1 month ago)

Ideally the whole house of cards crumbles and AI goes the way of 3D TVs, for now. The world as it is now is not ready for AGI. We would quickly end up in an "I Have No Mouth, and I Must Scream" scenario.

Otherwise, what everyone else has posted are good starting points. I would just add that any data centers used for AI have to be powered 100% by renewable energy.

[-] Tahl_eN@lemmy.world 3 points 1 month ago

I'm not super bothered by the copyright issue - the copyright system is barely serving people these days anyway. Blow it up.

I'm deeply troubled by the obscene power use. It might be worth it if it was a good tool. But it's not.

I haven't gone out of my way to use AI anything, but it's been stuffed into everything. And it's truly bad at its job. AI is like a precocious 8-year-old, butting into every conversation. And it gives the right answer at about the rate an 8-year-old does. When I do a web search, I then need to do another one to check the AI's answer. Or scroll down a page to get past the AI answers to real sources. When someone uses it to summarize a meeting, I then need to read through that summary to make sure the notes are accurate. And it doesn't know to ask when it doesn't understand something like a proper secretary would. When I go looking for reference images, I have to check to make sure they're real and not hallucinations.

It gets in my way and slows me down. It needed at least another decade of development before being deployed at all, never mind at the scale it has, and it needs to be opt-in, not crammed into everything. And until it can be relied on, it shouldn't be allowed to suck down as much electricity as it does.

[-] GregorGizeh@lemmy.zip 3 points 1 month ago

Wishful thinking? Models trained on illegal data get confiscated, the companies dissolved, the CEOs and board members made liable for the damages.

Then a reframing of these BS devices from "AI" to what they actually do: brew up statistical probability amalgamations of their training data, and then use them accordingly. They aren't worthless or useless; they are just being shoved into functions they cannot perform in the name of cost cutting.

[-] CCAirWater@lemm.ee 3 points 4 weeks ago* (last edited 4 weeks ago)

Our current 'AI' is not AI. It is not.

It is a corporate tool to shirk labor costs and lie to the public.

It is an algorithm designed to lie and the shills who made it are soulless liars, too.

It only exists for corporations and people to cut corners and think they did it right because of the lies.

And again, it is NOT artificial intelligence by the standard I hold to myself.

And it pisses me off to no fucking end.

I personally would love an AI personal assistant that wasn't tied to a corporation listening to every fkin thing I say or do. I would absolutely love it.

I'm a huge Sci-Fi fan, so sure I fear it to a degree. But, if I'm being honest, AI would be amazing if it could analyze how I learned math wrong as a kid and provide ways to fix it. It would be amazing if it could help me routinely create schedules for exercise and food, and grocery lists with steps to cook, and show how all of those combine to affect my body. It would be fantastic if it could point me to novels and have a critical debate about their inner workings, with a setting for being a contrarian or not, so I can seek to deeply understand the novels.

It sounds like what our current state of AI offers, right? No. The current state is a lying machine. It cannot have critical thought. Sure, it can give me a schedule of food/exercise, but it might tell me I need to lift 400 lbs and eat a thousand turkeys to meet a goal of being 0.02 grams heavy. It might tell me 5+7 equals 547,032.

It doesn't know what the fuck it's talking about!

Like, ultimately, I want a machine friend who pushes me to better myself and helps me understand my own shortcomings.

I don't want a lying brick bullshit machine that gives me all the answers, but they are all wrong, because it's just a guesswork framework full of "what's the next best word?"

Edit: and don't even get me fucking started on the shady practices of stealing art. Those bastards trained it on people's hard work and are selling it as their own. And it can't even do it right, yet people are still buying it and using it at every turn. I don't want to see another shitty doodle with 8 fingers and overly contrasted bullshit in an ad or in a video game. I don't want to ever hear that fucking computer voice on YouTube again. I stopped using shortform videos because of how fucking annoying that voice is. It's low effort nonsense and infuriates the hell out of me.

[-] Sunflier@lemmy.world 3 points 4 weeks ago* (last edited 4 weeks ago)

Disable all AI being on by default. Offer me a way to opt into having AI, but don't shove it down my throat by default. I don't want Google AI listening in on my calls without having the option to disable it. I am an attorney, and many of my calls are privileged. Having a third party listen in could cause that privilege to be lost.

I want AI that is stupid. I live in a capitalist plutocracy that is replacing workers with AI as fast and hard as possible without having UBI. I live in the United States, which doesn't even have universal health insurance, so UBI is fucked. This sets up an environment where a lot of people will be unemployable through no fault of their own because of AI. Thus, without UBI, we're back to starvation and Hoovervilles. But, fuck us. They got theirs.

[-] yarr@feddit.nl 3 points 4 weeks ago* (last edited 4 weeks ago)

My favorite one that I've heard is: "ban it". This has a lot of problems... let's say that, despite the billions of dollars of lobbyists already telling Congress every day what a great thing AI is, you manage to make AI, or however you define the latest scary tech, punishable by death in the USA.

Then what happens? There are already AI companies in other countries busily working away. Even the folks that are very against AI would at least recognize some limited use cases. Over time, the USA gets left behind in whatever the end results of AI's appearance in the economy turn out to be.

If you want to see a parallel to this, check out Japan's reaction when the rest of the world came knocking on their doorstep in the 1600s. All that scary technology, banned. What did it get them? Stalled out development for quite a while, and the rest of the world didn't sit still either. A temporary reprieve.

The more aggressive of you will say, this is no problem, let's push for a worldwide ban. Good luck with that. For almost any issue on Earth, I'm not sure we have total alignment. The companies displaced from the USA would end up in some other country and be even more determined not to get shut down.

AI is here. It's like electricity. You can choose not to wire your house, but that just leads to you living in a cabin in the woods while your neighbors have running water, heat, air conditioning, and so on.

The question shouldn't be, how do we get rid of it? How do we live without it? It should be, how can we co-exist with it? What's the right balance? The genie isn't going back in the bottle, no matter how hard you wish.
