Is It Just Me? (lemmy.world)
(page 2) 50 comments
[-] riskable@programming.dev 1 points 3 weeks ago* (last edited 3 weeks ago)

I can't take anyone seriously that says it's "trained on stolen images."

Stolen, you say? Well, I guess we're going to have to force those AI companies to put those images back! Otherwise, nobody will be able to see them!

...because that's what "stolen" means. And no, I'm not being pedantic. It's a really fucking important distinction.

The correct term is "copied," but that doesn't sound quite as severe. Also, if we want to get really specific, the images are presently on the Internet. Right now. Because that's what ImageNet (and similar datasets) are: databases of URLs pointing to images that people are offering up for free to anyone on the Internet who wants them.
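To make that concrete, here's a minimal sketch of how an ImageNet-style dataset gets consumed (the annotation file name and its two-column label,url format are made up for illustration; the actual images stay on whatever sites host them):

```python
# Hypothetical sketch: an "image dataset" that is really just labels plus URLs.
# Nothing lands on the trainer's disk until someone fetches the images.
import csv
import urllib.request

def iter_annotated_images(annotation_csv):
    """Yield (label, image_bytes) for each label,url row in the CSV."""
    with open(annotation_csv, newline="") as f:
        for label, url in csv.reader(f):
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    yield label, resp.read()
            except OSError:
                continue  # plenty of URLs in old crawls are dead; skip them

# Hypothetical usage:
# for label, data in iter_annotated_images("annotations.csv"):
#     print(label, len(data), "bytes")
```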

Did you ever upload an image anywhere publicly, for anyone to see? Chances are someone could've annotated it and included it in some AI training database. If it's on the Internet, it will be copied and used without your consent or knowledge. That's the lesson we learned back in the '90s, and if you think that's not OK, go try to get hired by the MPAA/RIAA so you can bring the world back to the time when you had to pay $10 for a ringtone and pay again if you got a new phone (because, to the big media companies, copying is stealing!).

Now that that's clear, let's talk about the ethics of training an AI on such data: there are none. It's an N/A situation! Why? Because until the AI models are actually used for any given purpose, they're just data on a computer somewhere.

What about legally? Judges in multiple countries have already ruled that training AI in this way is fair use. There's no copyright violation going on... because copyright only covers distribution of copyrighted works, not what you actually do with them internally (like training an AI model).

So let's talk about the real problems with AI generators so people can take you seriously:

  • Humans using AI models to generate fake nudes of people without their consent.
  • Humans using AI models to copy works that are still under copyright.
  • Humans using AI models to generate shit-quality stuff for the most minimal effort possible, saying it's good enough, then not hiring an artist to do the same thing.

The first one seems impossible to solve (to me). If someone generates a fake nude and never distributes it... Do we really care? It's like a tree falling in the forest with no one around. If they (or someone else) distribute it though, that's a form of abuse. The act of generating the image was a decision made by a human—not AI. The AI model is just doing what it was told to do.

The second is—again—something a human has to willingly do. If you try hard enough, you can make an AI image model get pretty close to a copyrighted image... But it's not something that is likely to occur by accident. Meaning, the human writing the prompt is the one actively seeking to violate someone's copyright. Then again, it's not really a copyright violation unless they distribute the image.

The third one seems likely to solve itself over time as more and more idiots are exposed for making the very poor decision to just "throw it at the AI" and then publish the result without checking or fixing it. Like Coca-Cola's idiotic mistake last Christmas.

[-] unconsequential@slrpnk.net 1 points 3 weeks ago

Have you heard of these things called humans? I think this is more a reflection of them. Books ate trees and corrupted the youth, TV rotted your brain and made you go blind, the internet made people lazy. Wait until I tell you about *gasp* auto-correct, or better yet leet speak! The horror. Clearly we are never recovering from either of those. In fact, I'm speaking to you now in emojis. And wait until you learn about *clutches pearls* Wikipedia... ah, the horror!

Is tech and its advancements perfect? No. Can people do better? Yes. Are criticisms important? Sure are. But panic and fighting a rising tech? You’re probably not going to win.

Spend time educating people on how to be more ethical with their tech use, and absolutely pressure companies to do the same. Taking a club to a computer didn't stop the rise of the word processor or the spread of Wikipedia madness. But we can control how we consume and relate to tech, and what our demands of its creators are.

PS— do you even know how to read and write cursive? > punchable smug face goes here. <

[-] a9cx34udP4ZZ0@lemmy.world 1 points 2 weeks ago

Every time someone talks up AI, I point out that you need to be a subject matter expert in the topic to trust it, because it frequently produces really, really convincing summaries that are complete and utter bullshit.

And people agree with me implicitly and tell me they've seen the same. But then they don't hesitate to turn to AI for "quick answers" on subjects they aren't experts in. These are not stupid people, either. I just don't understand.

[-] LogicalDrivel@sopuli.xyz 1 points 3 weeks ago

My boss had GPT make this informational poster thing for work. It's supposed to explain stuff to customers and is riddled with spelling errors and garbled text. I pointed it out to the boss and she said it was good enough for people to read. My eye twitches every time I see it.

[-] jabeez@lemmy.today 1 points 2 weeks ago

> good enough for people to read

wow, what a standard, super professional look for your customers!

[-] skisnow@lemmy.ca 1 points 2 weeks ago

Spelling errors? That’s… unusual. Part of what makes ChatGPT so specious is that its output is usually immaculate in terms of language correctness, which superficially conceals the fact that it’s completely bullshitting on the actual content.

[-] nialv7@lemmy.world 1 points 2 weeks ago

The Luddites were right. Maybe we can learn a thing or two from them...

[-] drunkpostdisaster@lemmy.world 1 points 2 weeks ago

I had to download the Facebook app to delete my account. Unfortunately, I think the Luddites are going to be sent to the camps in a few years.

[-] ArcaneSlime@lemmy.dbzer0.com 1 points 2 weeks ago

They can try, but Papa Kaczynski lives forever in our hearts.

[-] Adderbox76@lemmy.ca 1 points 2 weeks ago

The reason AI is wrong so often is that it's not programmed to give you the right answer. It's programmed to give you the most pervasive one.

LLMs are being fed by Reddit and other forums that are ostensibly about humans giving other humans answers to questions.

But have you been on those forums? It's a dozen different answers for every question. The reality is that we average humans don't know shit and we're just basing our answers on our own experiences. We aren't experts. We're not necessarily dumb, but unless we've studied, our knowledge is entirely anecdotal, and we all go into forums to help others with a similar problem by sharing our answer to it.

So the LLM takes all of that data and in essence thinks that the most popular, most mentioned, most upvoted answer to any given question must be the de facto correct one. It literally has no other way to judge; it's not smart enough to cross-reference itself or look up sources.
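A toy sketch of that "most popular answer wins" behaviour (the corpus, counts, and function below are invented purely for illustration and are nothing like a real model's scale or architecture):

```python
# Toy illustration (not how a real LLM is implemented): pick whichever word
# most often followed a given prefix in a made-up "training corpus".
from collections import Counter

corpus = [
    "to fix it just reinstall the driver",
    "to fix it just reinstall the driver",
    "to fix it just reinstall the app",
    "to fix it check the log file first",
]

def most_common_next_word(prefix):
    """Return the word that most often followed `prefix` in the corpus."""
    prefix_words = prefix.split()
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        if words[:len(prefix_words)] == prefix_words and len(words) > len(prefix_words):
            counts[words[len(prefix_words)]] += 1
    # The most frequent continuation wins, whether or not it's good advice.
    return counts.most_common(1)[0][0]

print(most_common_next_word("to fix it just reinstall the"))  # -> "driver"
```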

[-] homesweethomeMrL@lemmy.world 1 points 2 weeks ago

> It literally has no other way to judge

It literally does NOT judge. It cannot reason. It does not know what "words" are. It is an enormous rainbow table of sentence probability that does nothing useful except fool people and provide cover for capitalists to extract more profit.

But apparently, according to some on here, "that's the way it is, get used to it." FUCK no.

[-] drmoose@lemmy.world 1 points 2 weeks ago* (last edited 2 weeks ago)

We have a lot of suboptimal aspects to our society, like animal farming, war, religion, etc., and yet this is what breaks this person's brain? It's a bit weird.

I'm genuinely sympathetic to this feeling, but AI fears are so overblown and seem to be purely American internet hysteria. We'll absolutely manage this technology, especially now that it appears LLMs are fundamentally limited and will never achieve any form of AGI, and even agentic workflows are years away.

Some people are really overreacting and everyone's just enabling them.

[-] merdaverse@lemmy.world 1 points 2 weeks ago* (last edited 2 weeks ago)

I don't know if there's data out there (yet) to support this, but I'm pretty sure constantly using AI rather than doing things yourself degrades your skills in the long run. It's like a language or any other skill: if you're not constantly practicing it, you get worse at it. The marginal effort it saves you now will probably cost you more in the end.

It might just be like that social media fad from 10 years ago where everyone was doing it, and then research started popping up that it's actually really fucking terrible for your health.

this post was submitted on 21 Aug 2025
64 points (94.4% liked)

Microblog Memes


A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.

Rules:

  1. Please put at least one word relevant to the post in the post title.
  2. Be nice.
  3. No advertising, brand promotion or guerilla marketing.
  4. Posters are encouraged to link to the toot or tweet etc in the description of posts.

