Lemmy users seem to overwhelmingly despise AI slop, which makes me wonder why more of my communities don't have rules against it. I would love to be able to report it and have it removed.
AI slop is for Grandma to not question on Facebook.
A lot of mods probably think it goes without saying. A lot of mods have the mentality of not adding rules until something becomes a problem. And a noticeable minority of mods just don't care about AI.
A ton of communities on Lemmy are also moderated by users who are rarely, if ever, online.
We should depose them. Except I don't wanna be a lemmy mod.
You are able to report it! I will look into it, and if I am 100% sure it is AI slop I will remove it.
There's a lot of pro-AI trolls, especially on dbzero
Trolls aren't people whose views you disagree with.
They can be.
Is there an accepted definition or is it just all AI generated content?
I saw someone here get hassled about AI slop for posting over-sharpened screenshots of Buffy the Vampire Slayer.
I mean, it was sharpened with AI, so it was at least a factor.
But it wasn't AI generated, so it wasn't slop.
Seems fair TBH
Edit: downvotes for agreeing with something that's not downvoted? Dafuq?
I just downvote anyone bitching about downvotes regardless of how i feel about the content of their comment. lol
You think it's fair to get hassled for AI slop because you posted a real screenshot of a TV show?
For slopping all over it, yes.
You aren't agreeing with their comment.
They say it is dumb to get downvoted for posting a picture that wasn't generated by AI.
So it is literally the opposite of your comment.
I would think it would only apply to AI-generated images, but I suppose it would depend on the community. In this comm in particular, where all the posts are images, it shouldn't be too tricky to define. As the technology advances it might eventually be impossible to spot them, though...
LLM generated content in general; images, comments, etc.
I'm not sure language models are capable of generating images
No. LLMs are still what generates images.
Large Language Models generate human-like text. They operate on words broken up into tokens and predict the next one in a sequence. Image diffusion models take a static image of noise and iteratively denoise it into a stable image.
The confusion comes from services like OpenAI that take your prompt, dress it up all fancy, and then feed it to a diffusion model.
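To make the distinction concrete, here is a minimal toy sketch of the two generation loops described above. All names here are illustrative stand-ins, not any real model API: the point is only that one loop appends tokens one at a time while the other refines a whole image over repeated steps.

```python
def next_token_loop(prompt_tokens, predict, steps=5):
    """Autoregressive text generation (the LLM pattern): repeatedly
    predict the next token from the sequence so far and append it."""
    tokens = list(prompt_tokens)
    for _ in range(steps):
        tokens.append(predict(tokens))
    return tokens

def denoise_loop(noise, denoise_step, steps=5):
    """Diffusion-style generation: start from pure noise and
    iteratively refine the whole image at once."""
    image = noise
    for t in reversed(range(steps)):
        image = denoise_step(image, t)
    return image

# Toy stand-ins for the actual trained models.
toy_predict = lambda toks: f"tok{len(toks)}"
toy_denoise = lambda img, t: [x * 0.5 for x in img]  # halve the noise each step
```

Real models replace `toy_predict` with a trained transformer and `toy_denoise` with a trained noise-prediction network, but the shape of the two loops is the actual difference between the architectures.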
You can't use LLMs to generate images.
That is a completely different beast with its own training set.
Just because both are made by machine learning, doesn't mean they are the same.
I think the term you're looking for is "generative AI"
Nope. LLMs are still what's used for image generation. They aren't AI though, so no.
Which part of the image is language?
Dude, it doesn't know what it's looking at. It isn't intelligent. It's just a prediction algorithm called LLMs. It doesn't matter if it's predicting text or pixels. It's all LLMs.
https://botpenguin.com/blogs/comparing-the-best-llms-for-image-generation
You can generate images without ever using any text, by uploading and combining images to create new things.
No LLM will be used in that context.
Holy confidently incorrect
LLMs aren't generating the images. When "using an LLM for image generation," what's actually happening is the LLM talking to an image generation model and then giving you the image.
Ironically there's a hint of truth in it though because for text-to-image generation the model does need to map words into a vector space to understand the prompt, which is also what LLMs do. (And I don't know enough to say whether the image generation offered through LLMs just has the LLM provide the vectors directly to the image gen model rather than providing a prompt text).
You could also consider the whole thing as one entity in which case it's just more generalized generative AI that contains both an LLM and an image gen model.
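The delegation pattern described above can be sketched in a few lines. This is a toy illustration under stated assumptions: `embed`, `image_model`, and `llm_with_image_tool` are hypothetical names, and the "encoder" and "image model" here are trivial placeholders for what would really be large trained networks.

```python
def embed(prompt):
    """Toy text encoder: map a prompt to a fixed-size conditioning
    vector. Real systems use a learned encoder, not character codes."""
    vec = [0.0] * 4
    for i, ch in enumerate(prompt):
        vec[i % 4] += ord(ch) / 1000.0
    return vec

def image_model(cond_vector):
    """Toy image generator conditioned on the embedding; returns a
    placeholder 1x4 'image' derived from the vector."""
    return [[round(v, 3) for v in cond_vector]]

def llm_with_image_tool(prompt):
    """The pattern from the comment above: the language model does not
    paint pixels itself; it hands a conditioning vector to a separate
    image-generation model and returns that model's output."""
    return image_model(embed(prompt))
```

Whether production systems pass the image model a text prompt or the embedding vectors directly is an implementation detail that varies by service, as the comment above notes.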
What do you think LLM stands for?
Large Language Image
Yes.
I think it would be better to have rules against very low-effort or unsourced/potentially misleading content. Not all possible uses of AI are necessarily "slop", and the witch hunting of even non-AI stuff can get pretty bad.
The main problem is defining it.
The second is that moderation for details like that gets complex.
If it bothers you enough, you can still report it even if there's not a set rule. If enough people do that, it will be clear a rule or more enforcement is needed. Doesn't change the points above; it's not as easy as it was only a few years ago.
It will become witch hunting with time
Already is, I think. The term "AI slop" is thrown around so much that sometimes it seems to mean "I don't like this" rather than anything AI-related.
The important difference being that there's a lot of witches in this witch hunt, unlike the historical context which spawned the phrase.
But it still means innocent artists will get hurt, and some of them might lose interest in making art.
Unless it's porn. (I don't support it.)
Edit: I support the porn, not the AI porn.
It's akin to a religion here. Of course no one wants to read an AI-generated article because it's low quality and unreliable, but if someone uses an AI-generated image to convey something, the bitching is more axe grinding than meaningful. The same people that plug some text into a meme generator criticize any AI image. Its utility is pretty limited imo, but no usage here will ever be judged by its utility, only by whether the cultural code is adhered to or not.
What are they trying to convey that can't be done by either words or a meme? What is at stake for not using an existing meme, or creating one from a screenshot of media? Why must it be generated, wasting immense power and encouraging more datacenters to be built and purposely used for AI, to the detriment of people living near one?
I dunno man. Are you trying to say it's wasteful and unnecessary? Far from the only thing in this world that is. Go ahead and throw the first stone if you live without any waste or frivolity. Nothing short of organized religion is this fricken judgmental.
Yes. Modern society being wasteful doesn't mean we have to add more waste. Go tell the court that you should be able to lie because everyone lies once in a while, and see how that stands.
So existing wastefulness is ok but not novel wastefulness?
You can make as many paper thin arguments as you'd like, but it's just pious righteousness and nothing more.
Are you really not aware that there are tons of campaigns trying to cut back existing waste, or are you really that dense? What's so "novel" about image generation?
Bitch, that is a whole 'nother sentence.
Me when my favorite thing is unpopular: "The lemmy hivemind is a religion, and I am its lone atheist!"
It isn't in most cases. But it is definitely the case with Linux, Communism, anti-car, and especially anti-AI, at least here on Lemmy.