
It seems crazy to me, but I've seen this concept floated on several different posts. There seem to be a number of users here who think there is some way AI-generated CSAM will reduce real-life child victims.

Like the comments on this post:

https://sh.itjust.works/post/6220815

I find this argument crazy. I don't even know where to begin describing how many ways this could go wrong.

My views (which are apparently not based in fact) are that AI CSAM is not really that different from "actual" CSAM. It still causes harm when viewed, and it is still based on the further victimization of the children involved.

Further, the (ridiculous) idea that making it legal will somehow reduce the number of predators by giving them an outlet that doesn't involve real, living victims completely ignores the reality of how AI content is created.

Some have compared pedophilia and child sexual assault to drug addiction, which is dubious at best and pretty offensive, imo.

Using drugs has no inherent victim, and it is not predatory.

I could go on, but I'm not an expert or a social worker of any kind.

Can anyone link me articles talking about this?

[-] skullgiver@popplesburger.hilciferous.nl 1 points 1 year ago* (last edited 11 months ago)

[This comment has been deleted by an automated system]

[-] Killing_Spark@feddit.de 0 points 1 year ago

You make a very similar argument to @Surdon's, and my answer is the same (in short here; my answer to the other comment is longer):

Yes, giving everyone access would be a bad idea. I parallel it to controlled-substance access, which reduces black-market drug sales.

You do have some interesting details, though:

> Training a model on real CSAM is bad, because it adds the likeness of the original victims to the image model. However, you don’t need CSAM in your training set to generate it.

This has been mentioned a few times, mostly with the idea of mixing "normal" photos of children with adult porn to generate CSAM. Is that what you are suggesting too? And do you know if this actually works? I am not familiar with the extent to which generative AI is able to combine these sorts of concepts.

> As far as I can tell, we have no good research in favour of or against allowing automated CSAM. I expect it’ll come out in a couple of years. I also expect the research will show that the net result is a reduction in harm. I then expect politicians to ignore that conclusion and try to ban it regardless because of moral outrage.

This is more or less my expectation too, but I wouldn't count on the research coming out in a few years. There isn't much incentive to do actual research on the topic, afaik: there is little to be gained, given the probable reaction of regulators, and much to lose with such a hot topic.

[-] skullgiver@popplesburger.hilciferous.nl 1 points 1 year ago* (last edited 11 months ago)

[This comment has been deleted by an automated system]

[-] Killing_Spark@feddit.de 0 points 1 year ago

> It’s not even an idea, it’s how you get CSAM out of existing models

I didn't know this was a thing, tbh. I knew you could get them to generate adult porn or combine faces with adult porn; I didn't know they could already create realistic CSAM. I assumed people used the original material to train one of the open models. Well, that's even more horrifying.

> It’s possible the concept is never addressed, but I don’t think there’s any way to stop the spread of CSAM once you no longer need to exchange files through shady hosting services.

I didn't even think about that. Exchanging these models will be significantly less risky than exchanging the actual material. Images are being scanned by cloud storage providers, and archives with weak passwords apparently are too, but no one is going to execute an AI model just to see whether or not it can produce CSAM.

[-] skullgiver@popplesburger.hilciferous.nl 1 points 1 year ago* (last edited 11 months ago)

[This comment has been deleted by an automated system]
