submitted 3 months ago by db0@lemmy.dbzer0.com to c/news@lemmy.world

A Florida man is facing 20 counts of obscenity for allegedly creating and distributing AI-generated child pornography, highlighting the danger and growing ubiquity of generative AI being used for nefarious purposes.

Phillip Michael McCorkle was arrested last week while working at a movie theater in Vero Beach, Florida, according to TV station CBS 12 News. A crew from the station captured the arrest, which made for dramatic footage as law enforcement led McCorkle, still in his work uniform, out of the theater in handcuffs.

[-] ContrarianTrail@lemm.ee 48 points 3 months ago* (last edited 3 months ago)

How many corn dogs do you think were in the training data?

[-] Saledovil@sh.itjust.works 6 points 3 months ago

Wild corn dogs are an outright plague where I live. When I was younger, me and my buddies would lay snares to catch corn dogs. When we caught one, we'd roast it over a fire to make popcorn. Corn dog cutlets served with popcorn from the same corn dog is a popular meal, especially among the less fortunate, even though some of the affluent consider it the equivalent of eating rat meat. When me pa got me first rifle when I turned 14, I spent a few days just shooting corn dogs.

[-] emmy67@lemmy.world -1 points 3 months ago* (last edited 3 months ago)

It didn't generate what we expect and know a corn dog to be.

Hence it missed: it doesn't know what a "corn dog" is.

You have proven the point that it couldn't generate CSAM without some being present in the training data.

[-] ContrarianTrail@lemm.ee 1 points 3 months ago* (last edited 3 months ago)

I hope you didn't seriously think the prompt for that image was "corn dog", because if your understanding of generative AI is at that level, you should probably refrain from commenting on it.

Prompt: Photograph of a hybrid creature that is a cross between corn and a dog

[-] emmy67@lemmy.world -1 points 3 months ago

Then if your question is "how many photographs of a hybrid creature that is a cross between corn and a dog were in the training data?"

I'd honestly say: I don't know.

And if you're honest, you'll say the same.

[-] ContrarianTrail@lemm.ee 2 points 3 months ago

But you do know, because corn dogs as depicted in the picture do not exist, so there couldn't have been photos of them in the training data, yet it was still able to create one when asked.

This is because it doesn't need to have seen one before. It knows what corn looks like and it knows what a dog looks like, so when you ask it to combine the two, it will gladly do so.
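
To make that concrete, here is a minimal sketch of generating that exact image with an off-the-shelf text-to-image pipeline. The thread never names the actual tool, so the library, model, and settings below are assumptions, not the commenter's setup:

```python
# Hedged sketch: Stable Diffusion v1.5 via the diffusers library is an
# assumed stand-in for whatever tool actually produced the image.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The exact prompt quoted upthread: two learned concepts the model
# has never seen combined in a single training photo.
prompt = "Photograph of a hybrid creature that is a cross between corn and a dog"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("corn_dog_hybrid.png")
```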

[-] emmy67@lemmy.world 0 points 3 months ago* (last edited 3 months ago)

But you do know, because corn dogs as depicted in the picture do not exist, so there couldn't have been photos of them in the training data, yet it was still able to create one when asked.

Yeah, except Photoshop and artists exist. And a quick Google image search will find them. 🙄

[-] ContrarianTrail@lemm.ee 1 points 3 months ago

And this proves that AI can't generate simulated CSAM without first having seen actual CSAM how, exactly?

To me, the takeaway here is that you can take a shitty two-minute Photoshop doodle and, by feeding it through AI, improve its quality by orders of magnitude.
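
For illustration, a minimal sketch of that doodle-in, polished-image-out workflow, assuming a standard img2img pipeline; the model, filename, and strength value are placeholders, not anything the commenter actually used:

```python
# Hedged sketch of the doodle-to-image workflow described above.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

doodle = Image.open("rough_doodle.png").convert("RGB")  # the "two-minute doodle"
result = pipe(
    prompt="detailed, realistic photograph of the same subject",
    image=doodle,
    strength=0.75,  # how far the model may depart from the input sketch
).images[0]
result.save("refined.png")
```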

[-] emmy67@lemmy.world 0 points 3 months ago

I wasn't the one attempting to prove that, though I think it's definitive.

You were attempting to prove it could generate things not in its data set, and I have disproved your theory.

To me, the takeaway here is that you can take a shitty two-minute Photoshop doodle and, by feeding it through AI, improve its quality by orders of magnitude.

To me, the takeaway is that you know less about AI than you claim. Much less. Because we have actual instances, many of them, where CSAM is in the training data. Don't believe me?

Here's a link to it

[-] ContrarianTrail@lemm.ee 1 points 3 months ago* (last edited 3 months ago)

You were attempting to prove it could generate things not in its data set, and I have disproved your theory.

I don't understand how you could possibly imagine that pic somehow proves your claim. You've made no effort to explain yourself; you just keep dodging my questions when I ask. A shitty Photoshop of a "corn dog" has nothing to do with how the image I posted was created. It's a composite of corn and a dog.

Generative AI, just like a human, doesn't rely on having seen an exact example of every possible image or concept. During its training, it was exposed to huge amounts of data, learning patterns, styles, and the relationships between them. When asked to generate something new, it draws on this learned knowledge to create a new image that fits the request, even if that exact combination wasn't in its training data.

Because we have actual instances, many of them, where CSAM is in the training data.

If the AI has been trained on actual CSAM, and especially if the output simulates real people, then that's a whole other discussion to be had. That is not, however, what we're talking about here.

[-] emmy67@lemmy.world 0 points 3 months ago

Generative AI, just like a human, doesn't rely on having seen an exact example of every possible image or concept

If a human has never seen a dog before, they don't know what it is or what it looks like.

If the AI works the same way as a human, it won't be able to draw one either.

[-] ContrarianTrail@lemm.ee 1 points 3 months ago* (last edited 3 months ago)

And you continue to evade the questions challenging your argument.

How was the first ever painting of a dragon created? You couldn't possibly draw something you've never seen before, right?

[-] emmy67@lemmy.world 0 points 3 months ago

Once again, you're showing the limits of AI. A dragon exists in fiction; it exists in the mind of someone drawing it. In an AI there is no mind; the concept cannot independently exist.

[-] ContrarianTrail@lemm.ee 1 points 3 months ago

AI is not creating images in a vacuum. There is a person using it, and that person does have a mind. You could come up with a brand-new mythical creature right now; let's call it AI-saurus. If you asked the AI to create a picture of AI-saurus, it wouldn't be able to, because it has no idea what one looks like. What you could do instead is describe it to the AI, and it would output something that more or less resembles what you had in mind. Whatever flaws you see, you correct with a new, modified prompt, and you keep doing this until it produces something that matches the idea in your head.

AI is like a police sketch artist: the outcome depends on how well you manage to describe the subject. The artist doesn't need to know what the subject looks like; they have a basic understanding of human facial anatomy, and you fill in the blanks. This is what generative AI does as well.
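
A minimal sketch of that describe-and-correct loop; the `generate` callable and the prompt format are purely illustrative, not any real product's API:

```python
# Illustrative only: the human-in-the-loop refinement cycle described
# above. "generate" stands in for any text-to-image call, such as the
# pipelines sketched earlier in the thread.
def refine_until_satisfied(generate, initial_prompt):
    prompt = initial_prompt
    while True:
        image = generate(prompt)  # the model fills in the visual details
        feedback = input("Match what you had in mind? (y / describe the fix): ")
        if feedback.strip().lower() in ("y", "yes"):
            return image
        # The concept lives in the user's head; each round folds their
        # correction back into the prompt, like directing a sketch artist.
        prompt = f"{prompt}, {feedback.strip()}"
```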

The people creating pictures of underage kids with AI are not asking it to produce CSAM. It would most likely refuse to do so and may even report them. Instead, they describe what they want the output to look like, arriving at the same end result by a different route.

[-] emmy67@lemmy.world 0 points 3 months ago* (last edited 3 months ago)

You're right, it's not. It needs to know what things look like, and, once again, it's not going to without having seen them. Sorry, dude: either CSAM is in the training data and it can do this, or it's not. But I'm pretty tired of this. Later, fool.
