Because what you're referring to, training your brain, is very different from training an AI, which is basically photobashing stuff to the point of including watermarks from images they stole while scraping content they don't have a license to use.
I'm not sure about the photobashing thing.
If I remember correctly, a generated image starts from noise and the AI refines that noise to form shapes. When watermarks show up, it's not because it's bashing the original images together, but because it learnt to put a watermark on an image.
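
To put the same idea in toy form: this is just a sketch of the shape of that loop in plain numpy, with a fake stand-in where a real trained network would be, not how any actual diffusion model is implemented. The point is only that generation starts from random noise and refines it step by step.

```python
import numpy as np

def toy_denoiser(image, step, total_steps):
    # Stand-in for the trained network: nudge pixels toward a fixed gradient
    # pattern so the loop has something to converge on. A real model would
    # instead predict which noise to remove based on what it learnt in training.
    target = np.linspace(0.0, 1.0, image.size).reshape(image.shape)
    strength = 1.0 / (total_steps - step)
    return image + strength * (target - image)

def generate(shape=(8, 8), steps=50, seed=0):
    rng = np.random.default_rng(seed)
    image = rng.normal(size=shape)  # start from pure Gaussian noise
    for step in range(steps):
        image = toy_denoiser(image, step, steps)  # refine step by step
    return image

print(generate().round(2))
```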
I wish it were like that, and while it's "true" that they "should" start purely from noise and form images from what they have learnt, the reality is that they end up reproducing big chunks of pieces that are regularly found online.
A properly trained AI should not be able to do that. The data an AI stores is not images or fragments of images; it's a set of weights for various attributes for each term. The concept of "cat", for example, would be stored as the set of most common values for the attributes it analyzed and found across all training images labeled "cat". With thousands of cat images as input, with proper variation between them, the result will always be unique.
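
A toy illustration of that point (made-up "attribute" vectors, nothing like a real training pipeline): what ends up stored is a small summary computed across all the examples, not any of the examples themselves.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "attribute" vectors for 1,000 training images labeled "cat"
# (ear shape, fur texture, eye size, ... here just random numbers).
cat_examples = rng.normal(loc=0.5, scale=0.2, size=(1000, 8))

# What gets kept is a small learned summary across all examples ...
learned_weights = cat_examples.mean(axis=0)

# ... which is far smaller than the training data and is not any one input.
print("values in training data:", cat_examples.size)    # 8000
print("values actually stored:", learned_weights.size)  # 8
print("matches any single training example:",
      any(np.allclose(learned_weights, ex) for ex in cat_examples))
```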
It's the same as a child learning to draw. If they see a drawing of a cat, they might try to copy it as best they can. But if they see many different representations of a cat, then they will also learn to express themselves creatively and make up their own variation. And nobody is going to sue the kid for having looked at copyrighted pictures of cats.