this post was submitted on 04 Oct 2023
52 points (100.0% liked)
art
This issue is interesting, because it was noted that this particular Captain Marvel pose shows up many times in at least one key AI dataset. Since the images aren't technically duplicates (different posters or promo images), deduplication doesn't catch them, but because the central figure is identical across so many of them, overfitting/memorization is pretty likely.
We don't know anything about DALLE-3 architecture-wise (it has an LLM text encoder and it's almost certainly a latent diffusion model), but presumably it's a pretty big model, which can also increase the likelihood of overfitting.
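To illustrate the deduplication gap described above: exact byte-level dedup misses re-encoded or re-composited posters, and one common mitigation is perceptual hashing. Below is a minimal average-hash sketch; it is a toy stand-in for the perceptual hashes real pipelines use, and the "posters" are synthetic random arrays rather than actual images. A slightly brightened re-encode hashes almost identically, while an unrelated image does not:

```python
import random

def average_hash(img, hash_size=8):
    # img: 2D list of floats in [0, 1]. Block-average down to a
    # hash_size x hash_size grid, then threshold each cell against the
    # mean of all cells to get a binary fingerprint.
    h, w = len(img), len(img[0])
    bh, bw = h // hash_size, w // hash_size
    cells = []
    for i in range(hash_size):
        for j in range(hash_size):
            block = [img[i * bh + y][j * bw + x]
                     for y in range(bh) for x in range(bw)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return [c > mean for c in cells]

def hamming(a, b):
    # Number of fingerprint bits that differ
    return sum(x != y for x, y in zip(a, b))

rng = random.Random(0)
poster = [[rng.random() for _ in range(64)] for _ in range(64)]
# Same central figure, slightly brighter re-encode: not a byte-level duplicate
variant = [[min(1.0, p * 1.05 + 0.02) for p in row] for row in poster]
unrelated = [[rng.random() for _ in range(64)] for _ in range(64)]

print(hamming(average_hash(poster), average_hash(variant)))    # small
print(hamming(average_hash(poster), average_hash(unrelated)))  # ~half the bits
```

A near-duplicate detector thresholds this Hamming distance, which is exactly what catches "different poster, identical central figure" cases that exact-match dedup misses.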
Interesting. Just a clarification: overfitting and memorization are not quite the same thing, to my understanding. Overfitting is when a model memorizes rather than generalizing, but very large models can and will do both. If you ask an image generator for "a reproduction of starry night by van gogh hanging on the wall", or an LLM to complete "to be or not to be, that is _", you are referring to something very specific that you'd like reproduced exactly. If the model outputs what you wanted, you would call that memorization but not overfitting. Still, you may want to suppress memorization, and you certainly don't want overfitting.

Side note: massively overparameterized models are better at both memorization and generalization, and are naturally resistant to overfitting as I define it. That last part would have surprised early ML researchers, since they had noticed the opposite trend, but the trend reverses when you go large enough. Also, they will sometimes memorize on a single pass through the data, even if there's no duplication, which is quite remarkable.
That's a fair interpretation, although I still consider it a failure state. These models shouldn't be used as storage/retrieval tools.
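The memorization-vs-overfitting distinction above can be made concrete with a toy model (all data here is synthetic and illustrative). A 1-nearest-neighbour classifier memorizes its training set by construction, so it reaches perfect training accuracy whether or not it overfits; whether that memorization becomes overfitting only shows up in generalization, simulated here with label noise:

```python
import random

def knn1(train, x):
    # 1-nearest-neighbour: predicts by looking up the closest stored
    # training point, i.e. it memorizes the training set by construction
    return min(train, key=lambda p: abs(p[0] - x))[1]

rng = random.Random(1)

def true_label(x):
    # Ground-truth rule the model should ideally generalize to
    return x > 0.5

train_x = [rng.random() for _ in range(200)]
test = [(x, true_label(x)) for x in (rng.random() for _ in range(500))]

clean = [(x, true_label(x)) for x in train_x]
# Flip 30% of training labels: the model memorizes this noise too
noisy = [(x, true_label(x) ^ (rng.random() < 0.3)) for x in train_x]

def acc(train):
    return sum(knn1(train, x) == y for x, y in test) / len(test)

# Both classifiers reproduce every training label perfectly (memorization);
# only the noisy one generalizes badly (overfitting).
print(acc(clean))   # close to 1.0
print(acc(noisy))   # roughly 0.7
```

In both cases the model "stores and retrieves" its training data perfectly; only in the noisy case does that memorization hurt generalization, which is the failure state being discussed.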