10
submitted 1 year ago by ksynwa@lemmygrad.ml to c/technology@lemmy.ml

Pls explain

top 10 comments
[-] jet@hackertalks.com 32 points 1 year ago

From a pixel generation perspective, what is most likely to be next to a finger? Another Finger! So.... mississississississississississippi in a mathematical model.

[-] Vlyn@lemmy.zip 14 points 1 year ago

Why are humans so bad with drawing hands?

They are tough, and the AI isn't building a logical model of a human when drawing one. It's more like taking a best guess at where pixels should go. So it's not "thinking": alright, I'm drawing a human, a human has two hands, each hand has five fingers, the fingers are posed like this, and so on.

It's drawing a human, so it roughly throws a human shape on there. That shape roughly has a head; where there's a torso, two arms should come out (roughly); and at the end of each arm there's something too, but what that something is, is complicated and always looks different. It's all approximation, extremely well done, but in the end the AI is just guessing where to put things.
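The "best guess where pixels should go" idea can be sketched as a toy next-character model (purely illustrative, not a real image generator): train it on hands written as strings, and every local step looks plausible while the global finger count drifts.

```python
import random

# Toy illustration: a bigram "next-pixel" model trained on hands drawn as
# strings, where '|' is a finger and '_' a gap. The model only learns
# "what usually follows what", never "a hand has five fingers".
training_hands = ["(|_|_|_|_|)"] * 100  # every training hand has 5 fingers

# Count how often each character follows each other character.
counts = {}
for hand in training_hands:
    for a, b in zip(hand, hand[1:]):
        counts.setdefault(a, {}).setdefault(b, 0)
        counts[a][b] += 1

def sample_hand(rng):
    """Generate a hand one character at a time from purely local statistics."""
    out = "("
    while out[-1] != ")" and len(out) < 30:
        nxt = counts.get(out[-1], {")": 1})
        chars, weights = zip(*nxt.items())
        out += rng.choices(chars, weights=weights)[0]
    return out

rng = random.Random(0)
for _ in range(5):
    h = sample_hand(rng)
    print(h, "-", h.count("|"), "fingers")
```

Each transition is exactly as likely as it was in the training data, yet most sampled hands don't have five fingers: the six-finger problem in miniature.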

If you trained a model on just a single type of hand and finger position, it would replicate it perfectly. But every hand is different, and each hand (including each finger) can be in a nearly unlimited number of positions. So it's usually a mess.

I saw one way to get better results, but that's pretty much giving the AI a pose beforehand (like a stick figure) so it already knows where things should go. If you just freely generate "Human male, holding hands up", you'll probably get a mess with six fingers and maybe a third arm going nowhere in the back.
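Tools like ControlNet do roughly this: a pose image pins down the structure, and the model only fills in appearance. A toy caricature of why that helps (hypothetical numbers and names, not a real diffusion model):

```python
import random

# Toy sketch: an unconditioned generator has to guess the structure
# (how many fingers), while a pose-conditioned one is handed a
# stick-figure skeleton and only chooses each finger's appearance.
def generate_unconditioned(rng):
    # Free generation: plausible-looking finger counts, but unreliable.
    n_fingers = rng.choice([4, 5, 5, 5, 6])
    return "|" * n_fingers

def generate_pose_conditioned(rng, skeleton):
    # skeleton: 'x' marks where a finger must go; structure is fixed,
    # only the style of each finger is sampled.
    styles = "|!l"
    return "".join(rng.choice(styles) for slot in skeleton if slot == "x")

rng = random.Random(1)
skeleton = "xxxxx"  # a five-finger pose, given up front
free = [len(generate_unconditioned(rng)) for _ in range(20)]
posed = [len(generate_pose_conditioned(rng, skeleton)) for _ in range(20)]
print("free finger counts:", sorted(set(free)))
print("posed finger counts:", sorted(set(posed)))
```

The conditioned generator can still vary the look, but it can no longer get the count wrong, because the count was never its decision.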

[-] ksynwa@lemmygrad.ml 4 points 1 year ago

Why are humans so bad with drawing hands?

The rest of your answer makes sense but this rhetorical question is not helpful IMO. There are lots of things that humans are not good at but at which computers excel.

[-] Vlyn@lemmy.zip 5 points 1 year ago

That's mostly true, but not fully. Models learn from human-drawn images and photos. So if you put in millions of drawn images and the hands aren't perfect in all of them, you can mess up the model too. That's why negative prompts like "malformed", "bad quality", "misformed hands" and so on are popular when playing with image generation.
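Mechanically, negative prompts work through classifier-free guidance: at each denoising step the sampler makes one prediction conditioned on the prompt and one conditioned on the negative prompt, then steps toward the former and away from the latter. A sketch with made-up stand-in numbers (the real predictions are large tensors from the model):

```python
import numpy as np

# Classifier-free guidance with a negative prompt. The two arrays are
# hypothetical stand-ins for the model's predicted noise.
guidance_scale = 7.5
noise_pred_positive = np.array([0.2, -0.1])  # prediction given the prompt
noise_pred_negative = np.array([0.5, 0.3])   # prediction given "malformed hands"

# Start from the negative branch and step toward the positive branch,
# scaled up, so the sample is actively pushed away from "malformed hands".
guided = noise_pred_negative + guidance_scale * (noise_pred_positive - noise_pred_negative)
print(guided)
```

So a negative prompt doesn't filter the training data; it steers each sampling step away from whatever the negative text describes.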

[-] simple@lemm.ee 10 points 1 year ago

Hands are really complicated, even to draw. Everything else is relatively easy to guess for an AI, usually faces are looking at the camera or looking sideways, but hands have like a thousand different positions and poses. It's hard for the AI to guess what the hands should look like and where the fingers should be. It doesn't help that people are historically bad at drawing hands so there's a lot of garbage in the data.

[-] ksynwa@lemmygrad.ml 1 points 1 year ago

That's true but I would have thought that the models would be able to "understand" hands because I'm assuming they have seen millions of photographs with hands in them by now.

[-] queermunist@lemmy.ml 1 points 1 year ago* (last edited 1 year ago)

I think it's helpful to remember that the model doesn't have a skeleton; it's literally skin deep. It doesn't understand hands, it understands pixels. Without an understanding of the underlying structure, all the AI can do is guess where the pixels go based on neighboring pixels.

[-] SheeEttin@lemmy.world 1 points 1 year ago

Sure, and if they were illustrative of hands, you'd get good hands for output. But they're random photos from random angles, possibly only showing a few fingers. Or maybe with hands clasped. Or worse, two people holding hands. If you throw all of those into the mix and call them all hands, a mix is what you're going to get out.

Look at this picture: https://petapixel.com/assets/uploads/2023/03/SD1131497946_two_hands_clasped_together-copy.jpg

You can sort of see where it's coming from. Some parts look like a handshake, some parts look like two people standing side by side holding hands (both with and without fingers interlaced), some parts look like one person's hands on their knee. It all depends on how you're constructing the image, and what your input data and labeling is.

Stable Diffusion works by iteratively denoising a noisy (latent) image until it looks reasonable, rather than reasoning about the macro-scale structure of the whole scene. Other methods, like whatever DALL·E 2 uses, seem to work better.
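That iterative refinement can be caricatured in one dimension (a hypothetical stand-in "denoiser", not the real Stable Diffusion update rule):

```python
import numpy as np

# Toy 1-D sketch of the diffusion idea: start from pure noise and
# repeatedly nudge the sample toward what a denoiser says a clean
# sample looks like. Here the "denoiser" just pulls toward a fixed
# target instead of a learned prediction.
rng = np.random.default_rng(0)
target = np.array([1.0, -2.0, 0.5])  # the "clean image" the model knows
x = rng.normal(size=3)               # start from random noise

for step in range(50):
    predicted_clean = target             # a real model would predict this from x
    x = x + 0.2 * (predicted_clean - x)  # small step toward the prediction

print(np.round(x, 3))
```

Every step is a local nudge; there is no point where the process steps back and checks "does this whole hand make sense?".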

[-] silvercove@lemdro.id 3 points 1 year ago

Probably because it's a complicated 3D shape. The 2D projection of the hand in a photo can change a lot depending on the camera angle, the position of the hand, and what the person is doing.

Also, I've noticed that AI has difficulty when different features are close to one another, for example when someone crosses their legs or holds an object. The AI may be competent at drawing the objects in isolation, but their combination is much more difficult. This is often the case with hands.

[-] trachemys@lemmy.world 1 points 1 year ago

Why can’t Deadpool comic artist Rob Liefeld draw feet?

this post was submitted on 02 Sep 2023
10 points (85.7% liked)
