I have used several different generators. What they all seem to have in common is that they don't always display what I am asking for. For example, if I ask for a person in jeans and a t-shirt, I will get images of a person wearing totally different clothing, and it isn't consistent. Another example: if I want a full-body picture, that instruction seems to be ignored, giving just waist up or just below the waist. The same goes if I ask for side views or back views. Sometimes they work, sometimes they don't, and more often they don't.

I have also seen that none of the negative requests seem to actually work. If I ask for pictures of people and say I don't want them using cell phones, or no tattoos, then like magic they have cell phones, and some have tattoos. I have noticed this in every single generator I have used. Am I asking for things the wrong way, or is the AI doing whatever it wants and not paying attention to my actual request?
Thanks
Can you give an example of a complete prompt? Are you using Dall-E, Midjourney, Stable Diffusion…?
It seems that all models need to have prompts crafted specifically for them, and you need to follow up with corrections. The follow-up is critical for pretty much anything these models output.
Image-to-image also helps a lot with SD. Even a few roughly drawn blobs can be the difference between an image that almost matches what you had in mind and one that looks exactly how you intended.
I just can't get img2img in SD to work for me; the images never come out the way I want (A1111 front end).