this post was submitted on 27 Jun 2023
43 points (100.0% liked)

Stable Diffusion

cross-posted from: https://lemmy.intai.tech/post/25821

u/Alphyn

The workflow is quite simple. Load a picture into img2img, keep the same size as the original image, and enable the tile ControlNet. Set a high denoising strength. Run it, and maybe feed the result back in and run it a couple more times. Then enable Ultimate SD Upscale, set the scale to 2x and run it again. Then accidentally run it again. Naturally, you put the result of each run back into img2img and update the picture size. The model is RPGArtistTools3.
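The poster is describing the AUTOMATIC1111 web UI; as a rough sketch only, the img2img + tile-ControlNet passes could be approximated with diffusers like this. The checkpoint (a generic SD 1.5 stand-in for RPGArtistTools3), the prompt, and the strength values are assumptions, and the Ultimate SD Upscale extension is crudely approximated by a plain 2x resize followed by another img2img pass.

```python
# Hedged diffusers sketch of the described workflow (not the poster's exact
# A1111 setup; checkpoint, prompt and parameters are assumptions).
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1e_sd15_tile", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # stand-in for RPGArtistTools3
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("input.png").convert("RGB")

# First pass(es): same size as the original, high denoising strength,
# tile ControlNet conditioned on the image itself; feed the result back in.
for _ in range(2):
    image = pipe(
        prompt="detailed fantasy illustration",  # hypothetical prompt
        image=image,
        control_image=image,
        strength=0.6,               # "high denoise ratio"
        num_inference_steps=30,
    ).images[0]

# Crude stand-in for the Ultimate SD Upscale 2x pass: resize, run again
# at the updated picture size with a lower strength.
w, h = image.size
image = image.resize((w * 2, h * 2), Image.LANCZOS)
image = pipe(
    prompt="detailed fantasy illustration",
    image=image,
    control_image=image,
    strength=0.35,
    num_inference_steps=30,
).images[0]
image.save("upscaled.png")
```

This keeps the spirit of the workflow (repeated img2img with the tile ControlNet, then a 2x pass), but the real Ultimate SD Upscale extension processes the image in tiles, which is gentler on VRAM than the single full-resolution pass shown here.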

[email protected] 1 points 1 year ago

I have two ways to do that:

  • I bought a few dolls specifically for that. I just take shots with my camera and use them in ControlNet (see the sketch after this list).
  • If you learn the basics of Blender, there are posable models available that will generate the skeleton and the depth maps for hands and feet automatically. The advantage is that these are exact rather than reconstructed from an image, which avoids misclassification errors.
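
For the doll-photo route, a minimal sketch might look like the following. It assumes the controlnet_aux OpenPose preprocessor and the lllyasviel/control_v11p_sd15_openpose ControlNet, neither of which is specified by the commenter; the prompt and checkpoint are placeholders.

```python
# Sketch: extract a pose skeleton from a photo of a posed doll and use it
# to condition generation. Model and prompt choices are assumptions.
import torch
from PIL import Image
from controlnet_aux import OpenposeDetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Detect the skeleton from the doll photo. This detection step is what the
# Blender route skips, since Blender can export the exact skeleton directly.
openpose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
pose_map = openpose(Image.open("doll_photo.jpg").convert("RGB"))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

result = pipe(
    prompt="a knight standing in a forest",  # hypothetical prompt
    image=pose_map,
    num_inference_steps=30,
).images[0]
result.save("posed_character.png")
```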