this post was submitted on 23 Jul 2023
11 points (100.0% liked)
Stable Diffusion
1487 readers
Welcome to the Stable Diffusion community, dedicated to the exploration and discussion of the open source deep learning model known as Stable Diffusion.
Introduced in 2022, Stable Diffusion uses a latent diffusion model to generate detailed images based on text descriptions and can also be applied to other tasks such as inpainting, outpainting, and generating image-to-image translations guided by text prompts. The model was developed by the startup Stability AI, in collaboration with a number of academic researchers and non-profit organizations, marking a significant shift from previous proprietary models that were accessible only via cloud services.
founded 1 year ago
Locally with automatic1111, I'd say 10GB VRAM is a good starting point for a comfortable experience (up to 1024x768 in a single go, or a little less when ControlNet is involved). Though one can never have enough VRAM; when buying a new card I'd aim for 16GB to leave some headroom for future models. Image generation takes about 30 seconds. Upscaling is possible, but it can take quite a while and the results are a bit hit and miss.
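For cards below that 10GB mark, automatic1111 ships low-VRAM launch flags you can set in its startup config; a minimal sketch of a `webui-user.sh` tweak (flag names as of the webui versions current in mid-2023, so double-check against your install):

```shell
# webui-user.sh — extra arguments passed through to launch.py
# --medvram trades some speed for lower VRAM use; --lowvram is more aggressive
# --xformers enables memory-efficient attention (requires the xformers package)
export COMMANDLINE_ARGS="--medvram --xformers"
```

With `--medvram` even 6–8GB cards can usually generate at 512x512, at the cost of slower iterations.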
The plain Stable Diffusion base model is largely useless these days; go over to https://civitai.com/ and download something custom trained, as those give far better results. ControlNet is another absolute must-have and gives a lot of control over the resulting image (pose, 3D shape, sketch), along with far superior inpainting compared to plain img2img.
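Custom checkpoints from civitai (usually `.safetensors` files) just need to be dropped into automatic1111's model folder, after which they appear in the checkpoint dropdown on refresh. A sketch assuming the default stable-diffusion-webui install layout (the filename is a placeholder):

```shell
# Downloaded checkpoints go where automatic1111 looks for models
# (default directory layout of the stable-diffusion-webui repo)
mv ~/Downloads/example-model.safetensors \
   stable-diffusion-webui/models/Stable-diffusion/

# ControlNet models (used via the sd-webui-controlnet extension)
# live in the extension's own models folder instead:
# stable-diffusion-webui/extensions/sd-webui-controlnet/models/
```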
I have been playing around with SD for about half a year and it still blows my mind what kind of results you can get with rather minimal effort. It's worth mentioning, though, that the results are largely dictated by the AI; trying to get very specific results can be an absolute nightmare.