this post was submitted on 08 Aug 2023

Stable Diffusion

I have been running 1.4, 1.5, and 2 without issue, but every time I try to run SDXL 1.0 (via Invoke or Auto1111) it will not load the checkpoint.

I have the official Hugging Face versions of the checkpoint, refiner, LoRA offset, and VAE. They are all named to match what the UI expects, and they are all in the appropriate folders. When I pick the model to load, it tries for about 20 seconds, then pops a super long error in the Python instance and defaults to the last model I loaded. Oddly, it loads the refiner without issue.

Is this a case of my 8 GB of VRAM just not being enough? I have tried the no-half/full-precision arguments.
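For what it's worth, the low-VRAM flags commonly suggested for Auto1111 on 8 GB cards go into `COMMANDLINE_ARGS` in `webui-user.bat`. A minimal sketch, assuming a stock Windows install (flag names are from the standard web UI; pick the ones that fit your setup):

```shell
@rem webui-user.bat -- sketch for an 8 GB card, stock Auto1111 install assumed
@rem --medvram trades generation speed for lower VRAM use
@rem --no-half-vae keeps the VAE in fp32, which avoids black/NaN images on SDXL
set COMMANDLINE_ARGS=--medvram --no-half-vae
call webui.bat
```

If `--medvram` still runs out of memory, `--lowvram` is the more aggressive (and slower) variant.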

[–] [email protected] 1 points 1 year ago (2 children)

Good point. I watched a Nerdy Rodent video about installing it, and he showed that he used the sdxl_base_vae and sdxl_refiner_vae safetensors, and that is all he copied over. No other files. I went back to the repository, pulled those two files, and put them in my checkpoint folder. I reloaded my webui-user.bat file and got the new checkpoint to load. It took about a minute. I got one image to generate at 1024x1024, but it took about 3 minutes. It looked normal, but I cannot help thinking it should be a bit faster than that. Then I noticed my whole machine tanked while running it: it bogged down all 32 GB of my RAM, while my GPU was barely doing anything. Maybe there is some kind of memory leak. I may have to check my GPU drivers to see if something is going on.

Are those VAE safetensors the only files I need? The tutorial didn't talk about the LoRA offset or the VAE files... so I didn't add them this last time.
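For comparison, a stock Auto1111 install looks for the SDXL files in separate model folders rather than all in the checkpoint folder. A sketch of the usual layout (paths relative to the webui folder; the filenames shown are the official Hugging Face SDXL 1.0 release names, so verify them against your own copies):

```shell
# Typical Auto1111 model layout for SDXL 1.0
models/Stable-diffusion/sd_xl_base_1.0.safetensors       # base checkpoint
models/Stable-diffusion/sd_xl_refiner_1.0.safetensors    # refiner checkpoint
models/VAE/sdxl_vae.safetensors                          # optional standalone VAE
models/Lora/sd_xl_offset_example-lora_1.0.safetensors    # optional offset LoRA
```

Only the base (and optionally the refiner) checkpoint is required to generate; the standalone VAE and offset LoRA are extras.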

[–] [email protected] 1 points 1 year ago

Those safetensors files are all that I have ever used.

For reference, I'm using a 2080 Ti. That's got about 11 GB of VRAM, I think. I'm not having any freezes whatsoever. I've also tried it on my wife's shiny new 4080. There is definitely a speed difference, but again, no freezes or instability. Generating the 1024x1024 images does take forever, so I actually went back to 512x512 and stayed there. I can always upscale something that I like.

[–] [email protected] 0 points 1 year ago (1 children)

@Thanks4Nothing @RotaryKeyboard
Can you link that video?

I've not managed to get SDXL to work yet and figured I just did not know the right steps.

[–] [email protected] 1 points 1 year ago

I was having a hard time finding it again... it turns out it was the AI-trepeneur channel. At first it does seem like he's just going to point everyone toward his Patreon, but he does go into the manual process later on in the video.

https://youtu.be/rtUpIY9Opjs