this post was submitted on 09 Oct 2024
10 points (81.2% liked)
Stable Diffusion
Basically, avoid AMD if you're serious about it. DirectML just can't compete with CUDA; Stable Diffusion performance on Nvidia blows AMD away. And it's not only performance issues, there are often compatibility issues too.
A 4090 is as fast as it gets for consumer hardware. I've got a 3090, which has the same amount of VRAM as a 4090 (24GB) but is nowhere near as fast. So a 3090/Ti would be a good budget option.
However, if you're willing to wait, word is Nvidia will announce the 5000 series in January. I'm not sure when they'll actually release, though, and there are the usual stock problems with a new series launch. But the 5090 is rumored to have 32GB of VRAM.
ROCm is comparable, but very few applications work with it out of the box.
I've tried to find performance comparison data between AMD and Nvidia, and I see lots of people saying what you're saying, but I can never find numbers. Do you know of any?
If a card is less than half the price, maybe I don't mind its lower performance. It all depends on how much lower.
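To make that concrete, here's the back-of-the-envelope math, with made-up placeholder numbers (not benchmarks):

```python
# Hypothetical prices and speeds, purely for illustration;
# plug in real street prices and benchmark numbers instead.
nvidia_price, nvidia_its = 1600.0, 10.0   # dollars, iterations/sec
amd_price, amd_its = 700.0, 4.0

print(f"Nvidia: {nvidia_its / nvidia_price:.4f} it/s per dollar")
print(f"AMD:    {amd_its / amd_price:.4f} it/s per dollar")
```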
Also, is the same true under Linux?
It's highly dependent on the implementation.
https://www.pugetsystems.com/labs/articles/stable-diffusion-performance-professional-gpus/
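If you want your own numbers, a rough probe like this works on both vendors, since ROCm builds of PyTorch expose the GPU through the same torch.cuda API. A minimal sketch, not a real Stable Diffusion benchmark; the sizes are arbitrary:

```python
import time
import torch

# Quick-and-dirty throughput probe; runs on CUDA and ROCm builds alike.
assert torch.cuda.is_available(), "no GPU visible to torch"
print(torch.cuda.get_device_name(0))

# fp16 matmuls loosely stand in for diffusion workloads; the sizes
# here are arbitrary illustration values, not a real SD benchmark.
a = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
b = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)

for _ in range(10):          # warm-up, lets kernels compile/cache
    a @ b
torch.cuda.synchronize()

start = time.time()
for _ in range(200):
    a @ b
torch.cuda.synchronize()
print(f"{200 / (time.time() - start):.1f} matmuls/sec")
```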
The experience on Linux is good (use Docker, otherwise Python is dependency hell), but the basic torch-based implementations (Automatic1111, ComfyUI) have bad performance. I haven't managed to get SHARK to run on Linux; the project is very Windows-focused and has no setup documentation besides "run the installer".
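As a sanity check inside whatever container you end up in, a torch build reports which backend it was compiled against. A minimal sketch; `torch.version.hip` is only set on ROCm builds:

```python
import torch

# A CUDA build sets torch.version.cuda; a ROCm build sets torch.version.hip.
print("CUDA:", torch.version.cuda)   # None on ROCm builds
print("HIP: ", torch.version.hip)    # None on CUDA builds
print("GPU visible:", torch.cuda.is_available())
```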
Basically all of the VRAM trickery in torch depends on xformers, which is low-level CUDA code and therefore doesn't work on AMD. AMD has a project underway to port it, but it's currently too incomplete to use.
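For anyone wiring this up themselves, the usual workaround on AMD is PyTorch's own fused attention (available since PyTorch 2.0, and it runs on ROCm) instead of xformers. A minimal sketch of that fallback, with shapes per each library's convention:

```python
import torch
import torch.nn.functional as F

# xformers expects (batch, seq, heads, head_dim); needs a GPU to run.
q = torch.randn(1, 4096, 8, 64, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)

try:
    # xformers' memory-efficient attention: CUDA-only kernels.
    from xformers.ops import memory_efficient_attention
    out = memory_efficient_attention(q, k, v)
except ImportError:
    # torch's native fused attention also works on ROCm builds; it
    # expects (batch, heads, seq, head_dim), hence the transposes.
    out = F.scaled_dot_product_attention(
        q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2)
    ).transpose(1, 2)
```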
Good to know about CUDA/DirectML.
I found a couple of 2022 posts recommending 3090s, especially since cryptocurrency miners were selling lots of them cheap. Thanks for the heads-up about the 5000-series release; I suspect it will be above my budget, but it should net me better deals on a 4090 :P
DirectML sucks, but ROCm is great; you just need to check whether the software you want to use works with ROCm. Also note that only around four cards officially support ROCm.
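For checking the card itself, recent ROCm builds of torch expose the GPU's architecture name, which you can compare against the supported-GPU list in the ROCm docs. A sketch, assuming a ROCm build of PyTorch recent enough to have `gcnArchName`:

```python
import torch

assert torch.version.hip is not None, "this torch build is not ROCm"
props = torch.cuda.get_device_properties(0)
print(props.name)         # marketing name, e.g. a Radeon RX 7900 XTX
print(props.gcnArchName)  # LLVM target, e.g. gfx1100; compare it
                          # against the ROCm supported-GPU list
```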
Yeah, I don't think the 4090 is going down in price. As of now, they're more expensive than at launch, and it seems production is ramping down.