this post was submitted on 06 Sep 2023
26 points (93.3% liked)
LocalLLaMA
Community to discuss LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
founded 1 year ago
I use a ton of different ones. I can test specific models if you like.
The good ol' Anything v3 and DPM++ 2M Karras
that would give me a good baseline. Thanks! :)
Does the resolution or steps or anything else matter?
512x512 and 1024x1024 would be interesting
and 50 steps
That'd be awesome!
I ran these last night, but didn’t have the correct VAE, so I’m not sure if that affects anything. 512x512 was about 7.5 it/s. 1024x1024 was about 1.3 s/it (iirc). I used somebody else’s prompt which used LoRAs and embeddings, so I’m not sure how that affects things either. I’m not a professional benchmarker, so consider these numbers anecdotal at best. Hope that helps.
Edit: formatting
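For scale, those rates work out to roughly the following per-image times at the requested 50 steps (a back-of-envelope sketch using the anecdotal numbers above; note that the 512x512 figure is in it/s while the 1024x1024 figure is in s/it):

```python
# Rough per-image time estimates from the reported throughput.
steps = 50

secs_512 = steps / 7.5    # 512x512 ran at ~7.5 it/s
secs_1024 = steps * 1.3   # 1024x1024 ran at ~1.3 s/it (seconds per iteration)

print(f"512x512:   ~{secs_512:.1f} s per 50-step image")   # ~6.7 s
print(f"1024x1024: ~{secs_1024:.1f} s per 50-step image")  # ~65.0 s
```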
7.5 it/s for 512x512 is what I was looking for! On par with Nvidia (actually even faster than my 3070)!
Thank you very much! And what exactly did you use to install it?
The install wasn’t too hard. I mean, it wasn’t like just running a batch file on Windows, but if you have even a tiny bit of experience with the Linux shell and installing Python apps, you’ll be fine. You mostly just need to make sure you’re using the correct (ROCm) build of PyTorch. Happy to help, any time (best on evenings and weekends EST). Please DM.
I'm quite familiar with Linux and installing stuff, so I assume there's no compiling special versions of some weird packages and manually putting them in a venv or something 😄
Thanks again!
No special compiling. You just need the ROCm drivers from AMD and the ROCm build of PyTorch.
Also you’re welcome!
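For anyone following along, the PyTorch side of that typically looks something like the following (a sketch, not the commenter's exact commands; the `rocm5.6` index URL is an assumption — check pytorch.org's install selector for the wheel index matching your installed ROCm version):

```shell
# Create an isolated venv for the web UI / scripts
python3 -m venv venv
source venv/bin/activate

# Install the ROCm build of PyTorch from the official wheel index
# (rocm5.6 here is an example; pick the index matching your ROCm install)
pip install torch torchvision --index-url https://download.pytorch.org/whl/rocm5.6

# Sanity check: on a ROCm build, torch.version.hip is set and the
# CUDA-compatible API reports the AMD GPU as available
python -c "import torch; print(torch.version.hip, torch.cuda.is_available())"
```

The ROCm kernel driver and runtime themselves come from AMD's own repositories and are installed system-wide, separately from the venv.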