this post was submitted on 09 Jul 2023
8 points (100.0% liked)

LocalLLaMA

Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.


So I'm looking to get a GPU into my "beast" (a 24-core, 128 GB tower with too much PCIe). I first thought I might buy a used 3090, but then it hit me that most applications can work with multiple GPUs, so I decided to take €600 to eBay instead. Using TechPowerUp, I compared cards by memory bandwidth and FP32 performance, which brought me to the following options for my own LLaMA, Stable Diffusion and Blender setup: 5 Tesla K80s, 3 Tesla P40s, or 2 RTX 3060s.

I can't figure out which would be better for performance and future-proofing, though. The main difference I found is the CUDA compute capability, but I can't really figure out why that matters. The other things I found are that 5 K80s draw way more power than 3 P40s, and that if memory size is really important, the P40s are the way to go. But I couldn't figure out real performance numbers, since I can't find benchmarks like this one for Blender.

So if anyone has a good source for Stable Diffusion and LLaMA benchmarks, I'd appreciate it if you could share it. And if you have one (or several) of these cards and can tell me which option is better, I'd appreciate your opinion.
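For what it's worth, the paper comparison can be sketched in a few lines. The spec figures below are my recollection of the TechPowerUp pages (the K80 counted as both of its onboard GPUs combined), so treat them as assumptions to double-check rather than authoritative numbers:

```python
# Back-of-the-envelope comparison of the three ~€600 options.
# All figures are per whole card and assumed from TechPowerUp
# listings -- verify them before buying anything.
options = {
    "5x Tesla K80": {"vram_gb": 5 * 24, "bw_gbs": 5 * 480, "fp32_tflops": 5 * 8.2,  "watts": 5 * 300},
    "3x Tesla P40": {"vram_gb": 3 * 24, "bw_gbs": 3 * 347, "fp32_tflops": 3 * 11.8, "watts": 3 * 250},
    "2x RTX 3060":  {"vram_gb": 2 * 12, "bw_gbs": 2 * 360, "fp32_tflops": 2 * 12.7, "watts": 2 * 170},
}

for name, s in options.items():
    print(f"{name:13} | {s['vram_gb']:3d} GB VRAM | {s['bw_gbs']:4d} GB/s total"
          f" | {s['fp32_tflops']:5.1f} TFLOPS FP32 | {s['watts']:4d} W")
```

On these numbers the K80 stack wins on raw totals but also draws the most power; keep in mind that aggregate bandwidth only helps if the software actually splits work across the cards.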

[–] [email protected] 1 points 1 year ago

Also you're asking about multi gpu, I have a few other cards stuffed in my backplane. The GeForce GTX 1050 Ti has 4GB of vram, and is comparable to the P40 in performance. I have split a larger 33B model on the two cards. Splitting a large model is of course slower than running on one card alone, but is much faster than cpu (even with 48 threads). However speed when splitting depends on the speed of the pci-e bus, which for me is limited to gen 1 speeds for now. If you have a faster/newer pci-e standard then you'll see better results than me.