Fair enough, but if your baseline for comparison is wrong, then you can't make good assessments of the capabilities of different GPUs. And it's possible that you don't actually need a new GPU or more VRAM anyway, if your goal is to generate 1024x1024 images in Stable Diffusion and run a 13B LLM; both of those are possible with 8 GB of VRAM.
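For context, the usual way to fit a 13B model into 8 GB is a quantized model plus partial GPU offload. A rough sketch with llama-cpp-python (the model path and layer count here are illustrative, not from this thread):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Hypothetical path to a 4-bit quantized 13B GGUF file; point this at your own model.
llm = Llama(
    model_path="./models/llama-2-13b.Q4_K_M.gguf",
    n_gpu_layers=35,  # offload as many layers as fit in 8 GB; lower this on OOM
    n_ctx=2048,       # context window; larger values cost more VRAM
)

out = llm("Q: What is the capital of France? A:", max_tokens=32)
print(out["choices"][0]["text"])
```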
This is correct, yes. But I want a new GPU because I want to get away from Nvidia...
I CAN use 13B models and I can create 1024x1024 images, but not without issues: I have to make sure nothing else uses VRAM, and I still run out of memory quite often.
I want to make it more stable, and open the door to using bigger models or making bigger images.
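(For reference, in case anyone else hits the same OOM issues: the diffusers library has a couple of switches that trade speed for lower peak VRAM. A minimal sketch, assuming an SDXL checkpoint, which may differ from your actual setup:)

```python
import torch
from diffusers import StableDiffusionXLPipeline  # pip install diffusers accelerate

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,  # halves weight memory vs. float32
)
pipe.enable_model_cpu_offload()  # keep only the active submodule on the GPU
pipe.enable_attention_slicing()  # compute attention in chunks to cut peak VRAM

image = pipe("a lighthouse at dusk", height=1024, width=1024).images[0]
image.save("out.png")
```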
Yes, that makes more sense. I was concerned initially that you were looking to buy a new GPU with more VRAM solely because you couldn't do something you should already be able to do, and that this would be an unnecessary expense and/or wouldn't actually fix the problem, and that you'd be somewhat mad at yourself if you found out afterwards that "oh, I just needed to change this setting".
Thanks for the concern, but no worries: I did my fair share of optimization on my config, and I believe I've gotten everything out of it... I will 100% switch to AMD, so my question basically boils down to: can I sell my 3070, or do I have to keep it and put it into a "server" for running Stable Diffusion and oobabooga because AMD is still too wonky for that...
That's all. My decision doesn't depend on whether this AI stuff works on AMD; it just speeds things up if it does, because then I can sell my old card and get the money sooner.
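For what it's worth, once the new card is in, there's a quick way to check whether PyTorch even sees it: ROCm builds expose AMD GPUs through the regular torch.cuda API. A small sketch, assuming a ROCm-enabled PyTorch install:

```python
import torch

# On ROCm builds, torch.version.hip is set and AMD GPUs show up via torch.cuda.
if torch.cuda.is_available():
    backend = "ROCm/HIP" if getattr(torch.version, "hip", None) else "CUDA"
    print(f"Backend: {backend}, device: {torch.cuda.get_device_name(0)}")
else:
    print("No usable GPU found; Stable Diffusion/oobabooga would fall back to CPU.")
```

If that prints a ROCm/HIP device, both Stable Diffusion and oobabooga at least have a chance of working; if not, no amount of application-level tweaking will help.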