this post was submitted on 02 Oct 2023
LocalLLaMA
Community to discuss LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
What can I run on a 1080 Ti, and how does it compare to what's available in general?
There's a Hugging Face Space where you can select a model and your graphics card and see whether you can run it, or how many cards you would need: https://huggingface.co/spaces/Vokturz/can-it-run-llm
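If you'd rather estimate it yourself, a rough back-of-the-envelope calculation works too. This is just a sketch: the 1080 Ti's 11 GB, the bytes-per-parameter figures, and the 20% overhead factor are assumptions about typical quantization formats, not exact numbers.

```python
# Rough VRAM estimate for LLM inference (sketch, not exact).
# Assumes the weights dominate memory; KV cache and runtime overhead
# are lumped into a single fudge factor.
def estimate_vram_gb(params_billions: float, bytes_per_param: float, overhead: float = 1.2) -> float:
    return params_billions * bytes_per_param * overhead

gpu_vram_gb = 11  # GTX 1080 Ti

for name, bpp in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    needed = estimate_vram_gb(7, bpp)
    fits = "fits" if needed <= gpu_vram_gb else "does not fit"
    print(f"7B @ {name}: ~{needed:.1f} GB -> {fits} in {gpu_vram_gb} GB")
```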
You should be able to run inference on any 7B or smaller model with quantization.
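For example, with llama-cpp-python and a 4-bit GGUF build of a 7B model, inference could look roughly like this. The model file name is a placeholder for whatever quantized GGUF you actually download; treat this as a sketch rather than a definitive setup.

```python
# Sketch: quantized 7B inference with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-2-7b.Q4_K_M.gguf",  # hypothetical local 4-bit GGUF file
    n_gpu_layers=-1,   # offload all layers to the GPU (fits in 11 GB at 4-bit)
    n_ctx=2048,        # context window
)

out = llm("Q: What can I run on a 1080 Ti? A:", max_tokens=64)
print(out["choices"][0]["text"])
```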
Wow, thank you, I'll look into it!