I'm running it in GPT4All (CPU-based) with 64GB of RAM, and it runs pretty well. I'm not sure what you'd need if you were running it on GPU instead.
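If anyone wants to reproduce a CPU-only setup like that, here's a rough sketch using the gpt4all Python bindings; the model filename is a placeholder for whichever quantized model you actually download:

```python
from gpt4all import GPT4All

# CPU-only inference; the model file is fetched on first use if not present.
# The filename below is a placeholder -- substitute the model you're running.
model = GPT4All("ggml-gpt4all-j-v1.3-groovy.bin")

response = model.generate(
    "Explain GGML quantization in one sentence.",
    max_tokens=100,
)
print(response)
```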
The GGML version of WizardLM 30B at 4 bits on Oobabooga runs almost as fast as Llama2 7B running entirely on the GPU. I set it up with 10 threads on the CPU and ~20 layers offloaded to the GPU. That leaves plenty of room for a 4096-token context with a batch size of 2048. I can even run a 2GB Stable Diffusion model at the same time on my 3080's 16GB of VRAM.
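For reference, a minimal sketch of the same kind of CPU/GPU split through llama-cpp-python (one of the GGML loaders Oobabooga can use under the hood); the model path is a placeholder, and the exact layer count you can offload depends on your VRAM:

```python
from llama_cpp import Llama

# Load a 4-bit GGML quant, splitting work between CPU and GPU.
llm = Llama(
    model_path="models/wizardlm-30b.ggmlv3.q4_0.bin",  # placeholder filename
    n_ctx=4096,       # context window
    n_batch=2048,     # prompt-processing batch size
    n_threads=10,     # CPU threads for the layers left on the CPU
    n_gpu_layers=20,  # layers offloaded to VRAM
)

out = llm("Q: What is GGML? A:", max_tokens=128)
print(out["choices"][0]["text"])
```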
Have you tried any of the larger models? I just ordered 64GB of RAM. I also got Kobold mostly working. I hope to use it to try Falcon 40B. I really want to try a 70B model at 2-4 bits and see how its accuracy holds up; some back-of-the-envelope math on whether that fits is below.
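As a rough sanity check, the weights alone for an N-parameter model at b bits take about N*b/8 bytes; this ignores the KV cache and runtime overhead, so real usage runs somewhat higher, but it suggests 64GB of RAM is plenty for a 70B model at 2-4 bits:

```python
# Rough weight-memory estimate: params * bits / 8 bytes.
# Ignores KV cache, activations, and runtime overhead.
def weights_gb(params_billion: float, bits: float) -> float:
    return params_billion * 1e9 * bits / 8 / 1e9

for bits in (2, 3, 4):
    print(f"70B @ {bits}-bit: ~{weights_gb(70, bits):.0f} GB")
# 70B @ 2-bit: ~18 GB
# 70B @ 3-bit: ~26 GB
# 70B @ 4-bit: ~35 GB
```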