This one is only 7B parameters, but it punches well above its weight for such a small model:
https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha
My personal setup is capable of running larger models, but for everyday use like summarization and brainstorming, I find myself coming back to Starling the most. Since it's so small, inference is blazingly fast on my hardware. I don't rely on it for writing code, though; Deepseek-Coder-33B is my pick for that.
Others have said Starling's overall performance rivals that of LLaMA 70B. YMMV.
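For anyone who wants to try it, here's a minimal sketch of loading Starling with the Hugging Face transformers library. The prompt template is the OpenChat-style format from the model card; the fp16/CUDA settings are assumptions about your hardware, so adjust to taste:

```python
# Minimal sketch: load Starling-LM-7B-alpha with Hugging Face transformers.
# Assumes a CUDA GPU with roughly 16 GB of VRAM for fp16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "berkeley-nest/Starling-LM-7B-alpha"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Starling uses the OpenChat-style prompt format (per its model card).
prompt = (
    "GPT4 Correct User: Summarize the plot of Hamlet in two sentences."
    "<|end_of_turn|>GPT4 Correct Assistant:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
))
```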
What sort of tokens per second are you seeing with your hardware? Mind sharing some notes on what you're running there? Super curious!
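In case it helps compare numbers, here's a rough way to measure throughput yourself. This is a hedged sketch, not anyone's actual benchmark harness: it reuses the `model` and `tokenizer` from the snippet above, and the prompt is a placeholder. Greedy decoding is used so runs are repeatable; sampling settings will change the figures:

```python
# Rough tokens-per-second measurement: time one generation and divide
# the number of new tokens by elapsed wall-clock time.
import time

prompt = (
    "GPT4 Correct User: Brainstorm five blog post ideas about open-source AI."
    "<|end_of_turn|>GPT4 Correct Assistant:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

start = time.perf_counter()
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
elapsed = time.perf_counter() - start

new_tokens = outputs.shape[1] - inputs["input_ids"].shape[1]
print(f"{new_tokens} tokens in {elapsed:.2f}s -> {new_tokens / elapsed:.1f} tok/s")
```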