this post was submitted on 22 Jan 2025
Technology
Fuck it, I use local LLMs enough, will give this a crack.
Edit: it’s doing 6 paragraphs in 8.2 seconds; the last model I used did about 1 paragraph in 12 seconds. Crazy fast in my experience.
What GPU are you using? It looks to me like it requires quite a lot of VRAM.
What specs are you running it on?
How are they to run, how useful are they, and are there any you can recommend?
If you want a really simple way to run a variety of local models with a nice UI, take a look at https://jan.ai/
This is cool. Are there any decent ones that run in Docker and have a web UI?
I’ve been using Open WebUI (search for it with those terms) to run local models in a Docker container, served from Ollama, for the last few months and I love it.
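In case it helps, here’s a minimal sketch of that setup. It assumes the stock ollama/ollama and ghcr.io/open-webui/open-webui:main images with their default ports; the container and volume names are just what I use, so adjust to taste:

```
# run Ollama in a container (skip this if it's already installed on the host)
docker run -d --name ollama -p 11434:11434 -v ollama:/root/.ollama ollama/ollama

# run Open WebUI and point it at Ollama via the host gateway
docker run -d --name open-webui -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  ghcr.io/open-webui/open-webui:main
```

Then the web UI is at http://localhost:3000.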
Dead simple to run. I use Ollama to run local models and it’s like 3 words to set up from the command line.
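For reference, the whole flow really is about that short, assuming Linux and that the 8B distill is the tag you’re after:

```
# install Ollama (the one-liner from ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# pull the model and drop straight into an interactive chat
ollama run deepseek-r1:8b
```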
Useful is entirely relative. I use mine personally and somewhat professionally, but I only use it to draft text and manually alter it. AI is amazing, but it’s also crap. You gotta work it a bit.
Umm, this model, from what I can see: I’m using the 8B model and it’s fast to generate. Time will tell how good the quality is, but I’m impressed after a few minutes of play.
The 8B parameter tag is the distilled Llama 3.1 model, which should be great for general writing. 7B is distilled Qwen 2.5 Math, and 14B is distilled Qwen 2.5 (general purpose but good at coding). They have the entire table called out on their Hugging Face page, which is handy for knowing which one to use for specific purposes.
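If you’re pulling these through Ollama, my understanding is the tags map to those distills roughly like so:

```
ollama pull deepseek-r1:7b    # Qwen 2.5 Math 7B distill
ollama pull deepseek-r1:8b    # Llama 3.1 8B distill
ollama pull deepseek-r1:14b   # Qwen 2.5 14B distill
```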
The full model is 671B parameters and unfortunately not going to work on most consumer hardware; even at 4-bit quantization that’s roughly 335 GB of weights alone, so it’s still tethered to the cloud for most people.
Also, being a model made in China, it has some degree of mandated censorship baked in. Depending on the use case, that may be a point of consideration too.
Overall, it’s super cool to see something at this level be generally available, especially with all the technical details out in the open. Hopefully we’ll see more models with this level of capability become available, so there’s even more choice and competition.
Personally, the part I like is that it’s not Meta. Unfortunately, if the 8B is based on Llama, there could be Meta censorship baked in that we simply don’t know about.
Just remember, Ollama’s version of the 8B model is not the same as the original on Hugging Face. There’s a reason it’s a much smaller file: it’s quantized. That being said, my understanding is the quant is good.
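If you want to check exactly what you downloaded, ollama show prints the model’s metadata, including the quantization level (Q4_K_M is the usual default for these tags, as far as I know):

```
# inspect architecture, parameter count, and quantization of a pulled model
ollama show deepseek-r1:8b
```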