this post was submitted on 22 Jan 2025

cross-posted from: https://lemm.ee/post/53289064

top 26 comments
[–] [email protected] 11 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

Fuck it, I use local LLMs enough, will give this a crack.

Edit: it’s doing 6 paragraphs in 8.2 seconds, the last model I used was doing like 1 paragraph in 12 seconds. Crazy fast in my experience.

[–] Bjornir 3 points 2 weeks ago

What GPU are you using? It looks to me like it requires quite a lot of VRAM.

[–] [email protected] 1 points 2 weeks ago (2 children)

How are they to run, how useful are they, and are there any you can recommend?

[–] [email protected] 2 points 2 weeks ago (1 children)

If you want a really simple way to run a variety of local models with a nice UI take a look at https://jan.ai/

[–] [email protected] 1 points 1 week ago (1 children)

This is cool. Are there any decent ones that run in Docker and have a web UI?

[–] [email protected] 1 points 1 week ago

I’ve been using Open WebUI (search for it with those terms) to run local models in a Docker container, served from Ollama, for the last few months and I love it.
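If it helps anyone, here's roughly what that setup looks like with the Docker SDK for Python instead of a raw `docker run` (the port mapping and volume name are just illustrative; the image is Open WebUI's official one):

```python
import docker  # pip install docker

client = docker.from_env()
client.containers.run(
    "ghcr.io/open-webui/open-webui:main",
    name="open-webui",
    detach=True,
    ports={"8080/tcp": 3000},  # web UI on http://localhost:3000
    volumes={"open-webui": {"bind": "/app/backend/data", "mode": "rw"}},
    # point the UI at an Ollama instance running on the host
    environment={"OLLAMA_BASE_URL": "http://host.docker.internal:11434"},
    extra_hosts={"host.docker.internal": "host-gateway"},  # needed on Linux
)
```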

[–] [email protected] 2 points 2 weeks ago (2 children)

Dead simple to run. I use Ollama to run local models, and it's like three words to set up from the command line.

Useful is entirely relative. I use mine personally and somewhat professionally, but I only use it to draft text and manually alter it. AI is amazing, but it’s also crap. You gotta work it a bit.

As for this model: I'm using the 8B variant and it's fast to generate. Time will tell how good the quality is, but I'm impressed after a few minutes of play.
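For anyone who wants to script against it rather than chat: once Ollama is serving, its local REST API is a couple of lines of Python (the model tag here assumes you've already run `ollama pull deepseek-r1:8b`):

```python
import requests

# Ollama listens on localhost:11434 by default; stream=False returns one JSON blob
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-r1:8b",
        "prompt": "Draft a short paragraph introducing local LLMs.",
        "stream": False,
    },
)
print(resp.json()["response"])
```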

[–] [email protected] 7 points 2 weeks ago (1 children)

The 8B parameter tag is the distilled Llama 3.1 model, which should be great for general writing. 7B is distilled Qwen 2.5 Math, and 14B is distilled Qwen 2.5 (general purpose but good at coding). They have the entire table called out on their Hugging Face page, which is handy for knowing which one to use for specific purposes.

The full model is 671B and unfortunately not going to work on most consumer hardware, so it is still tethered to the cloud for most people.
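As rough napkin math (weights only, ignoring KV cache and activation overhead), here's why the 671B model stays in the cloud while the distills don't:

```python
def approx_weight_gb(params_billion: float, bits_per_weight: int) -> float:
    """Memory for the weights alone, in GB (decimal)."""
    return params_billion * bits_per_weight / 8

for name, b in [("distilled 8B", 8), ("distilled 14B", 14), ("full R1, 671B", 671)]:
    print(f"{name}: ~{approx_weight_gb(b, 16):.0f} GB at FP16, "
          f"~{approx_weight_gb(b, 4):.0f} GB at 4-bit")
# full R1 needs ~336 GB even at 4-bit -- far beyond any consumer GPU,
# while the 8B distill fits in ~4 GB of VRAM at 4-bit
```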

Also, being a model made in China, there is some degree of mandated censorship. So depending on your use case, this may be a point of consideration, too.

Overall, it’s super cool to see something at this level become generally available, especially with all the technical details out in the open. Hopefully we’ll see more models with this level of capability, so there are even more choices and more competition.

[–] [email protected] 1 points 1 week ago

Personally, the part I like is that it's not Meta. Unfortunately, if the 8B is based on Llama, there could be Meta censorship baked in that we simply don't know about.

[–] [email protected] 1 points 1 week ago

Just remember, Ollama's version of the 8B model is not the same as the original on Hugging Face; there's a reason it's a much smaller file size. That said, my understanding is the quant is good.
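You can check the size gap yourself. A quick sketch with huggingface_hub (the repo id is DeepSeek's official distilled 8B; the ~5 GB figure for Ollama's default 4-bit quant is approximate):

```python
from huggingface_hub import HfApi

info = HfApi().model_info(
    "deepseek-ai/DeepSeek-R1-Distill-Llama-8B", files_metadata=True
)
total_bytes = sum(
    f.size or 0 for f in info.siblings if f.rfilename.endswith(".safetensors")
)
# BF16 safetensors come to roughly 16 GB, vs ~5 GB for the 4-bit quant
print(f"original weights: ~{total_bytes / 1e9:.1f} GB")
```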

[–] [email protected] 1 points 1 week ago

What specs are you running it on?

[–] [email protected] 6 points 2 weeks ago (3 children)

Does it deny Tiananmen Square?

[–] [email protected] 6 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

Using the 7B parameter variant:

[–] [email protected] 4 points 1 week ago (1 children)

I'm not sure if this is funny or just sad.

[–] [email protected] 2 points 1 week ago
[–] [email protected] 2 points 1 week ago

Hahah fuck, that's the funniest, most depressing thing ever. Please repost this image; I reckon it would be a good post.

[–] [email protected] 2 points 1 week ago
[–] [email protected] 2 points 1 week ago* (last edited 1 week ago) (1 children)

It's MIT licensed, so anyone is free to go about decensoring it. There are already "abliterated" (decensored) variants uploaded to Hugging Face, at least for the distilled models.

This procedure also decensors stuff that Western models routinely censor. So, ironically, these Chinese open-source models are giving us the most free-speech-friendly LLMs around.
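For the curious, "abliteration" is conceptually simple: estimate the direction in activation space that lights up when the model refuses, then project it out of the weights. A toy sketch of the idea in PyTorch (not the actual tooling, which applies this across every layer of a real model):

```python
import torch

def refusal_direction(h_refused: torch.Tensor, h_answered: torch.Tensor) -> torch.Tensor:
    # h_*: (n_prompts, hidden_dim) residual-stream activations at one layer,
    # collected from prompts the model refuses vs. ones it answers
    d = h_refused.mean(dim=0) - h_answered.mean(dim=0)
    return d / d.norm()

def ablate(weight: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
    # Remove the component along d from a (hidden_dim, in_dim) weight:
    # W <- (I - d d^T) W, so the layer can no longer write in that direction
    return weight - torch.outer(d, d) @ weight
```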

[–] [email protected] 1 points 1 week ago (1 children)

I use a Dolphin fine-tuned Meta Llama model myself, but I'll have to compare it to this one.

[–] [email protected] 5 points 1 week ago

Have you tried a Tuna Tuned Obama Llama instead?

[–] [email protected] 1 points 2 weeks ago (1 children)

Asked very plainly, it refuses to answer questions related to it, but it takes very little convincing to get it talking about it. Much softer censorship than in most of the other available models.

[–] [email protected] 2 points 1 week ago

The cool thing about this is that they also published a bunch of details about their approach, as well as tooling around it!

[–] [email protected] 1 points 2 weeks ago (1 children)

So what of its reasoning? Can it deduce? Can it follow specific logic/equations, in mathematical notation or in plain language?

[–] [email protected] 1 points 1 week ago

Try it out for yourself: https://chat.deepseek.com/

It can understand LaTeX as well as output it. In my limited testing on sample physics problems, it performs pretty well. It also scored 100% on the 2023 A Level maths exam.
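If you'd rather script it than use the web chat, DeepSeek also exposes an OpenAI-compatible API. A minimal sketch (model name per their docs; the API key is your own):

```python
from openai import OpenAI

client = OpenAI(api_key="sk-...", base_url="https://api.deepseek.com")
resp = client.chat.completions.create(
    model="deepseek-reasoner",  # the R1 model
    messages=[{
        "role": "user",
        "content": r"Evaluate $\int_0^1 x e^x \, dx$ and show the working in LaTeX.",
    }],
)
print(resp.choices[0].message.content)
```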