[–] [email protected] 65 points 2 days ago (4 children)

The best part is that it's open source and available for download

[–] [email protected] 24 points 2 days ago (5 children)

So can I have a private version of it that doesn't tell everyone about me and my questions?

[–] [email protected] 27 points 2 days ago (1 children)
[–] [email protected] 2 points 1 day ago

Thank you very much. I did ask ChatGPT some technical questions about some... subjects... but having something that is private AND can give me all the information I want/need is a godsend.

Goodbye, ChatGPT! I barely used you, but that is a good thing.

[–] lambda 4 points 2 days ago

Yep, look up Ollama

[–] [email protected] 3 points 1 day ago (2 children)

Yeah, but you have to run a different model if you want accurate info about China.

[–] [email protected] 2 points 3 hours ago

Unfortunately it's trained on the same US-propaganda-filled English data as any other LLM and spits out those same talking points. The censorship is easy to bypass too.

[–] [email protected] 5 points 1 day ago (1 children)

Yeah but China isn't my main concern right now. I got plenty of questions to ask and knowledge to seek and I would rather not be broadcasting that stuff to a bunch of busybody jackasses.

[–] [email protected] -1 points 1 day ago

I agree. I don’t know enough about all the different models, but surely there’s a model that’s not going to tell you “<whoever’s> government is so awesome” when asking about rainfall or some shit.

[–] [email protected] 2 points 2 days ago (2 children)

Can someone with the knowledge please answer this question?

[–] boomzilla 4 points 1 day ago* (last edited 1 day ago)

I watched one video and read two pages of text, so take this with a mountain of salt. From that I gathered that DeepSeek R1 is the model you interact with when you use the app. The complexity of a model is expressed as its number of parameters (though I don't know yet what those are), which dictate its hardware requirements. R1 contains 671 bn parameters and requires very, very beefy server hardware; a video said it would take tens of GPUs. And it seems you want a lot of VRAM on your GPU(s), because that's what AI craves. I've also read that 1 bn parameters require about 2 GB of VRAM.
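
A rough back-of-the-envelope check of that rule of thumb, assuming 16-bit weights (2 bytes per parameter); quantized models need considerably less:

```python
# Rule-of-thumb VRAM estimate: parameters * bytes per parameter.
# At 16-bit precision (2 bytes/param) this reproduces the
# "1 bn parameters ~ 2 GB VRAM" figure mentioned above.
def vram_gb(params_bn: float, bytes_per_param: float = 2.0) -> float:
    return params_bn * 1e9 * bytes_per_param / 1024**3

print(f"3 bn model:  ~{vram_gb(3):.1f} GB")    # ~5.6 GB at 16-bit
print(f"671 bn (R1): ~{vram_gb(671):.0f} GB")  # ~1250 GB at 16-bit
```

(Ollama's downloads are usually 4-bit quantized, which is roughly a quarter of that, and why a 3 bn model fits on a 6 GB card.)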

Got a 6-core Intel, a GTX 1060 with 6 GB VRAM, 16 GB RAM and EndeavourOS as a home server.

I just installed Ollama in about half an hour, using Docker on the above machine, with no previous experience with neural nets or LLMs apart from chatting with ChatGPT. The installation includes Open WebUI, which seems better than the default UI you get with ChatGPT. I downloaded the qwen2.5:3b model (see https://ollama.com/search), which contains 3 bn parameters. I was blown away by the result. It speaks multiple languages (including displaying e.g. hiragana), knows how many fingers a human has, can calculate, can write valid Rust code and explain it, and it is much faster than what I get from free ChatGPT.

The WebUI offers a nice feedback form for every answer, where you can give hints to the AI via text, a 1-10 score, and thumbs up/down. I don't know how it incorporates that feedback, though. The WebUI also seems to support speech-to-text and vice versa. I'm eager to see if this Docker setup even offers APIs.
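
For anyone curious about the API question: Ollama does expose a REST API, on localhost:11434 by default. A minimal sketch with Python's requests, reusing the qwen2.5:3b model from the comment above (adjust to whatever you pulled):

```python
import requests

# Ollama's REST API listens on localhost:11434 by default.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen2.5:3b",  # the model pulled above; adjust as needed
        "prompt": "How many fingers does a human have?",
        "stream": False,        # return a single JSON object, not a stream
    },
    timeout=120,
)
print(resp.json()["response"])
```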

I probably won't use the proprietary stuff anytime soon.

[–] [email protected] 8 points 2 days ago (1 children)

Yes, you can run a smaller, distilled version of it on your own PC.
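
For anyone wondering what that looks like in practice: Ollama lists the distilled variants under the deepseek-r1 tag in several sizes. A minimal sketch using the official ollama Python client; the 7b tag is one of the listed sizes, so pick whatever fits your GPU:

```python
import ollama  # pip install ollama; assumes an Ollama server is running locally

# Pull a distilled variant small enough for a desktop GPU,
# then ask it a question entirely on your own machine.
ollama.pull("deepseek-r1:7b")
reply = ollama.chat(
    model="deepseek-r1:7b",
    messages=[{"role": "user", "content": "Explain what model distillation is."}],
)
print(reply["message"]["content"])
```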

[–] [email protected] 5 points 2 days ago

Apparently it runs on a phone too! About 3 cards down was another post linking to instructions on how to run it locally on a phone, in a container app or Termux. Really interesting. I may try it out in a VM on my server.

[–] [email protected] 8 points 1 day ago (2 children)

I asked it about Tiananmen Square, and it told me it can't answer that because it can only respond with "harmless" responses.

[–] [email protected] 24 points 1 day ago (3 children)

Yes, the online model has those filters. Someone tried it with one of the downloaded models and it answers just fine.

[–] [email protected] 1 points 18 hours ago

You misspelled "lies". Or were you trying to type "psyops tool"??

[–] [email protected] 5 points 1 day ago (1 children)

When running locally, it works just fine without filters

[–] [email protected] 1 points 18 hours ago

I tried the smaller models and it's not fine. It's hard-coded.

[–] [email protected] 2 points 1 day ago (1 children)
[–] [email protected] 2 points 1 day ago

Does the same thing on my local instance.

[–] [email protected] 6 points 1 day ago

Yes, but your server can't handle the biggest LLM.