this post was submitted on 04 Feb 2025
41 points (91.8% liked)

Technology

2097 readers

Post articles or questions about technology

founded 2 years ago
There’s an idea floating around that DeepSeek’s well-documented censorship exists only at its application layer and goes away if you run the model locally (that is, if you download the AI model and run it on your own computer).

But DeepSeek’s censorship is baked-in, according to a Wired investigation which found that the model is censored on both the application and training levels.

For example, a locally run version of DeepSeek revealed to Wired, via its visible reasoning feature, that it should “avoid mentioning” events like the Cultural Revolution and focus only on the “positive” aspects of the Chinese Communist Party.

A quick check by TechCrunch of a locally run version of DeepSeek available via Groq also showed clear censorship: DeepSeek happily answered a question about the Kent State shootings in the U.S., but replied “I cannot answer” when asked about what happened in Tiananmen Square in 1989.

[–] [email protected] 0 points 1 day ago (6 children)

At least, unlike “Open”AI, it’s open source, so you can see and fix its biases.

[–] [email protected] 2 points 18 hours ago (2 children)

No, it's not open source. Only the model weights are open, the datasets and code used to train the model are not.

[–] [email protected] 0 points 8 hours ago* (last edited 8 hours ago)

Pretty sure the code used to train the model is open source? I could be wrong about the literal source code, but at least a detailed description of their process was released as open research. There is a current effort to reproduce it: https://github.com/huggingface/open-r1

[–] [email protected] 1 points 18 hours ago (1 children)

The guardrails can be removed, though, and several derivative models already do this, so his point stands regardless.

[–] [email protected] 1 points 48 minutes ago

You cannot unstir an egg: the guardrails and biases can be fine-tuned to be less visible, but the training itself is ultimately irreversible.
