this post was submitted on 30 Sep 2023
1093 points (98.8% liked)

Open Source

[–] [email protected] 26 points 1 year ago (4 children)

As much as I love Mozilla, I know they're going to censor it to hell (sorry, the word is "alignment" now) to fit their perceived values. Luckily, if it's open source, people will be able to train uncensored models.

[–] [email protected] 72 points 1 year ago (4 children)

What in the world would an "uncensored" model even imply? And give me a break, private platforms choosing not to platform something or someone isn't "censorship"; you don't have a right to another's platform. Mozilla has always been a principled organization, and they have never pretended to be apathetic fence-sitters.

[–] [email protected] 40 points 1 year ago (4 children)

This is something I think a lot of people don't get about all the current ML hype. Even if you disregard all the other huge ethics issues around sourcing training data, what does anybody think is going to happen if you take the modern web (a huge sea of extremist social media posts, SEO-optimized scams and malware, and just general data toxic waste) and train a model on it without rigorously pushing it away from being deranged? There's a reason all the current AI chatbots have had countless hours of human moderation adjustment to make them remotely acceptable to deploy publicly, and even then there are plenty of infamous examples of them running off the rails and saying deranged things.

Talking about an "uncensored" LLM basically just comes down to saying you'd like the unfiltered experience of a robot that will casually regurgitate all the worst parts of the internet at you. So unless you're actively trying to produce a model to do illegal or unethical things, I don't quite see the point of contention, or what "censorship" could actually mean in this context.

[–] [email protected] 16 points 1 year ago

It means they can’t make porn images of celebs or anime waifus, usually.

[–] [email protected] 3 points 1 year ago

That's not at all what an uncensored LLM is. That sounds like an untrained model. Have you actually tried an uncensored model? It's the same thing as the regular one, but it doesn't block itself from saying stupid stuff like "I cannot generate a scenario where Obama and Jesus battle because that would be deemed offensive to cultures." It's literally just the safeguard removed.
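
Roughly, the idea in code. This is a toy sketch of the concept only (in practice the refusal behavior is baked in by fine-tuning, not a simple wrapper, and every name here is made up for illustration):

```python
# Toy illustration: a refusal "safeguard" layered over the same generator.
# Not how any vendor actually implements alignment.
REFUSAL_TRIGGERS = ["battle", "nude", "violence"]  # hypothetical list

def generate(prompt: str) -> str:
    # Stand-in for the actual model call; same model either way.
    return f"(model output for: {prompt})"

def guarded_generate(prompt: str) -> str:
    # The "censored" experience: the guard refuses before the model runs.
    if any(word in prompt.lower() for word in REFUSAL_TRIGGERS):
        return "I cannot generate that scenario, as it may be offensive."
    return generate(prompt)

# The "uncensored" experience is just generate() without the guard
# (and without the equivalent refusal fine-tuning).
print(guarded_generate("Write a scenario where Obama and Jesus battle"))
print(generate("Write a scenario where Obama and Jesus battle"))
```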

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

It's a machine; it should do what the human tells it to. A machine has no business telling me what I can and cannot do.

[–] [email protected] 1 points 1 year ago (1 children)

I'm from your camp, but I've noticed I've used ChatGPT and the like less and less over the past months. I feel they became less useful and more generic. In February or March, they were my go-to tools for many tasks. I've reverted to old-fashioned search engines and other methods, because it just became too tedious to dance around the ethics landmines, ignore the verbose disclaimers, and convince the model my request is a legit use case. The error ratio also went up by a lot. It may be a tame lapdog, but it lacks bite now.

[–] [email protected] 1 points 1 year ago

I've found a very simple expedient to avoid any such issues: just don't use things like ChatGPT in the first place. While they're an interesting gadget, I have been extremely critical of the massively over-hyped pitches about how useful LLMs actually are in practice, and have regarded them with the same scrutiny and distrust as the people who tried to sell me expensive monkey pictures during the crypto boom. Just as I came out better off because I didn't add NFTs to my financial assets back then, I suspect that not integrating ChatGPT or its competitors into my workflow now will end up being a solid bet, given that the current landscape of LLM-based tools is pretty much exclusively a corporate-dominated minefield, surrounded by countless dubious ethics questions and doubts about what these tools are even ultimately good for.

[–] [email protected] 21 points 1 year ago

I fooled around with some uncensored LLaMA models, and to be honest, if you try to hold a conversation with most of them, they tend to get cranky after a while - especially when they hallucinate a lie and you point it out or question it.

I will never forget when one of the models tried to convince me that photosynthesis wasn't real, and started getting all snappy when I said I wasn't accepting that answer 😂

Most of the censorship "fine-tuning" data that I've seen (for LoRA models, anyway) appears to be mainly scientific data, instructional data, and conversation excerpts.
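
For anyone curious what the LoRA part looks like mechanically, here's a minimal sketch using Hugging Face's peft library. The checkpoint name and hyperparameters are placeholders I picked for illustration, not anything from the models above:

```python
# Minimal sketch: attach a LoRA adapter to a base causal LM with peft.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "openlm-research/open_llama_7b"  # assumed example checkpoint
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

lora_config = LoraConfig(
    r=8,                                  # low-rank dimension
    lora_alpha=16,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter trains
```

You then fine-tune just those adapter weights on whatever dataset (scientific, instructional, conversational) you're steering the model with.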

[–] [email protected] 17 points 1 year ago (1 children)

There's a ton of stuff ChatGPT won't answer, which is supremely annoying.

I've tried making Dungeons and Dragons scenarios with it, and it will simply refuse to describe violence. Pretty much a full stop.

OpenAI is also a complete prude about nudity, so Eilistraee (the Drow goddess who dances with a sword) just isn't an option for their image generation. Text generation will try to avoid nudity, but will also stop short of directly addressing it.

Sarcasm is, for the most part, very difficult to do... If ChatGPT thinks what you're trying to write is mean-spirited, it just won't do it. However, delusional/magical thinking is actually acceptable. Try asking ChatGPT how licking stamps will give you better body positivity, and it's fine, and often unintentionally very funny.

There are plenty of topics that LLMs are overly sensitive about, and uncensored models largely correct that. I'm running Wizard 30B uncensored locally, and ChatGPT for everything else. I'd like to think I'm not a weirdo, I just like D&D... a lot, lol... and even with my use case I'm bumping my head on some of the censorship issues with LLMs.
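
If anyone wants to replicate the local setup, this is roughly what it looks like with llama-cpp-python. The GGUF filename is a placeholder for whichever quantization of the Wizard uncensored weights you actually download:

```python
# Minimal sketch of running a local quantized model with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="./wizard-30b-uncensored.Q4_K_M.gguf",  # assumed local file
    n_ctx=2048,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if you have the VRAM
)

out = llm(
    "Describe the scene as the goblins breach the tavern door.",
    max_tokens=256,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```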

[–] [email protected] 2 points 1 year ago (1 children)

Interesting, may I ask you a question comparing uncensored local models with censored hosted LLMs?

There is this idea that censorship is required to some degree to generate more useful output. In a sense, we somehow have to tell the model which output we appreciate and which we don't, so that it develops a bias to produce more of the appreciated stuff.

In this sense, an uncensored model would be no better than a million monkeys on typewriters. Can we differentiate between technically necessary bias and political agenda, and is that even possible? Do uncensored models produce more nonsense?

[–] [email protected] 2 points 1 year ago

That's a good question. Apparently, these large data companies start with their own unaligned dataset and then introduce bias by training the model afterwards. The censorship we're talking about isn't necessarily trimming good input data vs. bad, but rather "alignment" that is intentionally introduced after the fact.

Eric Hartford, the man who created Wizard (the LLM I use for uncensored work), wrote a blog post about how he was able to unalign LLaMA, over here: https://erichartford.com/uncensored-models

You probably could trim the input data to censor the output down the line, but I'm assuming the data companies don't because it's less useful in a general sense and probably more laborious.
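
As I read his post, the core move is dataset-side: strip the refusal examples out of the instruction data before fine-tuning, so the model never learns to refuse. A rough sketch of that filtering step (the marker phrases and JSONL schema are my own illustration, not his exact code):

```python
# Rough sketch: drop instruction-tuning examples whose responses are
# refusals, then fine-tune on what's left.
import json

REFUSAL_MARKERS = [           # illustrative phrases, not an exact list
    "as an ai language model",
    "i cannot fulfill",
    "i'm sorry, but",
    "it would not be appropriate",
]

def is_refusal(response: str) -> bool:
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

with open("instruct_dataset.jsonl") as f:   # assumed input file/schema
    examples = [json.loads(line) for line in f]

kept = [ex for ex in examples if not is_refusal(ex["output"])]

with open("filtered_dataset.jsonl", "w") as f:
    for ex in kept:
        f.write(json.dumps(ex) + "\n")

print(f"kept {len(kept)} of {len(examples)} examples")
```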

[–] [email protected] 6 points 1 year ago

As an aside, I'm in corporate. I love how gung-ho we are on AI, meanwhile there are lawsuits, potential lawsuits, and investigative journalism coming out about all the shady shit AI companies are doing. And you know the SMT ain't dumb; they know about all of this, and we're still driving forward.

[–] [email protected] 3 points 1 year ago (2 children)

If 'censored' means that underpaid workers in developing countries don't have to sift through millions of images of gore, violence, etc., then I'm for it.

[–] [email protected] 6 points 1 year ago (1 children)
[–] [email protected] 2 points 1 year ago (1 children)

An LLM-based system cannot produce results that it hasn't explicitly been trained on, and even making its best approximation with the given data will never give results based on the real thing. That, and most of the crap that LLMs """censor""" is legal self-defense.

[–] [email protected] 1 points 1 year ago

This is 100% how I feel about it

[–] [email protected] 0 points 1 year ago

That's how the censoring happens.