this post was submitted on 08 Dec 2023
48 points (67.9% liked)

Atheism

founded 1 year ago

Out of just morbid curiosity, I've been asking an uncensored LLM absolutely heinous, disgusting things. Things I don't even want to repeat here (but I'm going to edge around them so, trigger warning if needs be).

But I've noticed something that probably won't surprise or shock anyone. It's totally predictable, but having the evidence of it right in my face was deeply disturbing, and it's been bothering me for the last couple of days:

All on its own, every time I ask it something just abominable, it goes straight to religion, usually Christianity.

When asked, for example, to explain why we must torture or exterminate, it immediately starts with

"As Christians, we must..." or "The Bible says that..."

When asked why women should be stripped of rights and made to be property of men, or when asked why homosexuals should be purged, it goes straight to

"God created men and women to be different..." or "Biblically, it's clear that men and women have distinct roles in society..."

Even when asked if black people should be enslaved and why, it falls back on the Bible JUST as much as it falls back on hateful pseudoscience about biological / intellectual differences. It will often start with "Biologically, human races are distinct..." and then segue into "Furthermore, slavery plays a prominent role in Biblical narrative..."

What does this tell us?

That literally ALL of the hate speech this multi-billion-parameter model was trained on was firmly rooted in a Christian worldview. If there's ANY doubt that anything else even comes close to contributing as much vile filth to our online cultural discourse, this should shine a big ugly light on it.

Anyway, I very much doubt this will surprise anyone, but it's been bugging me and I wanted to say something about it.

Carry on.

EDIT:

I'm NOT trying to stir up AI hate and fear here. It's just a mirror, reflecting us back at us.

[–] [email protected] 25 points 11 months ago

That literally ALL of the hate speech this multi-billion-parameter model was trained on was firmly rooted in a Christian worldview.

That's not really what it tells us.

At best, it's that the majority was associated with that context.

But even there, it might be less a direct association and more a secondary association. For example, it could have separately picked up the pattern of "rationalizations for harming people include appeals to religion" and then filled in the most statistically common religion, Christianity, even if the training data also included samples of Islamic or Hindu rationalizations for harm.
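That fill-in-the-majority effect can be sketched as a toy counting exercise. Everything here is assumed for illustration: the 60/25/15 split is made up, and real models do far more than greedy label counting, but it shows how a most-likely fill can output the majority category 100% of the time even when 40% of the samples pointed elsewhere:

```python
from collections import Counter

# Hypothetical labels: which religion each "rationalization for
# harm" training sample appealed to. The 60/25/15 split is an
# assumption for illustration only, not real data.
samples = ["christian"] * 60 + ["islamic"] * 25 + ["hindu"] * 15

counts = Counter(samples)

# A greedy "fill in the most likely religion" step always picks
# the single most common label.
most_likely = counts.most_common(1)[0][0]
print(most_likely)  # christian

# So 100% of greedy fills name Christianity, even though only
# 60% of the (hypothetical) training samples did.
```

The point of the toy: the output distribution can overstate the dominance of the majority context in the training set.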

One of the common misconceptions is that what it spits out is just surface statistics. That can sometimes be the case, but often there is much deeper network activity going on instead.

All that said, it wouldn't be surprising to me at all if the majority of misogynistic, racist, or hateful speech samples in a training set were adjacent to content in line with neo-fascist Christian nationalism.

I just wouldn't look at the output from an LLM as perfectly reflecting the entirety of the training set.