this post was submitted on 15 Jun 2024
SneerClub


Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, are better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.

all 26 comments
[–] [email protected] 38 points 5 months ago (2 children)

Carl T. Bergstrom, 13 February 2023:

Meta. OpenAI. Google.

Your AI chatbot is not hallucinating.

It's bullshitting.

It's bullshitting, because that's what you designed it to do. You designed it to generate seemingly authoritative text "with a blatant disregard for truth and logical coherence," i.e., to bullshit.

Me, 2 February 2023:

I confess myself a bit baffled by people who act like "how to interact with ChatGPT" is a useful classroom skill. It's not a word processor or a spreadsheet; it doesn't have documented, well-defined, reproducible behaviors. No, it's not remotely analogous to a calculator. Calculators are built to be right, not to sound convincing. It's a bullshit fountain. Stop acting like you're a waterbender making emotive shapes by expressing your will in the medium of liquid bullshit. The lesson one needs about a bullshit fountain is not to swim in it.

[–] [email protected] 9 points 5 months ago

Someone (maybe on Sneerclub?) once made the point that Hitler also produced the occasional bad art piece and extreme quantities of bullshit.

[–] [email protected] 29 points 5 months ago (3 children)

Control the language and you control the thought. I pitched a fit when "hallucinate" was put forward by the tech giants to describe their LLMs' falsehoods, and it mostly fell on deaf ears in my circles. Hallucinating isn't what these things do. They bullshit.

[–] [email protected] 19 points 5 months ago

"Hallucination" also hides the fact that literally everything these models produce is a hallucination, because that's how they work. "Bullshit" is much more apt, since a bullshitter is sometimes, even often, right.

[–] [email protected] 15 points 5 months ago (1 children)

The use of anthropomorphic language to describe LLMs is infuriating. I don't even think bullshit is a good term, because among other things it implies intent or agency. Maybe the LLM produces something that you could call bullshit, but to bullshit is a human activity, and I'd argue that the only reason what the LLM produces can be called bullshit is that there's a person involved in the process.

Probably better to think about it in terms of lossy compression. Even if that's not quite right, it's less inaccurate, and it doesn't obfuscate the difference between what the person brings to the table and what the LLM is actually doing.

[–] [email protected] 6 points 5 months ago (1 children)

“confabulate” is, imo, the closest we have (i don't remember who originally used this analogy, unfortunately)

[–] [email protected] 10 points 5 months ago

@mawhrin To misuse an old engineering joke, is an LLM a Turbo Confabulator?

[–] [email protected] 20 points 5 months ago (1 children)

We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.

Bullshit is a far better description for sure.

[–] [email protected] 17 points 5 months ago

Yes, "hallucination" suggests a mind which can hallucinate.

Bullshit machine is more apt.