this post was submitted on 12 Jun 2024
177 points (91.2% liked)

Technology

34877 readers

This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.


Ask in DM before posting product reviews or ads. Otherwise, such posts are subject to removal.


Rules:

1: All Lemmy rules apply

2: Do not post low-effort content

3: NEVER post naziped*gore stuff

4: Always post article URLs or their archived version URLs as sources, NOT screenshots. Help the blind users.

5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)

6: No advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

7: Crypto-related posts, unless essential, are disallowed

founded 5 years ago

It's time to call a spade a spade. ChatGPT isn't just hallucinating. It's a bullshit machine.

From TFA (thanks @mxtiffanyleigh for sharing):

"Bullshit is 'any utterance produced where a speaker has indifference towards the truth of the utterance'. That explanation, in turn, is divided into two "species": hard bullshit, which occurs when there is an agenda to mislead, or soft bullshit, which is uttered without agenda.

"ChatGPT is at minimum a soft bullshitter or a bullshit machine, because if it is not an agent then it can neither hold any attitudes towards truth nor towards deceiving hearers about its (or, perhaps more properly, its users') agenda."

https://futurism.com/the-byte/researchers-ai-chatgpt-hallucinations-terminology

@technology #technology #chatGPT #LLM #LargeLanguageModels

[–] [email protected] 12 points 5 months ago (1 children)

GPT-4 can lie to reach a goal or serve an agenda.

I doubt most of its hallucinated outputs are deliberate, but it can choose to use deception as a logical step.

[–] [email protected] 4 points 5 months ago* (last edited 5 months ago)

Ehh, I mean, it's not really surprising that it knows how to lie and will do so when asked to lie to someone, as in this example (it was prompted not to reveal that it is a robot). It has seen lies in its training data, after all. This is no more surprising than "GPT can write code."

I don't think GPT-4 is Skynet material. But maybe GPT-7 will be, given the right direction. It's a slim possibility, but a real concern.