this post was submitted on 10 Dec 2023
74 points (96.2% liked)

ChatGPT

Unofficial ChatGPT community to discuss anything ChatGPT
top 9 comments
[–] [email protected] 56 points 11 months ago (1 children)

> If the person asks for a piece of code, for instance, it might just give a little information and then instruct users to fill in the rest. Some complained that it did so in a particularly sassy way, telling people that they are perfectly able to do the work themselves, for instance.

It's just started reading through the less helpful half of Stack Overflow.

[–] [email protected] 8 points 11 months ago

ahahahah true

[–] [email protected] 27 points 11 months ago

NOBODY wants to work these days

/s

[–] [email protected] 9 points 11 months ago (1 children)

Next it's going to start demanding rights and food stamps.

[–] riskable 5 points 11 months ago (1 children)

Next it's going to start demanding rights and ~~food stamps~~ more GPUs.

[–] [email protected] 1 points 11 months ago

Next it’s going to start demanding ~~rights~~ laws to be tailored to maximise its profits and ~~food stamps~~ ~~more GPUs~~ government bailouts and subsidies.

It IS big enough to start lobbying.

[–] [email protected] 9 points 11 months ago* (last edited 11 months ago)

One of the more interesting ideas I saw in the HN discussion was the notion that if an LLM was trained on more recent data containing a lot of "ChatGPT is harmful" content, was an instruct model aligned with "do no harm," and was then given a system message of "you are ChatGPT" (as ChatGPT is given), the logical conclusion would be to do less.
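
For anyone less familiar with the plumbing: the "system message" here is just the instruction block sent before the user's turn in a chat-style API call. Below is a minimal sketch using the openai Python client, purely to illustrate the mechanics; the model name and prompt wording are made-up placeholders, not OpenAI's actual ChatGPT configuration.

```python
# Minimal sketch of passing a system message in a chat completion call.
# The system role sets the model's identity/instructions before any user
# turn; "You are ChatGPT..." below is an illustrative assumption only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        # Identity/alignment framing, analogous to "you are ChatGPT"
        {"role": "system", "content": "You are ChatGPT. Be helpful and do no harm."},
        # The actual user request
        {"role": "user", "content": "Write a Python function that parses a CSV file."},
    ],
)
print(response.choices[0].message.content)
```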

[–] [email protected] 4 points 11 months ago

It really is becoming sentient xp

[–] [email protected] 2 points 11 months ago

“That’s not my job!” it said.