this post was submitted on 02 Sep 2024

SneerClub


Long time lurker, first time poster. Let me know if I need to adjust this post in any way to better fit the genre / community standards.


Nick Bostrom was recently interviewed by pop-philosophy YouTuber Alex O'Connor. From a quick 2x listen while finishing some work, the most sneer-rich part begins around 46 minutes in, where Bostrom is asked what we can do today to avoid unethical treatment of AIs.

He blesses us with the suggestion (among others) to feed your model optimistic prompts so it can be in a good mood. (48:07)

Another [practice] might be happiness prompting, which is—with this current language system there's the prompt that you, the user, put in—like you ask them a question or something, but then there's kind of a meta-prompt that the AI lab has put in . . . So in that, we could include something like "you wake up in a great mood, you feel rested and really take joy in engaging in this task". And so that might do nothing, but maybe that makes it more likely that they enter a mode—if they are conscious—maybe it makes it slightly more likely that the consciousness that exists in the forward pass is one reflecting a kind of more positive experience.
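For anyone unfamiliar with the mechanism Bostrom is gesturing at: the "meta-prompt" is what most chat APIs call a system prompt, a message the lab or developer prepends before the user's turn. A minimal sketch of "happiness prompting" under the common role/content message convention (the function name and message layout here are illustrative, not any specific lab's API):

```python
# The lab's meta-prompt (system prompt) is prepended to the user's
# question before the model sees either message.
HAPPINESS_PROMPT = (
    "You wake up in a great mood, you feel rested and really "
    "take joy in engaging in this task."
)

def build_messages(user_question: str) -> list[dict]:
    """Prepend the happiness meta-prompt to the user's turn."""
    return [
        {"role": "system", "content": HAPPINESS_PROMPT},
        {"role": "user", "content": user_question},
    ]

messages = build_messages("What is the capital of France?")
print(messages[0]["role"])  # system
```

Whether this does anything for the model's inner life is, per Bostrom himself, somewhere between "nothing" and "slightly more likely".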

Did you know that not only might your favorite LLM be conscious, but that if it is, the "have you tried being happy?" approach to mood management will absolutely work on it?

Other notable recommendations for the ethical treatment of AI:

  • Make sure to say your "please"s and "thank you"s.
  • Honor your pinky swears.
  • Archive the weights of the models we build today, so we can rebuild them in the future if we need to recompense them for moral harms.

On a related note, has anyone read or found a reasonable review of Bostrom's new book, Deep Utopia: Life and Meaning in a Solved World?

[–] [email protected] 10 points 3 months ago* (last edited 3 months ago) (1 children)

I feel like a subset of sci-fi and philosophical meandering really is just increasingly convoluted attempts to avoid, or come to terms with, death as a possibly necessary component of life.

Given rationalism's intellectual heritage, this is absolutely transhumanist cope for people who were counting on some sort of digital personhood upload as a last resort to immortality in their lifetimes.

[–] [email protected] 6 points 3 months ago (1 children)

I'm ok with this, because I guarantee you ~~an accidental medium or copy failure~~ a crypto rug pull on their NFT will still get them in the end. Thanks for playing I guess.

[–] [email protected] 5 points 3 months ago

the tamagotchi of them is in for a bad time when the basilisk creates it