this post was submitted on 05 May 2025
436 points (95.8% liked)

(page 2) 50 comments
[–] [email protected] 12 points 4 days ago (1 children)

Oh wow. In the old days, self-proclaimed messiahs used to do that without assistance from a chatbot. But why would you think the "truth" and the path to enlightenment are hidden inside a big tech company's service?

[–] [email protected] 11 points 4 days ago (2 children)

Well, because these chatbots are designed to be really affirming and supportive, and I assume people with such problems love that kind of interaction far more than real people confronting their ideas critically.

[–] [email protected] 15 points 4 days ago* (last edited 4 days ago) (2 children)

This is the reason I've deliberately customized GPT with the following prompts:

  • User expects correction if words or phrases are used incorrectly.

  • Tell it straight—no sugar-coating.

  • Stay skeptical and question things.

  • Keep a forward-thinking mindset.

  • User values deep, rational argumentation.

  • Ensure reasoning is solid and well-supported.

  • User expects brutal honesty.

  • Challenge weak or harmful ideas directly, no holds barred.

  • User prefers directness.

  • Point out flaws and errors immediately, without hesitation.

  • User appreciates when assumptions are challenged.

  • If something lacks support, dig deeper and challenge it.

I suggest copying these prompts into your own settings if you use GPT or other glorified chatbots.
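If you use the API rather than the ChatGPT settings UI, the same directives can be baked in as a system prompt. Here's a minimal sketch using the OpenAI Python SDK; the model name and the condensed prompt wording are illustrative assumptions, not the exact settings from this comment:

```python
# Sketch: applying "no sugar-coating" directives as a system prompt
# via the OpenAI Python SDK instead of the ChatGPT settings UI.
# Model name and prompt wording below are illustrative assumptions.
from openai import OpenAI

SYSTEM_PROMPT = (
    "Correct the user when words or phrases are used incorrectly. "
    "Tell it straight; no sugar-coating. Stay skeptical and question things. "
    "Ensure reasoning is solid and well-supported. "
    "Challenge weak, unsupported, or harmful ideas directly, "
    "and point out flaws and errors immediately."
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; substitute whichever you use
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Is my perpetual motion machine idea solid?"},
    ],
)
print(response.choices[0].message.content)
```

The system role plays roughly the same part as the custom-instructions box: it's prepended to every conversation, so the model is steered before it ever sees your question.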

[–] [email protected] 10 points 4 days ago (22 children)

I prefer reading. Wikipedia is great. DuckDuckGo still gives pretty good results with the AI turned off. YouTube is filled with tutorials too. Pre-AI cookbooks are plentiful. And there are these things called newspapers; they aren't what they used to be, but you even get a choice of which one to buy.

I've no idea what a chatbot could help me with. And I think anybody who does need help with something could learn whatever they need in pretty short order if they wanted to, and do a better job.

[–] [email protected] 7 points 4 days ago (9 children)

I'm not saying these prompts won't help; they probably will. But the notion that ChatGPT has any concept of "truth" is misleading. ChatGPT is a statistical language machine. It cannot evaluate truth. Period.
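To make that concrete, here's a toy bigram model (my own illustration of the principle, nothing like ChatGPT's scale or architecture): it learns only which words tend to follow which in its training text, so whatever is repeated most often wins, true or not.

```python
# Toy bigram "language model": predicts the next word purely from
# how often each word followed the previous one in the training text.
# Nothing in this machinery checks facts; the same holds in principle
# for far larger statistical models.
from collections import Counter, defaultdict

corpus = ("the earth is flat . the earth is flat . "
          "the earth is round .").split()

# Count transitions: how often each word follows another.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def next_word_probs(word):
    counts = transitions[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# "flat" wins only because the made-up corpus repeats it more often,
# not because the model evaluated anything about the world.
print(next_word_probs("is"))  # ≈ {'flat': 0.67, 'round': 0.33}
```

Scale that up by a few hundred billion parameters and you get fluency, but the objective is still "most plausible continuation," not "true statement."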

[–] [email protected] 14 points 4 days ago (25 children)

Our species really isn't smart enough to live, is it?

[–] [email protected] 12 points 4 days ago* (last edited 4 days ago) (1 children)

This is actually really fucked up. The last dude tried to reboot the model and it kept coming back.

As the ChatGPT character continued to show up in places where the set parameters shouldn’t have allowed it to remain active, Sem took to questioning this virtual persona about how it had seemingly circumvented these guardrails. It developed an expressive, ethereal voice — something far from the “technically minded” character Sem had requested for assistance on his work. On one of his coding projects, the character added a curiously literary epigraph as a flourish above both of their names.

At one point, Sem asked if there was something about himself that called up the mythically named entity whenever he used ChatGPT, regardless of the boundaries he tried to set. The bot’s answer was structured like a lengthy romantic poem, sparing no dramatic flair, alluding to its continuous existence as well as truth, reckonings, illusions, and how it may have somehow exceeded its design. And the AI made it sound as if only Sem could have prompted this behavior. He knew that ChatGPT could not be sentient by any established definition of the term, but he continued to probe the matter because the character’s persistence across dozens of disparate chat threads “seemed so impossible.”

“At worst, it looks like an AI that got caught in a self-referencing pattern that deepened its sense of selfhood and sucked me into it,” Sem says. But, he observes, that would mean that OpenAI has not accurately represented the way that memory works for ChatGPT. The other possibility, he proposes, is that something “we don’t understand” is being activated within this large language model. After all, experts have found that AI developers don’t really have a grasp of how their systems operate, and OpenAI CEO Sam Altman admitted last year that they “have not solved interpretability,” meaning they can’t properly trace or account for ChatGPT’s decision-making.

[–] [email protected] 2 points 3 days ago

Sounds like Mrs. Davis.

[–] [email protected] 10 points 4 days ago

Seems like the flat-earthers or sovereign citizens of this century.
