this post was submitted on 23 Aug 2024
195 points (100.0% liked)

TechTakes

[–] [email protected] 7 points 3 months ago (1 children)

You're dodging the question. How do you evaluate if it's good at predicting words? How do you evaluate if a change made it better or worse?

[–] MagicShel 1 points 3 months ago* (last edited 3 months ago)

So the censorship shows up as ChatGPT making everything sound like a lecture from HR, which makes it less useful at predicting text in non-corporate settings.

One of the things I've built is a Discord bot for running roleplaying games. The model is pretty good at generating text, but when you try to have it play an evil character or narrate combat, it becomes very difficult. The output is worse with the censorship than without it, because a monologuing villain isn't going to make a point of respecting the feelings of others.

There are apparently tools for analyzing the output and ranking the quality, but that's above my pay grade. I'm just going off of very clear personal experience.
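For what it's worth, the usual automated answer to "how good is it at predicting words" is perplexity: exponentiate the average negative log-probability the model assigned to each actual next token. A minimal sketch in plain Python, using made-up probabilities rather than output from any real model:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the mean negative log-probability the model
    assigned to each true next token. Lower means better prediction."""
    if not token_probs:
        raise ValueError("need at least one token probability")
    avg_neg_log = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_neg_log)

# Hypothetical probabilities a model assigned to the true next tokens.
confident = [0.9, 0.8, 0.95, 0.7]   # model usually expected the right word
uncertain = [0.2, 0.1, 0.3, 0.15]   # model was often surprised

print(perplexity(confident))
print(perplexity(uncertain))
```

In practice you'd pull the per-token log-probabilities from the model itself and compare two versions on the same text; whichever scores lower perplexity is better at the raw word-prediction task, which is a separate question from whether its tone suits your use case.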

Sorry, I thought at first this was a continuation of another thread, so it's a little out of context, but maybe it answers the gist.