Copilot AI calls journalist a child abuser, Microsoft tries to launder responsibility
(pivot-to-ai.com)
You're dodging the question. How do you evaluate if it's good at predicting words? How do you evaluate if a change made it better or worse?
In practice, the censorship shows up as ChatGPT making everything sound like a lecture from HR. That makes it worse at predicting text in any non-corporate setting.
One of the things I've built is a Discord bot for running roleplaying games. It's pretty good at text in general, but it becomes very difficult when you try to have it play an evil character or narrate combat. The censorship makes the output worse than it would otherwise be, because a monologuing bad guy isn't going to stop and make a point of respecting the feelings of others.
There are apparently tools for analyzing the output and ranking the quality, but that's above my pay grade. I'm just going off of very clear personal experience.
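For what "ranking the quality" can mean concretely: one standard metric is perplexity, which measures how surprised a model is by held-out text (lower is better). The commenter doesn't name a specific tool, so this is just a minimal, self-contained sketch using a toy bigram model with add-one smoothing; real evaluations use the actual LLM's token probabilities, but the arithmetic is the same.

```python
import math
from collections import Counter

def train_bigram(tokens):
    """Build an add-one-smoothed bigram probability function from a token list."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    vocab_size = len(set(tokens))

    def prob(prev, word):
        # Add-one smoothing so unseen bigrams get a small nonzero probability.
        return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)

    return prob

def perplexity(prob, tokens):
    """exp of the average negative log-probability per predicted token."""
    log_sum = sum(math.log(prob(p, w)) for p, w in zip(tokens, tokens[1:]))
    return math.exp(-log_sum / (len(tokens) - 1))

model = train_bigram("the cat sat on the mat the cat sat on the rug".split())
# Text resembling the training data scores lower (better) than scrambled text.
print(perplexity(model, "the cat sat on the mat".split()))
print(perplexity(model, "mat the on sat cat the".split()))
```

The point of the comparison: a change to the model (or its censorship layer) that drives up perplexity on the kind of text you care about has, by this measure, made it worse at predicting that text.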
Sorry, I thought at first this was a continuation of another thread, so it's a little out of context, but maybe it answers the gist.