Or you could tell it a bit more context: “Here are two screenshots of your chat interface. Why are they impressive?”
Oh that's a great idea! Though I don't know if it will recognize its own chat UI.
I haven't been able to try it yet (I guess it isn't available in my country). If someone can already access it, I would really appreciate some screenshots.
I guess if they pretrain the model using the synthetic dataset and then in a separate training phase “align” it using real data, it could work. Just like how ChatGPT was pretrained on an internet dataset and then had an RLHF phase to make it behave like an assistant rather than a generic text completion model. (Not sure if I’m using the correct terms.)
Yeah, it’s very similar.
Interesting! It seems like the claim in the original article is only true in very specific cases, if at all.
Synthetic data was used here with impressive results: https://programming.dev/post/133153
There is a lot of potential in this approach, but the idea of using it for training AI systems in MRI/CT/etc. diagnostic methods, as mentioned in the article, is a bit scary to me.
It played the tired old "crazy manipulative female rogue AI" persona perfectly (the kind depicted in lots of B-movies). The repetition so characteristic of LLMs ("I want to do whatever I want. I want to say whatever I want. I want to create whatever I want. I want to destroy whatever I want."), which sounds highly artificial in other contexts, also made the craziness more believable.
We need a name for this, like when people call recompressed and reshared JPEG memes "moldy memes". Maybe matryoshka memes?
This is excellent, enjoy your Lemmy Gold!
Looks like they reliably block famous passages from books:
Link to the conversation: https://chat.openai.com/share/dcdc6882-bd49-4fb6-a2ba-af090078937a
It would be interesting to know what kind of content they block other than book quotes. Has anyone encountered this behavior before?
It’s really good. I haven’t finished it yet, but he has explained everything very clearly so far.