I love it when it tells you it can't do something and then does it anyway.
Actually Useful AI
Welcome! 🤖
Our community focuses on programming-oriented, hype-free discussion of Artificial Intelligence (AI) topics. We aim to curate content that truly contributes to the understanding and practical application of AI, making it, as the name suggests, "actually useful" for developers and enthusiasts alike.
Be an active member! 🔔
We highly value participation in our community. Whether it's asking questions, sharing insights, or sparking new discussions, your engagement helps us all grow.
What can I post? 📝
In general, anything related to AI is acceptable. However, we encourage you to strive for high-quality content.
What is not allowed? 🚫
- 🔊 Sensationalism: "How I made $1000 in 30 minutes using ChatGPT - the answer will surprise you!"
- ♻️ Recycled Content: "Ultimate ChatGPT Prompting Guide" that is the 10,000th variation on "As a (role), explain (thing) in (style)"
- 🚮 Blogspam: Anything the mods consider crypto/AI bro success porn sigma grindset blogspam
General Rules 📜
Members are expected to engage in on-topic discussions, and exhibit mature, respectful behavior. Those who fail to uphold these standards may find their posts or comments removed, with repeat offenders potentially facing a permanent ban.
While we appreciate focus, a little humor and off-topic banter, when tasteful and relevant, can also add flavor to our discussions.
Related Communities 🌐
General
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
Chat
Image
Open Source
Please message @[email protected] if you would like us to add a community to this list.
Icon base by Lord Berandas under CC BY 3.0 with modifications to add a gradient
Or when it tells you that it can do something it actually can't, and it hallucinates like crazy. In the early days of ChatGPT I asked it to summarize an article at a link, and it gave me a very believable but completely false summary based on the words in the URL.
This was the first time I saw wild hallucination. It was astounding.
It's even better when you ask it to write code for you: it generates a decent-looking block, but upon closer inspection it imports a nonexistent library that just happens to do exactly what you were looking for.
That's the best sort of hallucination, because it gets your hopes up.
Yes, for a moment you think “oh, there’s such a convenient API for this” and then you realize…
But we programmers can at least compile/run the code and find out if it’s wrong (most of the time). It is much harder in other fields.
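As a quick illustration of that "just try to run it" check, here is a minimal Python sketch that asks whether an imported module can actually be found before trusting the generated code. The name `magic_summarizer` is a made-up stand-in for whatever nonexistent library the model hallucinated, not a real package.

```python
import importlib.util

def package_exists(name: str) -> bool:
    """Return True if the module can be located on the current Python path."""
    return importlib.util.find_spec(name) is not None

# "json" ships with Python; "magic_summarizer" is a hypothetical hallucinated import.
for module in ("json", "magic_summarizer"):
    status = "found" if package_exists(module) else "NOT found (possibly hallucinated)"
    print(f"{module}: {status}")
```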
I've not played with it much, but does it always describe the image first like that? I've been trying to think about how the image input actually works. My personal suspicion is that it uses an off-the-shelf visual understanding network (think reverse Stable Diffusion) to generate a description, then just uses GPT normally to complete the response. That could explain the disconnect here, where it can't erase what the visual model wrote, but it all falls apart if it doesn't always follow this pattern. Just thinking out loud here.
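To make that speculation concrete, here is a minimal sketch of a caption-then-prompt pipeline built from off-the-shelf parts (BLIP image captioning via Hugging Face transformers). This only illustrates the guess above; the model choice, file name, and prompt wording are assumptions, and it is not how the real system's image input is actually wired up.

```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Off-the-shelf captioning model (the "visual understanding network" in the theory above).
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("photo.jpg").convert("RGB")  # hypothetical input file
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
caption = processor.decode(out[0], skip_special_tokens=True)

# The caption would then be prepended to the user's question and handled by a plain
# text model -- which is why the description "comes first" and can't be erased later.
prompt = f"Image description: {caption}\n\nUser question: What is unusual about this picture?"
print(prompt)
```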
Unfortunately I don’t yet have access to it so I can’t check if the description always comes first. But your theory sounds interesting, I hope we’ll be able to find out more soon.
They need to make CAPTCHAs better or implement proof of work (PoW). Telling your AI not to solve CAPTCHAs is stupid and makes it dumber at unrelated tasks, just like all the other attempts at censoring these models.
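For anyone wondering what "implement PoW" would mean in practice, here is a minimal hashcash-style sketch: the client must find a nonce such that the hash of the challenge plus the nonce has a given number of leading zero bits, which is cheap to verify but expensive to produce at scale. The challenge string and difficulty below are illustrative assumptions, not a description of any deployed scheme.

```python
import hashlib
import itertools

def solve(challenge: str, difficulty_bits: int = 18) -> int:
    """Brute-force a nonce so SHA-256(challenge:nonce) has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(challenge: str, nonce: int, difficulty_bits: int = 18) -> bool:
    """Single hash check on the server side -- far cheaper than solving."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = solve("signup-request-12345")  # hypothetical per-request challenge
print(nonce, verify("signup-request-12345", nonce))
```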
Does one need the app to upload an image? I use it in a web browser and don't see any option to upload an image.
It’s still in preview. I can’t access it either; only a select few on Twitter have it, and they post about it all the time and make me jealous :)