this post was submitted on 23 Jun 2023
24 points (100.0% liked)

Actually Useful AI

Link to original tweet:

https://twitter.com/sayashk/status/1671576723580936193?s=46&t=OEG0fcSTxko2ppiL47BW1Q

Screenshot:

Transcript:

I'd heard that GPT-4's image analysis feature wasn't available to the public because it could be used to break Captcha.

Turns out it's true: The new Bing can break captcha, despite saying it won't: (image)

[email protected] 5 points 1 year ago

I love when it tells you it can't do something and then does it anyway.

sisyphean 5 points 1 year ago (last edited 1 year ago)

Or when it tells you it can do something it actually can't, and then hallucinates like crazy. In the early days of ChatGPT, I asked it to summarize an article from a link, and it gave me a very believable but completely false summary based only on the words in the URL.

This was the first time I saw wild hallucination. It was astounding.

Phoenix 2 points 1 year ago

It's even better when you ask it to write code for you: it generates a decent-looking block, but on closer inspection it imports a nonexistent library that just happens to do exactly what you were looking for.

That's the best sort of hallucination, because it gets your hopes up.

sisyphean 1 point 1 year ago

Yes, for a moment you think “oh, there’s such a convenient API for this” and then you realize…

But we programmers can at least compile/run the code and find out if it’s wrong (most of the time). It is much harder in other fields.
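
A minimal sketch of that "compile/run it and find out" check, not taken from the thread: the package name `magically_convenient_api` is invented for illustration and is assumed not to exist, so simply running the generated code exposes the hallucinated dependency immediately.

```python
# Sketch of the failure mode described above: an LLM-style snippet that
# imports a library which does not exist. "magically_convenient_api" is an
# invented name; assuming no package by that name is installed, running this
# raises ModuleNotFoundError the moment the import is attempted.

try:
    import magically_convenient_api  # hallucinated dependency
except ModuleNotFoundError as err:
    # The same problem would also surface at install time, since pip would
    # fail to find a package with this name.
    print(f"Hallucinated import caught at runtime: {err}")
```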