This post was submitted on 02 Aug 2023
19 points (100.0% liked)

Actually Useful AI

1992 readers
1 user here now

Welcome! 🤖

Our community focuses on programming-oriented, hype-free discussion of Artificial Intelligence (AI) topics. We aim to curate content that truly contributes to the understanding and practical application of AI, making it, as the name suggests, "actually useful" for developers and enthusiasts alike.

Be an active member! 🔔

We highly value participation in our community. Whether it's asking questions, sharing insights, or sparking new discussions, your engagement helps us all grow.

What can I post? 📝

In general, anything related to AI is acceptable. However, we encourage you to strive for high-quality content.

What is not allowed? 🚫

General Rules 📜

Members are expected to engage in on-topic discussions and exhibit mature, respectful behavior. Those who fail to uphold these standards may find their posts or comments removed, and repeat offenders may face a permanent ban.

While we appreciate focus, a little humor and off-topic banter, when tasteful and relevant, can also add flavor to our discussions.

Related Communities 🌐

General

Chat

Image

Open Source

Please message @[email protected] if you would like us to add a community to this list.

Icon base by Lord Berandas under CC BY 3.0 with modifications to add a gradient

founded 1 year ago
top 3 comments
canpolat 4 points 1 year ago

GPT-4 was able to do this even though the training data for the version tested by the authors was entirely text-based. That is, there were no images in its training set. But GPT-4 apparently learned to reason about the shape of a unicorn’s body after training on a huge amount of written text.

It's as if they can, in some way or other, "see".

silas 3 points 1 year ago

That's absolutely incredible. I don't think the general public understands the effect AI will have on our society in the next 15 years.

[email protected] 4 points 1 year ago

This is the best summary I could come up with:

Machine learning researchers had been experimenting with large language models (LLMs) for a few years by that point, but the general public had not been paying close attention and didn’t realize how powerful they had become.

If you know anything about this subject, you’ve probably heard that LLMs are trained to “predict the next word” and that they require huge amounts of text to do this.
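To make "predict the next word" concrete, here's a minimal sketch (my illustration, not the article's code): a bigram model that counts which word follows which in a tiny corpus and then predicts the most frequent continuation. A real LLM replaces the counting with a neural network over subword tokens, but the training objective is the same idea at enormous scale.

```python
from collections import Counter, defaultdict

# Tiny corpus; a real LLM trains on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def predict_next(word):
    # Return the continuation seen most often during "training".
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat' (seen twice after 'the', vs. once each for 'mat' and 'fish')
```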

Conventional software is created by human programmers, who give computers explicit, step-by-step instructions.

By contrast, ChatGPT is built on a neural network that was trained using billions of words of ordinary language.
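A side-by-side sketch may help here (again my illustration, not from the article): the first function encodes a rule the programmer already knows, while the second starts from meaningless parameters and adjusts them until its outputs match example data. That adjustment process is the essence of training, just with two parameters here instead of billions.

```python
# Conventional software: the programmer states the rule explicitly.
def to_fahrenheit(celsius):
    return celsius * 9 / 5 + 32

# Learned approach: fit parameters w and b from examples alone.
examples = [(0.0, 32.0), (100.0, 212.0), (37.0, 98.6)]
w, b = 0.0, 0.0
for _ in range(2000):
    for c, f in examples:
        x = c / 100.0              # scale the input for stable training
        err = (w * x + b) - f      # how wrong is the current guess?
        w -= 0.1 * err * x         # nudge both parameters toward the data
        b -= 0.1 * err

print(to_fahrenheit(20.0))       # 68.0, from the hand-written rule
print(w * (20.0 / 100.0) + b)    # ~68.0, learned purely from examples
```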

Finally, we’ll explain how these models are trained and explore why good performance requires such phenomenally large quantities of data.

I'm a bot and I'm open source!