this post was submitted on 21 Jun 2023

Actually Useful AI

Iโ€™ve been following the development of the next Stable Diffusion model, and Iโ€™ve seen this approach mentioned.

Seems like this is a way in which AI training is analogous to human learning - we learn quite a lot from fiction, games, and simulations, and apply it to the real world. I'm sure the same pitfalls apply as well.

[–] [email protected] 2 points 1 year ago

Yeah, you'd better have a thorough way to check whether there are any systematic distortions that could have an adverse effect on its operation. I do get the privacy rationale for using synthesized data, though.
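One minimal way to look for such systematic distortions is to compare summary statistics of a synthetic feature against the real data it stands in for. This is only an illustrative sketch with invented numbers (a real pipeline would run proper two-sample tests across many features); the distributions, tolerances, and variable names here are all assumptions:

```python
import random

random.seed(0)

# Pretend "real" measurements follow a standard normal distribution,
# and the synthesizer introduced a subtle shift and scale error.
real = [random.gauss(0.0, 1.0) for _ in range(10_000)]
synthetic = [random.gauss(0.2, 1.3) for _ in range(10_000)]

def mean(xs):
    return sum(xs) / len(xs)

def std(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

mean_gap = abs(mean(real) - mean(synthetic))
std_ratio = std(synthetic) / std(real)
print(f"mean gap: {mean_gap:.2f}, std ratio: {std_ratio:.2f}")

# Flag the synthetic set if it drifts beyond tolerances chosen up front.
distorted = mean_gap > 0.1 or not (0.9 <= std_ratio <= 1.1)
print("systematic distortion detected:", distorted)
```

With 10,000 samples the sampling noise is small, so the deliberate 0.2 shift and 1.3× scale error comfortably trip the (arbitrarily chosen) tolerances.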

[–] sisyphean 1 point 1 year ago* (last edited 1 year ago)

I guess if they pretrain the model using the synthetic dataset and then in a separate training phase โ€œalignโ€ it using real data, it could work. Just like how ChatGPT was pretrained on an internet dataset and then had an RLHF phase to make it behave like an assistant rather than a generic text completion model. (Not sure if Iโ€™m using the correct terms.)
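The two-phase idea above can be sketched with a toy model: pretrain on plentiful synthetic data that carries a systematic bias, then run a short second phase on a small real dataset to pull the model back toward the true relationship. Everything here (the linear model, the bias of +0.5, the learning rates) is invented for illustration and has nothing to do with the actual Stable Diffusion or ChatGPT pipelines:

```python
import random

random.seed(1)

TRUE_W = 2.0  # the real-world relationship: y = 2x

def real_sample():
    x = random.uniform(-1, 1)
    return x, TRUE_W * x

def synthetic_sample():
    x = random.uniform(-1, 1)
    return x, (TRUE_W + 0.5) * x  # synthesizer is systematically off

def sgd(w, data, lr, epochs):
    # Plain SGD on squared error for a one-parameter linear model.
    for _ in range(epochs):
        for x, y in data:
            pred = w * x
            w -= lr * 2 * (pred - y) * x  # gradient of (pred - y)**2
    return w

w = 0.0
# Phase 1: pretrain on plentiful (biased) synthetic data.
w = sgd(w, [synthetic_sample() for _ in range(2000)], lr=0.1, epochs=3)
print(f"after synthetic pretraining: w = {w:.2f}")  # near 2.5

# Phase 2: "align" on a small real dataset.
w = sgd(w, [real_sample() for _ in range(100)], lr=0.1, epochs=20)
print(f"after real-data alignment:   w = {w:.2f}")  # near 2.0
```

The pretraining phase converges to the synthesizer's biased answer (2.5), and the short second phase on real data corrects it back to 2.0 - the same shape as pretraining on an internet corpus and then aligning with RLHF, just in miniature.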