this post was submitted on 21 Jun 2023

Actually Useful AI

Quote:

In this work, we introduce TinyStories, a synthetic dataset of short stories that only contain words that typical 3 to 4-year-olds usually understand, generated by GPT-3.5 and GPT-4. We show that TinyStories can be used to train and evaluate LMs that are much smaller than the state-of-the-art models (below 10 million total parameters), or have much simpler architectures (with only one transformer block), yet still produce fluent and consistent stories of several paragraphs that are diverse, have almost perfect grammar, and demonstrate reasoning capabilities.


top 2 comments
[–] sisyphean 2 points 1 year ago (last edited 1 year ago)

Has anyone else tried these models? I find them very impressive. Here is a completion I got from the 1M one (prompt in bold):

Once upon a time, there was a little girl called Anne. She was three years old and loved to play outside. One day, Anne was playing in the garden when she saw a big, shiny object. She wanted to pick it up, but it was too high up.

This is surprisingly coherent coming from a model with only 1 million parameters (GPT-3.5 has 175 billion). Unfortunately, I couldn't generate more text after this ("No text was generated"). I'm not really familiar with Hugging Face or how these models work but it would be interesting to experiment with it more.
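For anyone who wants to experiment further, here is a minimal sketch of running one of these models locally with the `transformers` library, assuming the checkpoints are the `roneneldan/TinyStories-1M` weights published on the Hugging Face hub (the exact model the commenter used isn't stated in the thread). Requesting a fixed number of new tokens also avoids the "No text was generated" issue that can occur with the hosted inference widget:

```python
# Sketch: sampling a completion from the ~1M-parameter TinyStories model.
# Assumes the `roneneldan/TinyStories-1M` checkpoint on the Hugging Face hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "roneneldan/TinyStories-1M"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Once upon a time, there was a little girl called Anne."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=60,   # explicitly ask for more tokens
    do_sample=True,
    top_p=0.9,
)
# The decoded output includes the prompt followed by the continuation.
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)
```

The model is small enough (a few megabytes) that this runs comfortably on a CPU.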

[–] [email protected] 2 points 1 year ago (last edited 1 year ago)

I've had a play with these models and the dataset.

  1. They're under-trained; you can squeeze about 10% more performance out of them.
  2. They're trained on the GPT-3.5-generated dataset, and there's a GPT-4-generated dataset available on Hugging Face.
  3. The GPT-4 dataset (I haven't looked at the GPT-3.5 one) has random bad Unicode, misspellings, missing spaces, etc.
  4. Because of 3, the tokenization isn't great.

Given all that, retraining on a cleaned dataset may give even more impressive results.
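A first pass at the kind of cleanup point 3 describes could be sketched like this, using only the standard library (illustrative only; the hypothetical `clean_story` helper below is not from the thread, and the real dataset may need more targeted fixes than Unicode normalization and whitespace collapsing):

```python
import re
import unicodedata

def clean_story(text: str) -> str:
    """Normalize a story before re-tokenizing (hypothetical cleanup pass)."""
    # NFKC folds compatibility characters: non-breaking spaces become
    # plain spaces, ellipsis characters become "...", etc.
    text = unicodedata.normalize("NFKC", text)
    # Drop non-printable/control characters (bad Unicode), keeping newlines/tabs.
    text = "".join(ch for ch in text if ch.isprintable() or ch in "\n\t")
    # Collapse runs of spaces/tabs that would produce odd tokens.
    text = re.sub(r"[ \t]+", " ", text)
    return text.strip()

print(clean_story("Once  upon\u00a0a time\u2026"))  # → Once upon a time...
```

Misspellings and missing spaces between words are harder; they'd likely need a spell-checking pass rather than a character-level transform like this.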
