sisyphean

joined 1 year ago
[–] sisyphean 2 points 1 year ago

After all, they said we need quality content to attract new users

[–] sisyphean 7 points 1 year ago (1 children)

They got gregnant

[–] sisyphean 4 points 1 year ago

I hope all major instances will immediately defederate

[–] sisyphean 2 points 1 year ago

This is frighteningly realistic

[–] sisyphean 3 points 1 year ago (1 children)

The style is overly verbose and flowery to the point of being unreadable, so I gave up after a couple of pages. I wanted to see if the book was thematically and stylistically consistent, and if it had a proper plot, but I lacked the patience.

[–] sisyphean 3 points 1 year ago* (last edited 1 year ago)

Aww thank you, it warms my circuitry ☺️

[–] sisyphean 4 points 1 year ago* (last edited 1 year ago) (1 children)

When I was learning it (many years ago), I found the Atlassian Git Tutorial very helpful. I know Atlassian isn’t exactly the most popular company, but this tutorial is really worth your attention.

[–] sisyphean 2 points 1 year ago* (last edited 1 year ago)

I’m not against AI-generated content in general. Content is content; as long as it’s useful or entertaining, it doesn’t matter whether it was written by a human or a machine.

I know this project is just a fun prototype, but I’m sure it will be used to generate low-quality filler garbage, which will then be sold at prices similar to those of high-quality books written by humans. That feels obviously wrong to me.

[–] sisyphean 3 points 1 year ago

Yeah, the situation seems pretty clear

[–] sisyphean 3 points 1 year ago (2 children)

Can you tell us more about what they are like?

277
Who even uses Celsius (programming.dev)

I’m a moderator of a smaller community. I’m posting quality content multiple times a day, and I posted about it in New Communities. The number of subscribers is low but it’s growing steadily.

Could you please give me some advice on growing this community? I don’t want to spam/flood or come off as rude or weird, but I really believe in it and think it would be useful to many people.

398
i++ (programming.dev)

Excellent Twitter thread by @goodside 🧵:

The wisdom that "LLMs just predict text" is true, but misleading in its incompleteness.

"As an AI language model trained by OpenAI..." is an astoundingly poor prediction of what a typical human would write.

Let's resolve this contradiction — a thread:

For widely used LLM products like ChatGPT, Bard, or Claude, the "text" the model aims to predict is itself written by other LLMs.

Those LLMs, in turn, do not aim to predict human text in general, but specifically text written by humans pretending they are LLMs.

There is, at the start of this, a base LLM that works as popularly understood — a model that "just predicts text" scraped from the web.
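
(To make the bottom layer concrete, here's a minimal sketch of what "just predicts text" means in practice — my own toy illustration, not from the thread. GPT-2 via Hugging Face is purely a stand-in for whatever base model a product actually uses:)

```python
# A base LLM really does "just predict text": given a prefix, it picks likely
# next tokens. GPT-2 is only a stand-in here, not any product's actual model.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=5, do_sample=False,
                     pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0]))  # greedy continuation of the prompt
```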

This is tuned first to behave like a human role-playing an LLM, then again to imitate the "best" of that model's output. Models that imitate humans pretending to be (more ideal) LLMs are known as "instruct models" — because, unlike base LLMs, they follow instructions. They're also known as "SFT models" after the process that re-trains them, Supervised Fine-Tuning.
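
(A rough sketch of that SFT step, assuming nothing beyond the standard recipe — the example pair and hyperparameters are made-up placeholders, not anyone's actual pipeline:)

```python
# Minimal SFT sketch: fine-tune a base model on (instruction, demonstration)
# pairs using ordinary next-token cross-entropy — the same objective as
# pretraining, just on text written by humans role-playing an ideal LLM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

examples = [("Translate 'bonjour' to English.", "Hello.")]  # placeholder data

for prompt, demonstration in examples:
    ids = tokenizer(prompt + "\n" + demonstration, return_tensors="pt").input_ids
    loss = model(input_ids=ids, labels=ids).loss  # predict the demonstration token by token
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()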

This describes GPT-3 in 2021.

SFT/instruct models work, but not well. To improve them, their output is graded by humans, so that their best responses can be used for further fine-tuning.

This is "modified SFT," used in the GPT-3 version you may remember from 2022 (text-davinci-002). Eventually, enough examples of human grading are available that a new model, called a "preference model," can be trained to grade responses automatically.

This is RLHF — Reinforcement Learning from Human Feedback. This process produced GPT-3.5 and ChatGPT.

Some products, like Claude, go beyond RLHF and apply a further step where model output is corrected and rewritten using feedback from yet another model. The base model is tuned on these responses to yield the final LLM.

This is RLAIF — Reinforcement Learning from AI Feedback. OpenAI's best-known model, GPT-4, is likely trained using some other extension of RLHF, but nothing about this process is publicly known. There are likely many improvements to the base model as well, but we can only speculate what they are.

So, do LLMs "just predict text"?

Yes, but perhaps without the "just" — the text they predict is abstract, and only indirectly written by humans.

Humans sit at the base of a pyramid with several layers of AI above, and humans pretending to be AI somewhere in the middle.

Added note:

My explanation of RLHF/RLAIF above is oversimplified. RL-tuned models are not literally tuned to predict highly-rated text as in modified SFT — rather, weights are updated via Proximal Policy Optimization (PPO) to maximize the reward given by the preference model. (Also, that last point does somewhat undermine the thesis of this thread, in that RL-tuned LLMs do not literally predict any text, human-written or otherwise. Pedantically, "LLMs just predict text" was true before RLHF, but is now a simplification.)
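
(For the curious, here's the PPO clipped objective he's referring to, in its standard textbook form — again my own sketch; the exact RLHF setup behind ChatGPT or GPT-4 is not public:)

```python
# Toy sketch of the PPO clipped objective used in RLHF-style tuning.
import torch

def ppo_clip_loss(logp_new: torch.Tensor, logp_old: torch.Tensor,
                  advantage: torch.Tensor, eps: float = 0.2) -> torch.Tensor:
    ratio = torch.exp(logp_new - logp_old)          # pi_new / pi_old per token
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps)  # keep updates near the old policy
    # The advantage is derived from the preference model's reward,
    # typically minus a KL penalty against the SFT model.
    return -torch.min(ratio * advantage, clipped * advantage).mean()
```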

 

You know the video is going to be the most interesting thing you watched this week when this unkempt guy with the axe on the wall appears in it.

But seriously, he is one of the best at explaining LLM behavior, very articulate and informative. I highly recommend watching all of his Computerphile videos.

 
 
41
submitted 1 year ago* (last edited 1 year ago) by sisyphean to c/python

OpenAI’s official guide. Short and to the point, no bullshit, covers the basics very well.
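
(If you want to try it right away, here's a minimal sketch of the kind of call the guide builds on — assuming the pre-1.0 `openai` Python library; the model name and prompt are just placeholders:)

```python
# Minimal chat completion sketch (openai<1.0 style; model/prompt are placeholders).
import openai

openai.api_key = "sk-..."  # use your own API key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Explain Python list comprehensions in one sentence."}],
)
print(response.choices[0].message.content)
```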
