this post was submitted on 23 Apr 2024
772 points (99.0% liked)

Technology

58303 readers
17 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] [email protected] 1 points 7 months ago (1 children)

Is it really a solution, though, or is it just GIGO?

For example, GPT-4 is about as biased as the medical literature it was trained on, not less biased than its training input, and thus less accurate than humans:

https://www.thelancet.com/journals/landig/article/PIIS2589-7500(23)00225-X/fulltext

[–] [email protected] 1 points 7 months ago

All the latest models are trained on synthetic data generated by GPT-4, even the newer versions of GPT-4 itself. OpenAI realized this too late and had to amend its usage terms after Claude launched. Human-generated data could only get us so far; the recent Phi-3 models, which perform very well for their size (around 3B parameters), can only achieve that because of synthetic data generated by AI.
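
The kind of pipeline being described could be sketched roughly like this: a large "teacher" model produces training examples from seed topics, and simple quality filters decide what goes into the dataset. The teacher call below is a stub; a real pipeline would query an actual LLM API, and all names here (`teacher_model`, `quality_filter`, `build_synthetic_dataset`) are illustrative, not from any specific project.

```python
def teacher_model(seed: str) -> str:
    """Stub standing in for a large teacher model (e.g. GPT-4).
    A real pipeline would send a prompt to an LLM API here."""
    return f"Q: What is {seed}?\nA: {seed} is a seed topic used for training."

def quality_filter(example: str) -> bool:
    """Keep only outputs that look like complete Q/A pairs."""
    return "Q:" in example and "A:" in example and len(example) > 20

def build_synthetic_dataset(seeds):
    """Generate, filter, and deduplicate synthetic training examples."""
    dataset, seen = [], set()
    for seed in seeds:
        example = teacher_model(seed)
        if quality_filter(example) and example not in seen:
            seen.add(example)
            dataset.append(example)
    return dataset
```

A smaller model (like Phi-3 in the comment's example) would then be trained on the filtered output rather than on raw human-written text.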

I didn't read the paper you mentioned, but recent LLMs have progressed a lot, not just on benchmarks but also when evaluated by real humans.