[–] [email protected] 24 points 4 months ago* (last edited 4 months ago) (3 children)

A few years ago, people assumed that these AIs would continue to get better every year. It seems we're already hitting some limits, and improving the models keeps getting harder and harder. It's like the line-width limits we have in CPU design.

[–] [email protected] 11 points 4 months ago (1 children)

I think that hypothesis still holds, as it has always assumed training data of sufficient quality. This study is more saying that the places we've traditionally harvested training data from are beginning to be polluted by low-quality data.

[–] [email protected] 20 points 4 months ago (1 children)

It's almost like we need some kind of flag on AI-generated content to prevent it from ruining things.

[–] [email protected] 1 points 4 months ago (1 children)

If that were implemented, it would help both AI devs and ordinary people online.

[–] [email protected] 2 points 4 months ago* (last edited 4 months ago)

File it under "too good to happen". Most writing jobs are proofreading AI-generated shit these days. We'll need to wait until there's real money in writing scripts to de-pollute content.

[–] [email protected] 2 points 4 months ago* (last edited 4 months ago)

No, not really. The improvement gets less noticeable as it approaches the limit, but I'd say the speed at which it improves is still about the same, especially for smaller models and context window size. There are now models comparable to ChatGPT, or maybe even GPT-4 (I don't remember which), with a 128k-token context window that you can run on a GPU with 16 GB of VRAM. 128k tokens is around 90k words, I think. That's more than four Bee Movie scripts, and it can "comprehend" all of that at once.
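
A rough back-of-the-envelope sketch of those numbers, if you want to sanity-check them (the words-per-token ratio, the script length, and the 13B example model are all ballpark assumptions, not figures from this thread):

```python
# Back-of-the-envelope check of the claims above.
# All constants here are rough assumptions, not measured values.

WORDS_PER_TOKEN = 0.7            # ballpark ratio for English text tokenizers
BEE_MOVIE_SCRIPT_WORDS = 20_000  # rough guess at the script's word count

context_tokens = 128_000
context_words = context_tokens * WORDS_PER_TOKEN

print(f"{context_tokens:,} tokens ~= {context_words:,.0f} words")
print(f"~= {context_words / BEE_MOVIE_SCRIPT_WORDS:.1f} Bee Movie scripts")

# Why such a model can fit in 16 GB of VRAM: 4-bit quantization stores
# each weight in half a byte, so a hypothetical 13B-parameter model needs
# about 13e9 * 0.5 bytes ~= 6.5 GB for weights, leaving headroom for the
# KV cache that a long context window requires.
weights_gb = 13e9 * 0.5 / 1e9
print(f"13B params at 4-bit ~= {weights_gb:.1f} GB of weights")
```

With those guesses you land at roughly 90k words and a bit over four scripts, which matches the figures above.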

[–] [email protected] 2 points 4 months ago

No, they're still getting better; it's mostly that they now fit into a bigger context of other discoveries.