this post was submitted on 26 Aug 2024

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

 

I ran an AI startup back in 2017, and this was a huge deal for us even then. I've seen no actual improvement on this problem since. The NYTimes is spot on, IMO.

nulluser | -3 points | 3 weeks ago (last edited 3 weeks ago)

Completely irrelevant. The title and the posted article are about unintentionally training LLM text-generation models on the prior output of other AI models. Not having enough training data for other types of models is a completely different problem, and not what the article is about.

Nobody is going to "trawl the web for new data to train their next models" (to quote the article) for a model trying to cure diseases.
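The feedback loop the comment is pointing at — each model generation trained on the previous generation's output — can be sketched with a toy simulation. The Gaussian setup, sample sizes, and generation count below are illustrative assumptions of mine, not anything from the article:

```python
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: "human" data, drawn from a standard normal.
data = rng.normal(loc=0.0, scale=1.0, size=20)

stds = []
for generation in range(300):
    # "Train" a model on the current corpus: fit a mean and std.
    mu, sigma = data.mean(), data.std()
    stds.append(sigma)
    # The next corpus is sampled entirely from the previous model's
    # output, mimicking a web that is increasingly AI-generated.
    data = rng.normal(loc=mu, scale=sigma, size=20)

print(f"std at gen 0: {stds[0]:.3f}, std at gen 299: {stds[-1]:.3f}")
```

Each refit loses a little variance (finite samples plus a biased estimator), so over generations the fitted distribution collapses toward a point — a toy version of the degradation that comes from models eating their own output.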