this post was submitted on 28 Oct 2024
Opensource
A community for discussion about open source software! Ask questions, share knowledge, share news, or post interesting stuff related to it!
Garbage. What this says to me is that they're going to allow companies that create models trained on data that would be illegal for you and me to scrape and regurgitate to keep that data to themselves, as long as they "provide enough information" for someone else (someone who lacks the resources or legal impunity those companies have) to theoretically re-steal the data. Which, you know, means that the models won't be reproducible by any reasonable standard, and can't actually be called open source.
But the OSI is just a handful of companies in a trenchcoat, so I'm not surprised by what they would call "open".
The actual part of the license text being questioned:
(The rest of the license goes on to talk about weights, etc.)
I agree with you somewhat. I'm glad that each source does need to be listed and described. I'm less thrilled to see "unshareable" data and data that costs money in there, since I think these have the potential to effectively make a model impossible for a "skilled person" to retrain.
It's a cheap way to make an AI license without making all the training data open source (and dodging the legalities of that).
Thanks for sharing the actual license text.
To me, this stinks of companies knowing that if they were actually required to reproduce the data, they'd get hit with copyright infringement or other IP-related litigation. Whereas if they can just be trusted to very honestly list their sources, they can omit the ones they weren't authorized to steal and reproduce content from, and get away with it.
I think that, in practice, this means that the industry standard will be to lie and omit the incriminating data sources, and when someone tries to reproduce the model they won't actually be able to, but they also won't be able to easily prove one way or another if data was withheld.
Really, what should (but won't) happen, is that we should fix our broken IP laws and companies should be held to account for when they engage in behavior that would be prosecuted as piracy or Computer Fraud and Abuse if you or I did it.
AI is pretty much the epitome of companies getting to act with impunity in the eyes of the law and exerting that power over everyone else, and it's annoying to see it get a blessing from an "open source" organization.
Right. The other thing I considered is that you could just create a company and "buy" the data from it for a ridiculous amount of money, and then you'd have less of a requirement to detail the data. Similarly, you could deem the data unshareable and fudge the provenance.
Like locks, it will only keep honest people honest.