Agreed. AI is already rolling, and we can't stop it now. All we can do is make sure this technology benefits everyone, not just corporations.
We are coming to a reckoning not only over who gets to use AI but over how society is organized overall. We are getting to the point where even intellectual work can be automated, and however spotty it might be now, AI will only get better at it. For the many, many people who will see their jobs automated, the biggest concern is not whether they will be allowed to use ChatGPT; it's whether they will have any kind of livelihood.
We are used to the idea that automation frees people to work less strenuous, more satisfying jobs, but what are we being freed for if even artistic expression is taken out of people's hands? Rethinking how AI and automation benefit everyone needs to happen on a much larger scale.
Many things in life are a privilege for these groups. AI is no different.
I'm not sure what you're getting at with this. It will only be a privilege for these groups if we choose to artificially make it that way. And why would you want to do that?
Do you want to give AI exclusively to the rich? If so, why?
I think he was just stating a fact.
For something to be a fact, it needs to actually be true. AI is currently accessible to everyone.
I disagree. I can barely run a 13B-parameter model locally, much less a 175B-parameter model like GPT-3. Or GPT-4, whatever that model truly is. Or whatever behemoth of a model the NSA almost certainly has and just hasn't told anyone about. I'll eat my sock if the NSA doesn't have a monster LLM, along with a myriad of other special-purpose models, by now.
And even though the research has (mostly) been public so far, the resources needed to train these massive models are out of reach for all but the most privileged. We can train a GPT-2 or GPT-Neo if we're dedicated, but you and I aren't training an open version of GPT-4.
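To put rough numbers on that, here's a back-of-envelope sketch of the memory needed just to hold a model's weights. The bytes-per-parameter figures are common rules of thumb, not exact specs for any particular model:

```python
# Back-of-envelope weight-memory math; real usage adds activations,
# KV cache, and framework overhead on top of these figures.
def weight_memory_gib(params_billion: float, bytes_per_param: float) -> float:
    """Memory in GiB needed just to store the model weights."""
    return params_billion * 1e9 * bytes_per_param / 2**30

for name, params in [("13B", 13), ("175B (GPT-3 scale)", 175)]:
    for precision, bpp in [("fp16", 2), ("4-bit quantized", 0.5)]:
        print(f"{name} @ {precision}: ~{weight_memory_gib(params, bpp):.0f} GiB")
```

That works out to roughly 24 GiB for a 13B model in fp16 (hence "barely run it locally" on a consumer GPU) versus over 300 GiB for a GPT-3-scale model, before you even think about the vastly larger cost of training.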
AI is more than just ChatGPT.
When we talk about reinterpreting copyright law in a way that makes AI training essentially illegal for anything useful, it also restricts smaller and potentially more focused networks. They're discovering that smaller networks can perform very well (not at the level of GPT-4, but well enough to be useful) if they're trained in a specific way where the reasoning steps are spelled out in the training data, roughly like the sketch below.
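For illustration only, a record in that kind of reasoning-annotated dataset might look something like this; the field names and format here are made up, and real datasets vary:

```python
# Hypothetical record format: the response spells out intermediate
# reasoning steps instead of jumping straight to the final answer.
example_record = {
    "instruction": "A train travels 120 km in 2 hours. What is its average speed?",
    "response": (
        "Step 1: Average speed is distance divided by time.\n"
        "Step 2: 120 km / 2 h = 60 km/h.\n"
        "Answer: 60 km/h."
    ),
}
```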
Also, there are used Nvidia cards currently selling on Amazon for under $300 with 24 GB of VRAM and AI performance almost equal to a 3090, which puts group-of-experts models like a smaller version of GPT-4 within reach of people who aren't ultra-wealthy.
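As a toy sketch of the "group-of-experts" (mixture-of-experts) idea, and emphatically not how any production model is actually built: a gating network routes each input to only its top few experts, so most parameters sit idle on any given token, which is why these models are cheaper to run than their total size suggests:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoE(nn.Module):
    """Toy mixture-of-experts layer: a small gating network picks the
    top-k experts for each input, so only a fraction of the total
    parameters are active per token. Illustrative only."""
    def __init__(self, dim: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.gate = nn.Linear(dim, num_experts)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, dim)
        scores = self.gate(x)                             # (batch, num_experts)
        top_scores, top_idx = scores.topk(self.top_k, dim=-1)
        top_weights = F.softmax(top_scores, dim=-1)       # mix the chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, k] == e                 # inputs routed to expert e
                if mask.any():
                    w = top_weights[mask, k].unsqueeze(-1)
                    out[mask] += w * expert(x[mask])
        return out

layer = ToyMoE(dim=64)
print(layer(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```

The only point of the sketch is that per-token compute scales with top_k, not with the total number of experts, so capability can grow without proportionally growing hardware requirements.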
There's also the fact that there are plenty of companies currently working on hardware that will make AI significantly cheaper and more accessible to home users. Systems like ChatGPT aren't always going to be restricted to giant data centers, unless (as some people really want) laws are passed to prevent that hardware from being sold to regular people.
I want to be clear that I don't disagree with your premise and your assertion that AI training should be legal regardless of copyright of the training material. My only point was that the original commenter said the ultra-elites have privilege over us little guys, and he was right in that regard. I have no idea how that plays into his opinion on this whole matter, only that what he said on its face is accurate.
But you can run it.
I've got a commodity GPU and I've been doing plenty of work with local image generation. I've also run and fine-tuned LLMs, though more out of idle interest than for serious usage yet. If I needed to do more serious work, renting time on cloud computing for this sort of thing actually isn't all that expensive.
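For a sense of how low the barrier is, local inference can be as simple as something like this with llama-cpp-python (the model path is just a placeholder for whatever quantized GGUF file you've downloaded):

```python
# Runs a quantized local model via llama-cpp-python; the file name
# below is a placeholder, not a recommendation of a specific model.
from llama_cpp import Llama

llm = Llama(model_path="models/llama-13b.Q4_K_M.gguf", n_gpu_layers=-1)
out = llm("Explain why quantization shrinks model memory use.", max_tokens=64)
print(out["choices"][0]["text"])
```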
The fact that the very most powerful AIs aren't "accessible" doesn't mean that AI in general isn't accessible. I don't have a Formula 1 racing car but automobiles are still accessible to me.
If we're just talking about what you can do, then these laws aren't going to matter because you can just pirate whatever training material you want.
But that is beside my actual point, which is that there is a practical real-world limit to what you, the little guy, and they, the big guys, can do. That disparity is the privilege that OP way back up at the top mentioned.
I have no idea what that original commenter's opinion on copyright vs training is. Personally I agree with the OP-OP of the whole thread. Training isn't copying, and even if it were, the public interest outweighs the interests of the copyright holders in this regard. I'm just saying that in the real world there is a privilege that the elites and ultra-corps have over us, regardless of what systems we set up, unless capitalism and society as a whole are upended.
At this point we're just bickering over semantics.
So clearly we do agree on most of this stuff, but I did want to point out a possibility you may not have considered.
If we're just talking about what you can do, then these laws aren't going to matter because you can just pirate whatever training material you want.
This depends on the penalty and how strictly it's enforced. If it's enforced like normal copyright law, then you're right: your chances of getting in serious trouble just for downloading stuff are essentially nil; the worst that will happen is your ISP will three-strikes you and you'll lose internet access. On the other hand, there's a lot of panic surrounding AI, and the government might use that as an excuse to pass laws that hand out prison time just for possessing such material, and then fund strict enforcement. I hope that doesn't happen, but with rumblings of insane laws that would give people prison time for using a VPN to watch a TV show outside the country, I'm a bit concerned.
As for the parent comment's motivations, it's hard to say for sure with any particular individual, but I have noticed a pattern among neoliberals where they say things like "well, the rich are already powerful and we can't do anything about it, so why try" or "having universal health care, which the rest of the first world has implemented successfully, is unrealistic, so why try" and so on. It often boils down to giving lip service to progressive social values while steadfastly refusing to do anything that might actually make a difference. It's economic conservatism dressed as progressivism. Even if that's not what they meant (and it would be unwise of me to just assume that), I feel like that general attitude needs to be confronted.
If I'm the "parent comment" you're referring to, then that's very much not my motivation. I'm just pointing out that "AI is accessible to everyone" is not a hard binary situation, and that while it may be true that big giant corporations have an advantage due to being big giant corporations with a ton of resources to throw at this stuff, AI is indeed still accessible to some degree to the average consumer.
Well, again, "the average consumer" being first-world individuals with the resources to buy a nice computer and spend time playing with it. These things are a continuum, and that's not the end point of it: you can always go further down the resource rankings and find people for whom AI is not "accessible" by whatever standard. Unfortunately, it's kind of accepted as a given that people on the poor end of the spectrum don't have access to this kind of stuff, or will have to depend on external service providers.
Seriously, the average person has two FAR more immediate problems than not being able to create their own AI:
1. Losing their livelihood to an AI.
2. Losing their life because an AI has been improperly placed in a decision-making position after being sold as having more capabilities than it actually has.
Problem 1 could be solved by severe and permanent economic reforms, but those reforms are very far away. Problem 2 is also going to need legal restrictions on what jobs an AI can do, and restrictions on the claims an AI company can make when marketing its product. Possibly a whole freaking government agency dedicated to certifying AI.
Right now, it's in our best interest that AI production is slowed down and/or kept out of certain areas until the law has had a chance to catch up. Copyright restrictions and privacy laws are going to be the most effective way to do this, because they will force the companies to go back and retrain on public-domain material, and prevent them from using AI to wholesale replace certain jobs.
As for the average person who has the computer hardware and time to train an AI (bear in mind Google Bard and OpenAI use human contractors, as well as automated scanning, to correct misinformation in the answers), there is a ton of public-domain writing out there.
The endgame, though, is to stop scenarios 1 and 2, and the best way to do that is whatever forces the people making AI to sit down and think about where they can use it. Because the problem is not the speed of AI development but the speed of corporate greed. And the problem is not that the average person LACKS access to AI, but that the rich have TOO much access to AI and TOO many horrible plans for how to use it before all the bugs have been worked out.
Furthermore, if they're using people's creativity to make a product, it's just WRONG not to get their permission or give them credit.
Losing their life because an AI has been improperly placed in a decision-making position after being sold as having more capabilities than it actually has.
I would tend to agree with you on this one, although we don't need bad copyright legislation to deal with it, since laws can deal with it more directly. I would personally put in place an organization that requires rigorous proof that AI in those roles is significantly safer than a human, like the FDA does for medication.
As for the average person who has the computer hardware and time to train an AI (bear in mind Google Bard and OpenAI use human contractors, as well as automated scanning, to correct misinformation in the answers), there is a ton of public-domain writing out there.
Corporations would love it if regular people were only allowed to train their AIs on material that is 75 years out of date. Creative interpretations of copyright law aren't going to stop billion- and trillion-dollar companies from licensing things to train AI on, either by paying a tiny percentage of their war chests or by just ignoring the law the way Meta always does and getting the customary slap on the wrist. What will end up happening is that Meta, Alphabet, Microsoft, Elon Musk and his companies, government organizations, etc. will all have access to AIs that know current, useful, and relevant things, and the rest of us will not, or we'll have to pay monthly for the privilege of accessing a limited version of that knowledge, further enriching those groups.
Furthermore, if they're using people's creativity to make a product, it's just WRONG not to get their permission or give them credit.
Let's talk about Stable Diffusion for a moment. Stable Diffusion models can be compressed down to about 2 gigabytes and still produce art. Stable Diffusion was trained on 5 billion images and fine-tuned on a subset of 600 million, which means the average image contributes 2B/600M, or a little over three bytes, to the final model. With the exception of a few mostly public-domain images that appeared in the dataset hundreds of times, Stable Diffusion learned broad concepts from large numbers of images, similarly to how a human artist learns art concepts. If people need permission to learn a teeny bit of information from each image (3 bytes of information isn't copyrightable, btw), then artists should have to get permission for every single image they put on their mood boards or use for inspiration, because they're taking orders of magnitude more than three bytes of information from each image they draw on for a given work.
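The arithmetic is easy to check:

```python
model_size_bytes = 2 * 10**9    # ~2 GB compressed Stable Diffusion checkpoint
finetune_images = 600 * 10**6   # ~600 M images in the fine-tuning subset

print(model_size_bytes / finetune_images)  # ~3.33 bytes per image
```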
I've been thinking along the same lines. My concern has been that dictatorships would violate Western copyright and would thus get further ahead than the West, especially the Europeans, who are heading toward very strict laws. It's a nightmare scenario.
And your concern about the rich makes sense to me, too.
You have not clearly defined the danger. You just said "AI is here." Well, lawyers are here too, and they have the law on their side. AI also threatens their business model, so they will probably show no mercy and will work full-time on the subject.
Wealthy and powerful corporations fear the law above all else. A single parliament can shut down their activity better than anyone else on the planet.
Maybe you're speaking from the point of view of a corrupt country like the USA, but the EU parliament, which BTW hosts no GAFAM companies, is fully prepared to strike hard at businesses founded on AI.
See, people don't want to lose their jobs to a robot, and they will fight for them. This creates a major threat to AI: people destroying data centers. They will do it. Their interests will converge with those of the people who care about global warming. Don't treat AI as something inevitable. AI is highly dependent on resources, generates unemployment and pollution, and delivers questionable value.
An AI requires:
- Energy
- Water
- High-tech hardware
- Network
- Security
- Stability
- Investment
It's like a nuclear power plant, but more fragile. If an activist group takes down a data center hosting an AI, who will blame them? The jury will take turns high-fiving them.
I don't think the EU is so lawless as to allow blatant property destruction, and if it is, I can't imagine such a lack of rule of law will do much for the EU's future economic prosperity.
I'm probably just a dumb hick American though.
Wow, you have this all planned out, don't you?
If that's what Europe is like, they'll build their data centers somewhere else. Like the corrupt USA. Again, you'll be taking away your access to AI, not theirs.