"Dr. David Ma, a professor of nutritional sciences at the University of Guelph, says a person weighing 70 kilograms would have to drink about 15 cans of diet pop a day to exceed that daily limit." And don't forget all of the studies about what sugar does to your body, which people always forget about while talking about aspartame. There will be a lot of people choosing sugar over aspartame because of these headlines.
But I thought the Mickey Mouse Protection Act has only served to increase the diversity, well-being and development of artists everywhere!
Right?
Or is the reality that Disney and Warner can just buy all the art rights and sit on them for the next hundred years in an endless cycle of power and wealth consolidation?
Nobody saw that coming at all.
Right?
I'm pretty sure the system has been severely skewed against normal people.
It's comparing a bird to a plane, but I still think the process constitutes "learning," which may sound anthropomorphic to some, but I don't think we have a more accurate synonym. I think the plane is flying even if the wings aren't flapping and the plane doesn't do anything else birds do. I think LLMs, while different, reflect the subconscious aspect of human speech, and reflect the concept of learning from the data more than "copying" the data. It's not copying and selling content unless you count being prompted into repeating something it was trained on heavily enough for accurate verbatim reconstruction. To me, that's no more worrying than Disney being able to hire writers who have memorized some of their favorite material and can reconstruct it on demand. If you ask your intern to reproduce something verbatim with the intent of selling it, the problem is you, not the intern's memory. I still don't think the training or "learning" were the issues.
To accurately address the differences, we probably need new language and ideals for the specific situations that arise in the building of neural nets, but I still consider much of the backlash completely removed from any understanding of what has actually been done with the "copyrighted material."
I tend to think of it in terms of naturally training these machines with real-world content in the future. Should a neural net built to act in the real world be sued if an image of a Coca-Cola can was in the training data somewhere, and some of the machines end up being used to make cans for a competitor?
How many layers of abstraction, or how much mixing with other training data, does it take before that bit of information is no longer comparable to the crime of someone intentionally and directly creating an identical logo and product to sell?
Copyright law already needed an overhaul long before A.I.
It's no coincidence that Warner and Disney are so giant right now, owning so much of other people's ideas, and that they have the money to control which ideas get funded. How long has Walt Disney been dead? More than half a century. So why does his company own the rights to the work of so many artists who came after him?
I don't think the copyright system is ready to handle the complexity of artificial minds at any stage, whether it's the pareidolic aspect of retrieving visual concepts of images in diffusion models, or the complex abilities that arise from current-scale LLMs, which, again, I believe are able to resemble the subconscious aspect of word prediction that exists in our minds.
We can't even get people to confidently legislate a simple ethical issue like letting people have consensual relationships with the gender of their own choice. I don't have hope that we can accurately adjust at each stage of development of a technology so complex that we don't even have the language to properly describe its functioning. I just believe that limiting our future and an important technology over such grotesquely misdirected egoism would do far more harm than good.
The greater focus should be on guaranteeing that technological and creative developments benefit the common people, not just the rich. This should have been the focus for the past half century. People refuse this conceptually because they've been convinced that any economic rebalancing is evil when it benefits the poor. Those with the ability to change anything are only incentivized to help themselves.
But everyone is just mad at the machine because "what if it learned from my property?"
I think the article even promotes Adobe as the ethical alternative. Congrats, you've limited the environment so that only the existing owners of everything can advance. I don't want to pay Adobe a subscription for the rest of my life for the right to create on par with wealthier individuals. How is this helping the world or the creation of art?
This is the thing I kept shouting when diffusion models took off. People are effectively saying "make it illegal for neural nets to learn from anything creative or productive anywhere in any way"
Because despite the differences in architecture, I think it is parallel.
If the intent and purpose of the tool were to make copies of the work in a way we would consider theft if done by a human, I would understand.
There isn't any legal protection against neural nets learning from personal and abstract information in order to manipulate, predict, or control the public, and there should be; the intended function of the tool is what should make it illegal.
But people are too self-focused and ignorant to riot en masse about that one.
The dialogue should also be about creating a safety net as more and more people lose value in the face of new technology.
But fuck any of that. What if an A.I. learned from a painting I made ten years ago, like every other artist who may have learned from it? Unforgivable.
I don't believe it's reproducing my art, even if asked to do so, and I don't think I'm entitled to anything.
Also, copyright has been fucked for decades. It hasn't served the people since long before the Mickey Mouse Protection Act.
I believe it will require a level and pace of information processing that is far beyond what humans will accomplish alone. Just having a system that can efficiently sift through the excess of existing papers and find correlations or contradictions would be amazing for the development of new technology. If you are paying attention to any environmental science right now, it's terrifying in an extremely real and tangible way. We will not outpace the collapse without an intense increase in technological development.
If we bridge the gap of analogical comprehension in these systems, they could also start introducing or suggesting technologies that could help slow down or reverse the collapse. I think this is much more important than making sure Sarah Silverman doesn't have her work paraphrased.
Personally, I find this stupid. If we have robots walking around, are they going to be sued every time they see something that's copyrighted?
Is this what will stop progress that could save us from environmental collapse? That a robot could summarize your shitty comedy?
Copyright is already a disgusting mess, and still nobody cares about models being created specifically to manipulate people en masse. "What if it learned from MY creations," asks every self-obsessed egoist in the world.
Doesn't matter how many people this tech could save after another decade of development. Somebody think of the [lucky few artists who had the connections and luck to make a lot of money despite living in our soul-crushing machine of a world].
All of the children growing up abused and in pain with no escape don't matter at all. People who are sick or starving or homeless don't matter. Making progress to save the world from imminent environmental disaster doesn't matter. Let Canada burn more and more every year. As long as copyright is protected, all is well.
It's a faux pas to even defend yourself, question the framing of a dialogue, or call out legitimate, direct discrimination. If you think labeling an entire group as the evil enemy is going to make the bad actors or moderates in that group more likely to align with you, you are an idiot. On Reddit I moved to the leftist subreddit /r/onguardforthee when /r/Canada became too right wing and I'd started to see directly bigoted comments more often.
I got banned from the new subreddit for saying "hey maybe calm down with the direct racism and sexism here. We should be better than those we criticize."
If you're defending "the bad ones" you're the enemy.
I've been thoroughly egalitarian and anti-bigotry my entire life. I've also been beaten in school until my eyes were swollen shut by people I didn't know, and then punished for "instigating with racist language" I would never use, because the two older kids knew it would get them out of trouble. I was just waiting to get into the library to read a book.
I've been accused countless times of racism working in retail because of things I had no control over. (Shout out to my old manager Om for calling out their bullshit)
I've been told in no uncertain terms by another manager that I would not have been hired if they were there at the start because they "do not hire men."
I've been told countless times I should not even be allowed to speak or have an opinion due to the body I was born into. That any action I take is directly unfair or harmful regardless of my intent or reasoning. I don't define myself or others by their bodies. Nobody chose their body.
Etc.
Do you think defending this sort of behavior really helps to reduce bigotry?
It's really just making me hate all of humanity. Everyone is terrible and being reasonable is an unforgivable sin on every side.
No nuance is allowed. If you don't agree with incredibly broad generalizations, you are evil. American history and culture are treated as globally applicable and enforced as such.
I just want people to stop judging and mistreating others for things they have no control over. I guess I deserve to be hated or mistreated for that alone.
thank you for your response. i appreciate your thoughts, but i still don't fully agree. sorry for not being succinct in my reply. there is a TLDR.
-
like i said, i don't think we'll get AGI or superintelligence without greater mechanistic interpretability and alignment work. more computational power and RLHF aren't going to get us all the way there, and the systems we build long before then will help us greatly in this respect. an example would be the use of GPT4 to interpret GPT2 neurons. i don't think they could be described as a black box anyway, assuming you mean GPT LLMs specifically. the issue is understanding some of the higher-dimensional functioning and results, which we can still build a heuristic understanding for. i think a complex AGI would only use this type of linguistic generation for a small part of the overall process. we need a parallel for human abilities like multiple trains of thought and the ability to do real-time multimodal world mapping. once we get the interconnected models, the greater system will have far more interpretable functioning than the results of the different models on their own. i do not currently see a functional threat in interpretability.
-
i mean, nothing supremely worse than what we already deal with without AI. i still get more spam calls from actual people, and wide-open online discourse has already had some pretty bad problems without AI. just look at 4chan; i'd attribute trump's successful election to their sociopathic absurdism. self-verified local groups are still fine. also, go look on youtube at what yannic kilcher did to them alone a year or so ago. i think the biggest thing to worry about is online political dialogue and advertising, which are already extremely problematic and hopeless without severe changes at the top. people won't care about what fake people on facebook are saying when they are rioting for other reasons already. maybe this can help people learn better logic and critical thought. there should be a required class in school by now covering statistical analysis and logic in social and economic contexts.
-
why? why would it do this? is this assuming parallels to human emotional responses and evolution-developed systems of hierarchy and want? what are the systems that could even possibly lead to this that aren't extremely unintelligent? i don't even think something based on human neurology like a machine learning version of multi-modal engram-styled memory mechanics would lead to this synthetically. also, i don't see the LLM style waluigi effect as representative of this scenario.
-
again, i don't believe in a magically malevolent A.I. despite all of our control during development. i think the environmental threat is much more real and immediate. however, A.I. might help save us.
-
i mean, op's issue already existed before A.I., regardless of whether you think it's the greater threat. otherwise, again, you are assuming malevolent superintelligence, which i don't believe could accidentally exist in any capacity unless you think we're getting there through nothing but increased computational power and RLHF.
TLDR: i do not believe an idiotic superintelligence could destroy the world, and i do not believe a superintelligence would destroy the world without some very specific and intentional emulation of emotional drives. generally, i believe anything that capable would have the analogical comprehension to understand the intention of our requests, and would not have any logical reason to act against them. the bigger concern isn't the A.I., but who controls it, and how to best use it to save our world.
They always focus on real estate value. Who among the lowest earners in that under-45 demographic can afford anything but survival and renting? Why is there absolutely no mention of the failure of antitrust? Brand conglomerates are jacking up prices for pure profit, because what are people going to do, buy local goods from shops that are themselves gouged on basic operating costs? I work at a small local place, and I'm pretty sure the owners are as tired and depressed as the staff right now. Working harder than ever, and losing more money than we make.
All we can ethically do is shout and cry and be noticed, but the last few decades show how little that has stopped the trend.
Something has to change or riots will be inevitable. Automation isn't going to go backwards, and it's absurd the working class hasn't seen any improvement in their lives by this point.
-
Why would we be wiped out if they were properly instructed to be symbiotic with our species? This implies absolute failure at mechanistic interpretability and alignment at every stage. I don't think we'll succeed in creating an intelligence capable of being an existential threat without crossing that hurdle.
-
Most current problems already happen without A.I., and the machines will get better; we will not. From spam to vehicles, A.I. will be the solution, not the problem. I do think we should prioritize dealing with the current issues, but I don't think they are insurmountable by any means.
-
Why? And why do you think an intelligence of that level still couldn't handle the concept of context? Either it's capable of analogical thinking, or it isn't an existential threat to begin with. RLHF doesn't get us superintelligence.
-
Again, this assumes we've completely failed at development, in which case environmental collapse will kill us anyway.
-
Hey, a real problem. Consolidation of power is already an issue without A.I. It is extremely important that we figure out how to control our own political and corporate leaders. A.I. is just another tool for them to fuck us with, but A.I. isn't the actual problem here.
wow. was expecting to agree heavily with the article, but it felt like reading intangible fluff. is that just me?
when they do talk directly about the issue, they say things like "AI models that scrape artists' work without compensation," which is not how i would phrase the actual concerns. there's no mention of things like models specifically built to manipulate the general populace, which i see still gets no attention. having models learn from the world we live in to create a general-use model is not the issue, unless you're thinking about biases. i'm an artist, and for 20 years i've witnessed how terrible artist communities are at understanding things like copyright and economic imbalance/redirection. this is unfortunate because artists are getting screwed, but the take on it is just wrong. this article doesn't touch any of it in a meaningful way.
there are definitely issues with alignment and interpretability that need to be understood as we move forward, and that's where the effort should be focused in these academic settings. if you want to focus on the existential, you should be directly examining current models and how they could conceptually steer towards or away from such threats, while working on understanding and interpreting those models. we aren't just giving superpowers to a narrow RLHF model.
at least they mentioned climate change, since climate and economic imbalance are really the main existential risks we currently face. A.I. development is likely the only thing that will help us.
i think melanie mitchell is supposed to be interviewed on MachineLearningStreetTalk soon, and i assume she will have a good take on it.
so, TLDR: we likely won't have progress without interpretability, and that should be kept in mind while developing better machines. don't read this article; just wait for the melanie mitchell interview, as i'm sure she's heated after that frustrating munk debate.
It's almost like we need an entirely new legal framework to ensure the non-wealthy a standard of living while they are continuously devalued over time by new technological developments. Artists already sell their souls to survive in this "market."