But for that brief moment, we all got to laugh at it because it said to put glue on pizza.
All worth it!
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
The problem isn't the rise of "AI" so much as how we're using it.
If a company wants to create a machine learning model that analyzes metrics on an automated production line and spits out parameters to improve the efficiency of their equipment, that's a great use of the technology. We don't need an LLM to produce a useless summary of what it thinks my question is when all I want is a page of search results.
AI is a tool. What matters is what humans do with the tool, not the tool itself.
So then rename this place Fuck People...
This is a strawman argument. AI is a tool. Like any tool, it's used for negative things and positive things. Focusing on just the negative is disingenuous at best. And focusing on AI's climate impact while completely ignoring the big picture is asinine (the oil industry knew they were the primary cause of climate change more than 60 years ago).
AI has many positive use-cases yet they are completely ignored by people who lack logic and rationality.
AI is helping physicists speed up experiments into supernovae to better understand the universe.
AI is helping doctors to expedite cancer screening rates.
AI is powering robots that can do the dishes.
AI is also helping to catch illegal fishing, tackle human trafficking, and track diseases.
Yes, AI is a tool. And the person in the screenshot is criticizing generative GPT-like and Midjourney-like AI, which has a massive climate impact and almost no useful results.
In your examples, as far as I can see, they always train their own model (supernovae research, illegal fishing) or heavily customize one and use it in close conjunction with people (cancer screenings).
And so I think we're talking about two different things, so I want to clarify:
AI as in a neural-network algorithm that can digest massive amounts of data and give meaningful results: absolutely useful, and I think the more time passes (and the more grifters move on to other fields), the more genuinely useful niches and cases will be solved with neural nets.
But AI as in the we're-gonna-shove-this-bot-down-your-throat GPT-like bots, trained on all the data from the entire internet (mostly Reddit), that struggle with basic questions, hallucinate glue on pizza, generate six-fingered hands, and are close to useless in any use case: absolutely abysmal, and not worth ruining our climate for.
But those are the cool, interesting, research-related AIs, not the venture-capital hype LLMs that will gift us AGI any day now with just a bit more training data/compute.
Obviously by AI they mean stuff like ChatGPT. An energy intensive toy where the goal is to get it into the hands of as many paying customers as possible. And you're doing free PR for them by associating it with useful small scale research projects. I don't think most researchers will want to associate their projects with AI now that the term has been poisoned, though they might have to because many bigwigs have been sucked into the hype. The term AI has basically existed nebulously since the beginning of computing, so whether we call one thing or another AI is basically personal taste. Companies like OpenAI have successfully attached their product to the term and have created the strongest association, so ultimately if you say AI in a contemporary context a lot of people are hearing GPT-like.
Yeah, but it doesn't really help that this is a community "Fuck AI" made as "A place for all those who loathe machine-learning...". It's like saying "I loathe Dijkstra's algorithm". The term machine learning has been used since at least the '50s, and it involves a lot of elegant mathematics which all essentially just try to optimize various functions in various ways. And yet, at least in places I'm exposed to, people constantly present any instance of machine learning as useless, morally wrong, theft, ineffective compared to "traditional methods" and so on, to the point where I feel uneasy telling people that I'm doing research in that area, since there's so much hate towards the entire field, not just LLMs. It might be because of them, sure, but in my experience, the popular hating of AI is not limited to ChatGPT, corporations and the like.
It is a sad thing to see. The education system, especially here in the US, has really failed many people. I was always super curious and excited about machine intelligence from a young age. I was born in the mid-'90s. I've been dreaming about automating myself out of work so I could create and do what I love, spend more time with the people I love, and just explore and learn and grow. As a kid I noticed two things that made adults miserable:
I went to school for CS and eventually had to drop out because of personal life and mental health struggles, but I'm still interested in joining the field and open source. People sometimes make me feel really sad and misunderstood, and it discourages me from even bothering because they're so negative. I know how we got here, but it's sad that the reaction is this predictable.
By 2014-2015 I was watching a lot of machine learning videos on YouTube, playing with style transfer, etc. The fact that a computer could even "apply a style" and keep the image coherent was just endlessly fascinating, and it made for a lot of cool photos. Watching AlphaGo beat a world champion in Go using deep reinforcement learning and self-play was incredible. I just love it. I love future tech, and I think we can have social and economic equity and much less wealth and income inequality with all this.
A lot of people don't realize labor adds a lot to the cost of what they buy, and there are only so many workers. With even today's LLMs fully implemented and realized as agents (this is coming about very quickly), things will slowly get cheaper and better, then likely more rapidly. Software development will be cheaper. Engineers, game designers, and artists will bring to life incredible things that we haven't thought up yet, and will likely use these tools/entities to enhance their workflows. Yes, there will be less artist grunt work, and there will be effects on jobs. It's just not going to stop anyone from doing what they love however they like to do it. It's so odd to me.
Cheers and keep your head up. If we get this right, I think people will change their tune, but probably not until they see economic and quality-of-life improvements. Though, I'd say machine learning and machine intelligence have added a great deal of value and opportunity to my life. I wish everyone a good future regardless of how you feel about this. I just hope people who aren't in the field or weren't enthusiastic before will at least remember there are a lot of real, kind, and extremely intelligent people working really hard on this stuff who truly believe they can make a substantial positive impact. We should be more trusting and open. It's really hard, and we can get burned, but most people want decent things for most others despite disagreement and strife. We've made it this far. Let's go to the stars 🤠
I thought the statement was more about how much energy it costs to run AI while the planet burns.
And focusing on AI's climate impact while completely ignoring the big picture is asinine
Immediately goes to whataboutism and chooses Big Oil as an example. Pure gold.
If you're complaining about climate impact, looking at the big picture isn't whataboutism. It's the biggest part of the dataset which is important for anyone who actually cares about the issue.
You're complaining about a single cow fart, then when someone points out there are thousands of cars with greater emissions, you cry whataboutism. It's clear you just want to have a knee-jerk reaction without rationally looking at the problem.
But these are other applications of AI. I think he meant LLMs. That would be like saying "fitting functions has many other applications and can be used for good".
If all fossil fuel power plants were converted to nuclear then tech power consumption wouldn't even matter. Again, it was the oil industry that railroaded nuclear power as being unsafe.
Most of the people on this website hate AI without even understanding it, and refuse to make an honest assessment of its capabilities, instead pretending that it's nothing more than a good auto correct predictor engine.
I can't wait for the AI bubble to burst
So 2+2 = boobs?
2+2 = ancap hidden folder
Oh if my math class was like that...
Just turn the calculator upside down…
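For anyone who never saw the trick in math class: the gag works because, on a seven-segment display, several digits read as letters once the calculator is flipped. A minimal sketch in Python (the digit-to-letter table is the informal playground convention, nothing official):

```python
# The classic calculator gag: on a seven-segment display, some digits
# look like letters when the screen is viewed upside down. The mapping
# below is the usual informal convention, not anything standardized.
FLIPPED = {"0": "O", "1": "I", "3": "E", "4": "h", "5": "S", "7": "L", "8": "B"}

def read_upside_down(number: int) -> str:
    # Flipping the calculator also reverses the reading order,
    # so walk the digits back to front.
    return "".join(FLIPPED.get(d, "?") for d in reversed(str(number)))

print(read_upside_down(5318008))  # prints BOOBIES
```

Type 5318008, flip the screen, and you get the punchline the thread is giggling about.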
It's not the entire picture. We are destroying our planet to generate bad art, fake titties, and searches that are a little bit faster but with the same chance of being entirely wrong as just googling it.
Is it actually a little bit faster though?
Well, it's definitely more wrong, so there's that.
Only because Google purposely crippled itself...
In the grand scheme of things, I suspect we don't actually have that much power to stop the industrial machine.
Even if every person on here, on Reddit, and every left-leaning social media revolted against the powers that be right now, we wouldn't resolve anything. Not really. They'd send the military out, shoot us down (possibly quite literally), then go back to business as usual.
Unless there's a business incentive to change our ways, capitalism will not follow; instead, it'll do everything it can to resist that change. By the time there is enough economic incentive, it'll be far too late to be worth fixing.
I mean, this isn't just a social media thing. It was part of the reason there was a writers' strike in Hollywood, and they did manage to accomplish something. I don't see why protests/strikes/politics would be useless here.
A lot of people on Lemmy are expecting the glorious revolution to happen any time now and then we will live in whatever utopia they believe makes a utopia. Even if something like that happens, and I'm less certain by the day that it ever will, the result isn't necessarily any better than what came before. And often worse.
It'll almost certainly be worse. When revolutions happen, the people who seize power are the ones who were most prepared, organized and willing to exercise violence. Does that at all sound like leftists in the West?
The only way to enact utopia is by making it so popular an idea that the propaganda machine gets drowned out. This is going to be a very long and slow process that may never end. But we can always aim for "not worse" and if we can do that, we can also aim for "a little better". Anything faster than those baby steps feels really far from possible, but those baby steps are always worth taking.
See, the thing is, dead people don't buy as many things as live ones, so extreme capitalism doesn't want to kill you directly either. Slow poison is fine if it's profitable enough, but a fast, intentional bullet to its main customer base? Not so much.
I mean, it also made the first image of a black hole, so there's that part.
I'd also flag that you shouldn't use one of these to do basic sums, but in fairness the corporate shills are so desperate to find a sellable application that they've been pushing that sort of use super hard, so on that one I blame them.
It did? How?
Machine learning tech is used in all sorts of data analysis and image refining.
https://physics.aps.org/articles/v16/63
I get that all this stuff is being sold as a Google search replacement, but a) it is not, and b) it is actually useful, when used correctly.
This is why the term “AI” sucks so much. Even “machine learning” is kind of misleading.
Large-scale statistical computing obviously has uses, especially for subjects that lend themselves well to statistical analysis of large and varied data sets, like astronomical observations.
Sticking all of the text on the internet into a blender and expecting the resulting statistical weights to produce some kind of oracle is… Well, exactly what you’d expect the tech cultists to pivot to after crypto fell apart, tbh, but still incredibly dumb.
Calling them both “AI” does a tremendous disservice to us all. But here we are, unable to escape the marketing.
Yeah, it's no oracle. But it IS fascinating how well it does language, and how close it sticks to plausible answers. It has uses, like narrowing down fuzzy queries, translation and other looser things that traditional algorithms struggle with.
It's definitely not a search engine or a calculator, though.