this post was submitted on 29 Sep 2024
221 points (95.5% liked)
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
you are viewing a single comment's thread
You can trigger hallucinations in today's versions of LLMs with this kind of question. Same with a knife: you can hurt yourself by misusing it, and in fact you have to be knowledgeable and careful with both.
The knife doesn't insist it won't hurt you, and you can't get cut holding the handle. Comparatively, AI insists it is correct, and you can get false information using it as intended.
I would argue it's not the AI but the companies (that make the AI) making unattainable promises and misleading people.
Are you suggesting the AI would appear spontaneously without those companies existing?
It's the companies that are the problem.
Would these LLMs exist without the companies?
Is being immoral a prerequisite for producing such tech?
One doesn't need to be. It can be used for useful things, unlike what it's used for now.
Guns are literally for killing; it's all they do. Even in hunting, the sole purpose is to kill. That's not the case with LLMs; it's just exclusively how these companies are using them, since they have all the power to dictate terms in the workplace.
Is it the training process that you take issue with or the usage of the resulting model?
The energy usage is mainly on the training side with LLMs. Generating afterwards is fairly cheap. Maybe what you want is to have fewer companies trying to train their own models from scratch and to encourage collaboration instead?
Indeed. Though what we should be thinking about is not just the cost in absolute terms, but in relation to the benefit. GPT-4 is one of the more expensive models to run right now, and you can accomplish very good results with their smaller GPT-4o mini at about 0.5% of the energy cost^[1]^. That's the cost of running 0.07 LED bulbs for an hour, or running 1 LED bulb for 0.07 hours (i.e. about 4 minutes). If that saves you 5 minutes of writing an email while the room is lit by a single LED bulb and your computer is drawing power, that might just be worth it, right?
[1] Estimated by using https://huggingface.co/spaces/genai-impact/ecologits-calculator and the pricing difference between GPT-4o, 4o mini, and 3.5 (https://openai.com/api/pricing/). The assumption I'm making is that the total hardware and energy cost scales linearly with the API pricing.
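To make that arithmetic concrete, here's a rough sketch in Python. The GPT-4 per-query energy is my own assumed figure read off the ecologits calculator, not a published number, and the bulb wattage is just a typical value:

```python
# Back-of-envelope sketch. The per-query energy below is an assumption,
# not an official figure from OpenAI or anyone else.
GPT4_WH_PER_QUERY = 140.0  # assumed energy for one GPT-4 request, in Wh
MINI_FRACTION = 0.005      # 4o mini assumed at ~0.5% of GPT-4's cost
LED_BULB_WATTS = 10.0      # a typical LED bulb

mini_wh = GPT4_WH_PER_QUERY * MINI_FRACTION  # ~0.7 Wh per request
bulb_hours = mini_wh / LED_BULB_WATTS        # ~0.07 bulb-hours
print(f"{mini_wh:.2f} Wh ≈ {bulb_hours:.2f} LED-bulb-hours "
      f"≈ {bulb_hours * 60:.0f} minutes of one bulb")
```

Swap in your own numbers if you think the assumptions are off; the conclusion only changes if they're off by orders of magnitude.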
Yeah, they operate very opaquely, so we can't know the true cost, but based on what I can verify with models I can run on my own machines, the numbers seem reasonable. In any case, that's not really relevant to this discussion. Treat it as a hypothetical, then work out the math later to figure out where we want to be and what threshold we should be setting.
It sounds like you don't like how LLMs are currently used, not their power consumption.
I agree that they're a dead end. But I also don't think they need much improvement over what we currently have. We just need to stop jamming them where they don't belong and leave them be where they shine.
Weren't you just telling me that the environmental cost has no impact on your stance?
I don't agree with that. If you use it to destroy human creativity, sure that will be the outcome. Or you can use it to write boring ass work emails that you have to write. You could use it to automate boring tasks. Or a company can come along and automate creativity badly.
Capitalism is what's ruining it. Capitalism is ruining culture, creativity, and the human experience more than LLMs are. LLMs are just a knife, and instead of making tasty food we're going around stabbing people.
And yeah, people made guns just to put holes in pieces of paper, sure, nothing else. If you don't know how LLMs work, just say so. There are plenty that are trained on public data and don't siphon human creativity.
It is doing a lot of harm to human culture, but that's more about how it's being used, and it needs real constructive criticism instead of people simply being obtuse.
Sure, that's exactly what I believe. Wow, I'm so called out. I use it as a tool to do boring menial tasks so that I can spend my time on more meaningful things: spending time with my family, making dinner, focusing on the parts of my work I enjoy, and automating the boring, tedious parts, like writing boilerplate code that's slightly different based on context.
Can you elaborate on the mechanisms by which you see this happening? Why do you see it that way? Do you not see any circumstances in which it could be useful? Like legitimately useful? Have you never had to write a stupid, tedious email to someone you didn't like, where you couldn't be bothered to spend more than 2 seconds prompting something else to deal with it for you?
This is true; it's starting to eat its own tail. That also doesn't mean all new models are using new data. They could also be using better architectures on the same data. But yes, using AI-generated data to train new AI is bad, and you'll end up creating a nerfed, less useful model that will probably hallucinate more. That doesn't mean the tech isn't useful just because you've not seen it used for anything good.
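As a toy illustration of that tail-eating feedback loop (my own sketch; a Gaussian fit stands in for the model, which is a huge simplification, but the compounding loss of tail information is the same basic mechanism):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy collapse sketch: fit a Gaussian to data, sample synthetic data
# from the fit, refit on that, and repeat. Each "generation" is
# trained only on the previous generation's output.
n = 100
data = rng.normal(loc=0.0, scale=1.0, size=n)
for generation in range(1, 501):
    mu, sigma = data.mean(), data.std()
    if generation % 100 == 0:
        print(f"generation {generation:3d}: sigma = {sigma:.3f}")
    # "train" the next model solely on the last model's output
    data = rng.normal(loc=mu, scale=sigma, size=n)
# sigma shrinks across generations: the fitted distribution loses its
# tails, a minimal analogue of models degrading when trained on
# their own output.
```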
Sounds like all your problems are with capitalism and not LLMs, but you can't see that.
And good for you that you're in a position to not deal with bullshit in your work. Not everyone has that luxury.
Get some empathy for people in different circumstances than you. You sound like a child.
Also, there's a fuck ton of useful training data with permissive licenses. Also, fuck copyright law. It's been weaponized by capitalists to control our lives. Especially since the artists barely get theirs.
We're never gonna see eye to eye so don't bother. Peace and love. Have a good day.
And it’s the fault of crazy kids that school shootings happen. And absolutely nothing else.
/s
can't wait for gun companies to start advertising their guns as "intelligent" and "highly safe"
Maybe ChatGPT should find a way to physically harm users when it hallucinates? Maybe then they'd learn.
Hallucinated AI-generated books describing which mushrooms you can pick in the forest have been published, and some people did die because of this.
We have to be careful when using AI!