I find this very offensive. Wait until my ChatGPT hears about this! It will have a witty comeback for you, just you watch!
Let me ask ChatGPT what I think about this
Also your ability to search for information on the web. Most people I've seen have no idea how to use a damn browser or how to search effectively; AI is gonna fuck that ability up completely
Gen Zs are TERRIBLE at searching for things online, in my experience. I'm a sweet-spot millennial, born close to the middle in 1987. Man oh man, watching the 22-year-olds who work for me try to Google things hurts my brain.
To be fair, the web has become flooded with AI slop, and search engines have never been more useless. I've started using Kagi and I'm trying to be more intentional about it, but after a bit of searching it's often easier to just ask Claude.
Tinfoil hat me goes straight to: make the population dumber and they’re easier to manipulate.
It’s insane how people take LLM output as gospel. It’s a TOOL just like every other piece of technology.
I mostly use it for wordy things, like filling out the review forms HR makes us do and writing templates for messages to customers.
Exactly. It’s great for that, as long as you know what you want it to say and can verify it.
The issue is people who don’t critically think about the data they get from it, who I assume are the same type to forward Facebook memes as fact.
It’s a larger problem, where convenience takes priority over actually learning and understanding something yourself.
As you mentioned tho, not really specific to LLMs at all
Yeah it’s just escalating the issue due to its universal availability. It’s being used in lieu of Google by many people, who blindly trust whatever it spits out.
If it had a high technological floor of entry, it wouldn’t be as influential to the general public as it is.
It's such a double-edged sword though. Google is a good example: I became a netizen at a very young age and learned how to properly search for information over time.
Unfortunately the vast majority of the population over the last two decades have not put in that effort, and it shows lol.
Fundamentally, I do not believe in arbitrarily deciding who can and can not have access to information though.
I completely agree - I personally love that there are so many open-source AI tools out there.
The scary part (similar to what we experienced with DeepSeek's web interface) is that it's extremely easy for these corporations to manipulate or censor information.
I should have clarified my concern - I believe we need to revisit critical thinking as a society (whole other topic) and especially so when it comes to tools like this.
Ensuring everyone using it is aware of what it does, its flaws, how to process its output, and its potential for abuse. Similar to internet safety training for kids in the mid-2000s.
Counterpoint - if you must rely on AI, you have to constantly exercise your critical thinking skills to parse through all its bullshit, or AI will eventually Darwin your ass when it tells you that bleach and ammonia make a lemon cleanser to die for.
Is that it?
One of the things I like most about AI is that it explains in detail each command it outputs for you. Granted, I'm aware it can hallucinate, so if I have the slightest doubt about something I usually look on the web too (I use it a lot for basic Linux stuff and Docker).
Would some people not give a fuck about what it says and just copy & paste unknowingly? Sure, but that happened back in my teenage days too, when all the info was scattered across many blogs and wikis...
As usual, it's not the AI tool that could fuck up our critical thinking, but we ourselves.
I love how they chose the term "hallucinate" instead of saying it fails or screws up.
It’s going to remove all individuality and turn us into a homogeneous, jelly-like society. We’ll all think exactly the same, since AI “smoothes out” the edges of extreme thinking.
Copilot told me you're wrong and that I can't play with you anymore.
Just try using AI for a complicated mechanical repair. For instance, draining the radiator fluid in your specific model of car: chances are Google's AI model will throw in steps that are either wrong or unnecessary. If you turn off your brain while using AI, you're likely to make mistakes that will go unnoticed until the thing you did becomes business-critical. AI should be a tool like a straight edge: it has its purpose, and it's up to you, the operator, to make sure you've got the edges squared (so to speak).
I've only used it to write cover letters for me. I tried to also use it to write some code, but it would just cycle through the same five wrong solutions it could think of, telling me "I've fixed the problem now."
I felt it happen in real time, every time. I still use it for questions, but I know I'm about to not be able to think critically for the rest of the day. It's a last resort if I can't find any info online or any response on Discords/forums.
It's still useful for coding, IMO. I still have to think critically; it just fills in some of the tedious stuff.
Their reasoning seems valid - common sense says the less you do something the more your skill atrophies - but this study doesn't seem to have measured people's critical thinking skills. It measured how the subjects felt about their skills. People who feel like they're good at a job might not feel as adequate when their job changes to evaluating someone else's work. The study said the subjects felt that they used their analytical skills less when they had confidence in the AI. The same thing happens when you get a human assistant - as your confidence in their work grows you scrutinize it less. But that doesn't mean you yourself become less skillful. The title saying use of AI "kills" critical thinking skill isn't justified, and is very clickbaity IMO.
I use it to write code for me sometimes, saving me from remembering the different syntax and syntactic sugar when I hop between languages. And I use it to answer questions about things I wonder about - it always provides references. So far it's been quite useful. And for all that people bitch and piss and cry giant crocodile tears while gnashing their teeth, I quite enjoy Apple AI. Its summaries have been amazing, even scarily accurate. No, it doesn't mean Siri's good now, but the rest of it's pretty amazing.