this post was submitted on 21 Nov 2023
Technology
It is overrated, at least when people treat AI as some sort of brain crutch that excuses them from actually learning things.
My boss now believes he can "program too" because he lets ChatGPT write scripts for him that, more often than not, are poor bs.
He also pastes chunks of our code into ChatGPT whenever we report bugs or haven't finished everything within five minutes, as some kind of "gotcha moment", ignoring that the solutions he then provides don't work.
Too many people treat LLMs as authorities that they just aren't.
It bugs me how easily people (a) trust the accuracy of the output of ChatGPT, (b) feel like it's somehow safe to use output in commercial applications or to place output under their own license, as if the open issues of copyright aren't a ten-ton liability hanging over their head, and (c) feed sensitive data into ChatGPT, as if OpenAI isn't going to log that interaction and train future models on it.
I have played around a bit, but I'm simply not carefree/careless enough, or I'm too uptight (pick your interpretation), to use it for anything serious.
This is more a 'human' problem than an 'AI' problem.
In general, it's weird as heck that the industry is going full force into chatbots as a search replacement.
Like, that was a neat demo for a low-hanging-fruit use case, but it's pretty damn far from the ideal production application, given that the tech isn't actually memorizing facts. When it does get things right, the reaction is "wow, this is impressive, because it really shouldn't be doing a good job at this."
Meanwhile nearly no one is publicly discussing their use as classifiers, which is where the current state of the tech is a slam dunk.
Overall, the past few years have opened my eyes to just how broken human thinking is, more than to the limitations of neural networks.