this post was submitted on 10 Feb 2025
139 points (91.1% liked)
Technology
you are viewing a single comment's thread
Sorry but no.
It’s good when what you’re trying to do has been done in the past by thousands of people (thanks to the free training data). But it’s really bad for new use cases. After all, it’s a glorified and expensive auto-complete tool trained on code they parsed. It’s not magic, it’s math.
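To make the "glorified auto-complete" point concrete, here is a toy sketch (my own illustration, not how production LLMs actually work): a bigram model that "completes" code purely from counts of what it has seen before. Real models are vastly larger and use neural networks rather than lookup tables, but the underlying idea is the same statistical prediction from parsed text.

```python
from collections import Counter, defaultdict

# Toy bigram "autocomplete": predict the next token as the one most
# frequently seen after the current token in the training data.
# Illustrative only -- the point is that prediction comes from
# statistics over past code, not from understanding the problem.
def train(corpus):
    model = defaultdict(Counter)
    tokens = corpus.split()
    for cur, nxt in zip(tokens, tokens[1:]):
        model[cur][nxt] += 1
    return model

def complete(model, token):
    if token not in model:
        return None  # never seen in training: the "new use case" problem
    return model[token].most_common(1)[0][0]

model = train("for i in range ( n ) : print ( i )")
print(complete(model, "range"))  # "(" -- seen in training
print(complete(model, "while"))  # None -- no training data, no answer
```

Notice that anything outside the training data gets no answer at all, which is exactly why these tools shine on well-trodden problems and struggle on genuinely new ones.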
But you don’t get intelligence or creativity from these tools. It’s math! Math is the least creative domain on earth. Since when is being a programmer just typing portions of code from boilerplate and examples off the internet?
It’s the logical thinking: taking all the parameters and constraints into account, breaking problems into pieces of code, then checking, testing, deploying, and supporting it.
OK, the goal of programming is to solve a problem. But usually, not all the parameters of the problem can be reduced to a mathematical form.
AI is far from being able to do that, and the gain/cost ratio is not proven at all. These companies are so committed to AI (in terms of money invested) that THEY MUST make you use their AI products, whatever their quality. They even use a marketing term to hide their products’ bad answers: hallucinations. "Hallucination" is just a fancy word to avoid saying: totally wrong.
Do you find it normal to buy a solution that never produces 100% good results (more like a 20% failure rate)?
In my industry, this AI trend (pushed mainly by managers who don’t really know what programming, or "AI", is) generates a lot of bad-quality code from our junior devs. And that’s not something I want to push to production.
In fact, a lot of PoCs around ML never make it from the R&D phase to real production. It’s too risky for the business (as human lives could be impacted).