this post was submitted on 15 Sep 2024
895 points (98.3% liked)

[–] [email protected] 66 points 2 months ago (6 children)

As stated in the article, this has less to do with using AI and more to do with sloppy code reviews and code quality enforcement. Bad code from AI is just the latest version of mindlessly pasting from Stack Overflow.

I encourage jrs to use tools such as Phind for solving problems, but I also expect them to understand what they're submitting and be ready to defend it, no differently than any other PR. If they're submitting code they don't understand, that's incredibly unprofessional and I would come down very hard on them. They don't do this, though, because we don't hire dickheads.

[–] [email protected] 21 points 2 months ago* (last edited 2 months ago)

Shift-left eliminated the QA role.

Now we have AI-generated shit code, written by devs who understand neither the low-level details of the language nor the specifics of the generated code.

So we basically have content entry (ai inputs) and extremely shitty QA bundled into the "developer" role.

As a 20 year veteran of the industry, people keep asking me if I think AI will make developers obsolete. I keep telling them "maybe some day, but today's LLMs are not it. The AI bubble is going to burst, and a few legit use cases will make it through"

[–] [email protected] 14 points 2 months ago* (last edited 2 months ago)

Bad code from AI is just the latest version of mindlessly pasting from Stack Overflow.

Humans literally cannot scan all of SO to make a huge copypasta.

It takes much more time, effort, and thought to find various solutions on SO and patch them together into something that works well.

[–] [email protected] 13 points 2 months ago* (last edited 2 months ago)

Yeah, but... I asked ChatGPT once how to style something in Asciidoctor's style.yml. It suggested HTML syntax (some inline styling can be done with HTML tags in Asciidoctor, if the output is HTML). After the usual apology, it suggested some invalid YAML. On the third try, because the formatting was still wrong, it mixed the two together.

I mean, sure, it's a niche use case in a somewhat obscure lightweight markup language (lots of moving parts). But still, this was a lesson.
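For context, the two things the model kept conflating look nothing alike. A rough sketch, assuming the commenter meant an Asciidoctor PDF-style theme file (the exact keys depend on the converter and theme in use): styling lives in nested YAML keys, not in markup tags.

```yaml
# Hypothetical Asciidoctor PDF theme snippet (style.yml) --
# key names shown here follow the asciidoctor-pdf theming convention,
# but the commenter's actual setup may differ.
heading:
  font_color: '262626'
  font_style: bold
link:
  font_color: '0000EE'
```

Inline HTML like `<span style="color: red;">...</span>`, by contrast, only works as a passthrough in the document body when the output target is HTML; it has no place in a theme YAML file, which is presumably where the mixed-up answers went wrong.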

[–] [email protected] 9 points 2 months ago (1 children)

this has less to do with using AI and more to do with sloppy code reviews and code quality enforcement.

They are the same picture.

[–] [email protected] 3 points 2 months ago

More specifically: the same kind of decision makers are behind both.

[–] [email protected] 4 points 2 months ago

We used to have these shit developers and I accepted a lot of bad code back then -- if it actually worked -- because otherwise "code review" is full-on training, which is an entire other job from the one I was hired to do.

The client ditched that contracting firm, and the devs I work with now are worth putting in time on code review with -- but damn, we got hella shit code in our codebase to deal with now. Some of it got tossed, some of it ... we live with.

[–] [email protected] 2 points 2 months ago* (last edited 2 months ago) (1 children)

Computer writes shite code and the human still gets blamed.

edit: we have become gods

[–] [email protected] 13 points 2 months ago

The human turned the code in. They deserve 100% of the blame.