this post was submitted on 15 Sep 2023
466 points (97.2% liked)

Technology

you are viewing a single comment's thread
[–] [email protected] 9 points 1 year ago (2 children)

Yep, I've had plenty of discussions about this on here before, which were a total waste of time, as idiots don't listen to facts. They also just keep moving the goalposts.

One developer said they use AI to do their job all the time, so I asked them how that works. Apparently they "just" have to throw away the 20% of the output that's obviously wrong when writing small scripts, and then it's great!

Or another one who said AI is the only way for them to write code, because their main issue is getting the syntax right (they're dyslexic). When I told them that the syntax, and actually typing out the code, is the easiest part of my job, they shot back that they don't care: they're going to keep "looking like a miracle worker" by having AI spit out their scripts.

And yet another one discussed at length how you obviously can't magically expect AI to put the right things out. So we got to the topic of code reviews, and I tried to tell them: give a real developer a 1000+ line pull request (like the ones AI might spit out) and there's a snowball's chance in hell you'll get bug-free code, reviews or not. So they argued: duh, you give the AI small, bite-sized Jira tickets to work on, so you can review the output! And if the pull request is too long, you tell the AI to make a shorter, more human-readable one! And then we're back to square one: the senior developer reviewing that mess of code could have just written it faster and more correctly themselves.

It's exhausting how little understanding there is of LLMs and their limitations. They produce a ton of seemingly high-quality output, but it's never 100% correct.

[–] [email protected] 2 points 1 year ago

It seems to mostly be replacing work that is both repetitive and pointless. I have it writing my contract letters, ‘executive white papers’, and proposals.

The contract letters I can use without edits. The white papers I usually need to redirect, but the second or third output is good. For the proposals, it functionally does the job I'd have a co-op do: put stuff on paper so I can see why it isn't right, and then write to that. (For the 'fluffy' parts of engineering proposals, like the cover letters, I can use it as well.)

[–] [email protected] 1 points 1 year ago (1 children)

And yet another one discussed at length how you obviously can't magically expect AI to put the right things out. So we got to the topic of code reviews, and I tried to tell them: give a real developer a 1000+ line pull request (like the ones AI might spit out) and there's a snowball's chance in hell you'll get bug-free code, reviews or not.

Arguably this is comparing apples and oranges here. I agree with you that code reviews aren't going to be useful for evaluating a big code dump with no context. But I'd also say that a significant amount of software in the world is either written with no code review process or a process that just has a human spitting out the big code dump with no context.

The AI hype is definitely hype, but there's enough truth there to justify some of the hand-wringing. The guy who told you he only has to throw away the 20% of the code that's useless is still getting 100% of his work done with maybe 40% of the effort (i.e., very little effort to generate the first AI cut, 20% to figure out the stupid stuff, 20% to fix it). That's a big enough impact to have significant ripples.

Might not matter. It seems like the way it's going to go in the short term is that paranoia and economic populism are going to kill the whole thing anyway. We're just going to effectively make it illegal to train on data. I think that's both a mistake and a gross misrepresentation of things like copyright, but it seems like the way we're headed.

[–] [email protected] 1 points 1 year ago

Arguably this is comparing apples and oranges here. I agree with you that code reviews aren’t going to be useful for evaluating a big code dump with no context. But I’d also say that a significant amount of software in the world is either written with no code review process or a process that just has a human spitting out the big code dump with no context.

That's not totally true. Even if a developer throws a massive pull request dump at you, there's a good chance the dev at least ran the program locally and tried it out (at least the happy path).

With AI the code might not even compile. Or it looks good at first glance, but has a disastrous bug in the logic (that is extremely easy to overlook).
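To illustrate (a hypothetical, minimal sketch, not code from anyone in this thread): the dangerous case is code that reads plausibly and even runs, but whose logic is subtly off in a way a reviewer skimming a large diff can easily miss. The `chunk` helper below is an invented example of that pattern.

```python
# Hypothetical illustration: code that "looks good at first glance"
# but hides a logic bug that is easy to overlook in review.

def chunk(items, size):
    """Split a list into consecutive chunks of at most `size` items."""
    # A plausible-looking buggy version steps by 1 instead of by `size`,
    # producing overlapping chunks:
    #   return [items[i:i + size] for i in range(len(items))]
    # Correct version: step by `size`, so the chunks are disjoint.
    return [items[i:i + size] for i in range(0, len(items), size)]

print(chunk([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4], [5]]
```

Both versions run without errors and both "look right" at a glance; only the step argument differs, which is exactly the kind of detail that slips through a review of a large generated diff.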

As with most code: writing it takes maybe 10% of my time, if even that. The main problem is finding the right solution, covering edge cases, and so on. And then you spend another 190% of the time hunting a sneaky bug that got into the code because someone didn't think of a certain case or didn't pay attention. If AI throws 99%-correct code at me, it would probably take me longer to properly fix it than to just write it from scratch myself.