this post was submitted on 25 Jan 2024
330 points (97.1% liked)
Asklemmy
you are viewing a single comment's thread
view the rest of the comments
I wonder if it might be the specific type of work you do that allows for this. I don't pay for ChatGPT, so I don't know the quality of the code GPT-4 outputs, but I personally wouldn't blindly trust any code that comes out of it regardless, meaning I'd have to read through and understand all the generated code (do you save time by skipping this part, maybe?), and reading code always takes longer and is more difficult overall than writing it. On top of that, the actual coding only accounts for a small fraction of the work I do. Much of it is spent deciding what to code in order to reach a certain end goal, and a good chunk of the coding (in my case at least) is for things that are much easier to describe with code than with words. So I'm still finding it hard to imagine how you could possibly get anything more than a 1.5x improvement in output.
The main time savings I've found with generative AI is in writing boilerplate code, documentation, or writing code for a domain that I'm intimately familiar with since those are very easy to skim over and immediately know if the output is good or not.
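As a hypothetical illustration (not from the thread itself) of the boilerplate category described above: field-by-field serialization code, where a quick skim is enough to confirm every field is handled correctly. The `User` class and its fields here are made up for the example.

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    email: str
    active: bool = True

    def to_dict(self) -> dict:
        # Repetitive field-by-field mapping: tedious to type out,
        # but trivially easy to verify at a glance.
        return {"name": self.name, "email": self.email, "active": self.active}

    @classmethod
    def from_dict(cls, d: dict) -> "User":
        # Mirror of to_dict; a missing or misspelled key would be
        # obvious when skimming the generated output.
        return cls(name=d["name"], email=d["email"], active=d.get("active", True))

u = User("alice", "alice@example.com")
assert User.from_dict(u.to_dict()) == u  # round-trip check
```

Verifying this kind of output is mostly pattern-matching, which is why generated boilerplate tends to be a net time save even when every line still gets read.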
I actually got curious about it specifically because of this thread, and earlier today did a little experimenting with Copilot's Cmd-I feature as compared with copying and pasting into GPT. I'm now fairly convinced the issue is that Copilot is using a cheaper model, presumably for reasons of computational cost. Given the exact same task I was giving to GPT, Copilot struggled to produce code that would even compile, even after multiple rounds of me trying to help it, whereas GPT-4 was able to just give output, and its output worked.
I think the assumption it's operating under is that people will be running a ton of queries throughout the work day, more than the average GPT-4 user types into the chat interface, so they can't realistically do all that computation on people's behalf for $20/month.
(Edit: And this page makes some statements about "priority access" to GPT-4, indicating that they're throttling access to the more capable models depending on demand.)
In practice, I'm carefully looking over diffs before committing anything most of the time anyway, since, as you mentioned, the vast majority of work time is spent modifying existing code. So the times it messes up aren't a serious issue. But again, I think (after some pretty minimal experimentation today) that the real difference you're seeing is just that GPT-4 is far more capable at this stuff than GPT-3.5 / Copilot.
But this is guessing based on some pretty minimal experimentation with it. I sounded real confident in my initial statement but now that I'm looking at it maybe that's not warranted.