"Sam Altman is one of the dullest, most incurious and least creative people to walk this earth."
(deadsimpletech.com)
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
It's takes like this that just discredit the rest of the text.
You can dislike LLM AI for its environmental impact or its questionable interpretation of fair use when it comes to intellectual property. But pretending it's actually useless just makes someone look like a Drama YouTuber jumping on whatever the latest on-trend thing to hate is.
Let's be real here: when people hear the word AI or LLM they don't think of any of the applications of ML that you might slap the label "potentially useful" on (notwithstanding the fact that many of those are also in an all-that-glitters-is-not-gold kind of situation). The first thing that comes to mind for almost everyone is shitty autoplag like ChatGPT, which is also what the author explicitly mentions.
I'm saying ChatGPT is not useless.
I'm a senior software engineer and I make use of it several times a week, either directly or via things built on top of it. Yes, you can't trust it to be perfect, but I can't trust a junior engineer to be perfect either—code review is something I did long before AI and will continue to do long into the future.
I empirically work quicker with it than without and the engineers I know who are still avoiding it work noticeably slower. If it was useless this would not be the case.
~~Senior software engineer~~ programmer here. I have had to tell coworkers "don't trust anything chat-gpt tells you about text encoding" after it made something up about text encoding.
ah but did you tell them in CP437 or something fancy (like any text encoding after 1996)? 🤨🤨🥹
Sadly all my best text encoding stories would make me identifiable to coworkers, so I can't share them here. There's been some funny stuff over the years. Wait, where did I go wrong that I have multiple text encoding stories?
That said I mostly just deal with normal stuff like UTF-8, UTF-16, Latin1, and ASCII.
My favourite was a junior dev who was like, "when I read from this input file the data is weirdly mangled and unreadable so as the first processing step I'll just remove all null bytes, which seems to leave me with ASCII text."
(It was UTF-16.)
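For anyone wondering why that "worked" at all: ASCII-range text encoded as UTF-16LE is just each character's byte followed by a null byte, so stripping the nulls accidentally "decodes" it—right up until the first character outside ASCII. A quick sketch (hypothetical example strings, not the junior dev's actual file):

```python
# ASCII-range text in UTF-16LE stores each code unit as
# the character byte followed by a null byte.
data = "hello".encode("utf-16-le")
print(data)  # b'h\x00e\x00l\x00l\x00o\x00'

# Stripping the nulls accidentally recovers the ASCII bytes...
print(data.replace(b"\x00", b""))  # b'hello'

# ...but anything outside ASCII turns into mojibake instead.
data2 = "héllo".encode("utf-16-le")
print(data2.replace(b"\x00", b""))  # b'h\xe9llo' -- not valid ASCII
```

The right move, of course, is just `data.decode("utf-16-le")` (or checking for a BOM), not byte surgery.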
You've got to make sure you're not over-specializing. I'd recommend trying to roll your own time zone library next.