this post was submitted on 06 Aug 2023
1780 points (98.6% liked)

Programmer Humor

32410 readers

Post funny things about programming here! (Or just rant about your favourite programming language.)

[–] [email protected] 92 points 1 year ago (2 children)

It's not just every tech company, it's every company. And it's terrifying - it's like giving a 1000 hp motorcycle to people who don't know how to ride a bike! The industry doesn't have guardrails in place, and the public's "ChatGPT can do it" mindset, without any thought of checking the output, is horrifying.

[–] szczuroarturo 9 points 1 year ago

Basically the internet.

[–] [email protected] 68 points 1 year ago (2 children)
[–] h_a_r_u_k_i 17 points 1 year ago (1 children)

It's sad to see it spit out text from the training set without any actual knowledge of dates and times. Like, it would be more awesome if it could call time.Now(), but that'll be a different story.

[–] [email protected] 40 points 1 year ago (15 children)

If you ask it today's date, it actually does that.

It just doesn't have any actual knowledge of what it's saying. I asked it a programming question as well, and each time it made up a class that doesn't exist. I'd tell it the class doesn't exist, and it would go, "You are correct, that class was deprecated in {old version}." It wasn't. I checked. It knows what the excuses in the training data look like, and just apes them.

It spouts convincing sounding bullshit and hopes you don't call it out. It's actually surprisingly human in that regard.
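The date thing, for what it's worth, is roughly what the earlier wish for a `time.Now()` call looks like in practice: chat frontends typically inject the real system clock into the prompt so the model quotes it instead of guessing. A minimal Python sketch (the prompt format here is made up for illustration):

```python
from datetime import date

def build_prompt(question: str) -> str:
    """Prepend the real system date so the model can quote it instead of guessing."""
    return f"Today's date is {date.today().isoformat()}.\nUser: {question}"

print(build_prompt("What day is it?"))
```

The model still has no "knowledge" of time; it just parrots whatever the wrapper put in front of the question.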

[–] [email protected] 20 points 1 year ago (1 children)

It spouts convincing sounding bullshit and hopes you don’t call it out. It’s actually surprisingly human in that regard.

Oh great, Silicon Valley's AI is just an overconfident intern!

[–] [email protected] 10 points 1 year ago (1 children)

It’s super weird that it would attempt to give a time duration at all, and then get it wrong.

[–] [email protected] 12 points 1 year ago (3 children)

It doesn't know what it's doing. It doesn't understand the concept of the passage of time or of time itself. It just knows that that particular sequence of words fits well together.
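That "fits well together" point can be made concrete with a toy bigram model: pick whichever word most often followed the previous one in the training text. Real LLMs are enormously more sophisticated, but the objective has the same shape. An illustrative sketch, not how any production model works:

```python
from collections import Counter, defaultdict

training = "time flies like an arrow and time heals all wounds".split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(training, training[1:]):
    following[prev][nxt] += 1

def predict(prev: str) -> str:
    """Return the most common follower of `prev` seen in training."""
    return following[prev].most_common(1)[0][0]

print(predict("flies"))  # like
```

Nothing in there understands time, arrows, or wounds; it's just co-occurrence counts.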

[–] [email protected] 64 points 1 year ago (11 children)

There's even rumours that the next version of Windows is going to inject a bunch of AI buzzword stuff into the operating system. Like, how is that going to make the user experience any more intuitive? Sounds like you're just going to have to fight an overconfident ChatGPT wannabe that thinks it knows what you want to do better than you do, every time you try opening a program or saving a document.

[–] [email protected] 56 points 1 year ago

This is what pisses me off about the whole endeavour. We can't even get a fucking search algo right any more; why the fuck do I want a machine blithely failing to do what it's told as it stumbles off a cliff?

[–] [email protected] 41 points 1 year ago

It'll be like they brought Clippy back, only this time he's even more of an asshole and now he can fuck up your OS too.

[–] [email protected] 11 points 1 year ago (2 children)

There’s even rumours

Like, I know we all love to hate Microsoft here but can we stop with the random nonsense? That's not what's happening, at all.

[–] [email protected] 8 points 1 year ago (1 children)

Windows Copilot just popped up on my Windows 11 machine. Its disclaimer said it could provide surprising results. I asked it what kind of surprising results I could expect; it responded that it wasn't comfortable talking about that subject and ended the conversation.

[–] [email protected] 53 points 1 year ago

Coupled with laying off a few thousand employees

[–] [email protected] 52 points 1 year ago (1 children)

My cousin got a new TV and I was helping to set it up for him. During setup, it had an option to enable AI-enhanced audio and visuals. Turning the AI audio on turned the decent, if maybe a little subpar, audio into an absolute garbage shitshow: it sounded like the audio was being passed through an "underwater" filter and then transmitted through a tin-can-and-string telephone. Idk who decided this feature was ready to be added to consumer products, but it was absolutely moronic.

[–] [email protected] 47 points 1 year ago (19 children)

None of it is even AI; predicting desired text output isn't intelligence.

[–] [email protected] 29 points 1 year ago (3 children)

You're holding artificial intelligence to the standard of artificial general intelligence, which doesn't even exist yet. Even dumb decision trees are considered AI. You have to lower your expectations. Calling the best AIs we have dumb is unhelpful at best.
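For context on "even dumb decision trees are considered AI": in the textbook sense, a few hand-written if/else rules already qualify as an AI system, no learning involved. A deliberately trivial example:

```python
def loan_decision(income: float, has_debt: bool) -> str:
    """A tiny hand-written decision tree -- textbook 'AI', no learning involved."""
    if income < 30_000:
        return "reject"
    return "review" if has_debt else "approve"

print(loan_decision(50_000, False))  # approve
```

Whether that deserves the label is exactly what this thread is arguing about.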

[–] [email protected] 27 points 1 year ago (4 children)

At this point I just interpret "AI" to mean "we have lots of SELECT statements and INNER JOINs".
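The joke lands because plenty of "AI-powered" features really are just a query over a database. A hedged sketch of what such a "recommendation engine" might amount to (table names and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER, name TEXT);
    CREATE TABLE purchases (user_id INTEGER, item TEXT);
    INSERT INTO users VALUES (1, 'Alice');
    INSERT INTO purchases VALUES (1, 'keyboard');
""")

# The "AI-powered recommendation engine": one SELECT with an INNER JOIN.
rows = conn.execute("""
    SELECT u.name, p.item
    FROM users u
    INNER JOIN purchases p ON p.user_id = u.id
""").fetchall()
print(rows)  # [('Alice', 'keyboard')]
```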

[–] [email protected] 15 points 1 year ago (2 children)

I do agree, but on the other hand...

What does your brain do while reading and writing, if not predict patterns in text that seem correct and relevant based on the data you have seen in the past?

[–] [email protected] 15 points 1 year ago

I've seen this argument so many times, and it makes zero sense to me. I don't think by predicting the next word; I think by imagining things, both physical and metaphysical, basically running a world simulation in my head. I don't think, "I just said 'predicting', what's the next likely word to come after it?" That's not even remotely similar to how I think.

[–] Noughmad 11 points 1 year ago (1 children)

AI is whatever machines can't do yet.

Playing chess was the sign of AI, until a computer beat Kasparov; then it suddenly wasn't AI anymore. Then it was Go, then classifying images, then holding a conversation, but whenever each of these was achieved, it stopped being AI and became "machine learning" or "a model".

[–] [email protected] 47 points 1 year ago* (last edited 1 year ago) (5 children)

Before this it was blockchain, and before that it was "AI", and before that...

[–] [email protected] 19 points 1 year ago (1 children)

Before that, self-driving cars; before that, "big data"; before that, 3D printing; before that, internet TV; before that, "cloud computing"; before that, Web 2.0; before that, WAP; maybe the internet in general?

Some of those things did turn out to be game changers, others not at all or not so much. It's hard to predict the future.

[–] [email protected] 10 points 1 year ago

IoT? Don't worry. Edge AI is now AIoT (AI + IoT).

[–] [email protected] 33 points 1 year ago (1 children)

If it ain't broke, we'll break it!

[–] [email protected] 7 points 1 year ago

We'll make it broken!

[–] ICastFist 28 points 1 year ago (3 children)

Unlike the previous bullshit they threw everywhere (3D screens, NFTs, metaverse), AI bullshit seems very likely to stay, as it is actually proving useful, if with questionable results... Or rather, questionable everything.

[–] [email protected] 11 points 1 year ago (14 children)

If only it were AI and not just LLMs, machine learning, or plain algorithms. But yeah, let's call everything AI from here on. NFTs could be useful if used as proof of ownership instead of as expensive pictures, etc.

[–] [email protected] 10 points 1 year ago

As a counter to your example, this is my career's third AI hype cycle.

[–] [email protected] 28 points 1 year ago (1 children)

This is refreshing to see. I thought I was the only one who felt this way.

[–] [email protected] 26 points 1 year ago (1 children)

It's all so stupid. The entire stock market basically took off because Nvidia's CEO mentioned AI like 50 times, and now everyone thinks the company is worth 200 times its yearly profit.

We don't even have AI; we have language models that dig through text and create answers from it.

[–] [email protected] 7 points 1 year ago (7 children)

That's a massive oversimplification. We do have AI. We don't have AGI.

[–] [email protected] 27 points 1 year ago

God, it's exhausting. Okay, I'll buy a 3D television if that's what I have to do; let's bring that back instead. Please?

[–] [email protected] 23 points 1 year ago (1 children)

If you take out the AI part it still holds true. 2023 is full of bullshit.

[–] [email protected] 19 points 1 year ago* (last edited 1 year ago) (1 children)

I'm bookmarking this for the next time my supervisor plugs ChatGPT.

[–] [email protected] 8 points 1 year ago

I had a manager tell me some stuff was being scanned by AI for one of my projects.

No, you are having it scanned by a regular program to generate keyword clouds that can be used to pull it up when humans type their stupidly-worded questions into our search. It’s not fucking AI. Stop saying everything that happens on a computer that you don’t understand is fucking AI!

I'm just so over it. But at least they aren't trying to convince us ChatGPT is useful (it definitely wouldn't be for what they'd try to use it for).
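The keyword-cloud generation described above really is a few lines of plain code: tokenize, drop stopwords, count. An illustrative sketch (the stopword list and document are arbitrary):

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "to", "of", "and", "in"}

def keyword_cloud(text: str, top_n: int = 5) -> list[tuple[str, int]]:
    """Return the most frequent non-stopword terms -- no 'AI' required."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return counts.most_common(top_n)

doc = "The invoice process: scan the invoice, route the invoice to approvals."
print(keyword_cloud(doc))  # 'invoice' comes out on top with 3 hits
```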

[–] [email protected] 18 points 1 year ago (3 children)

What companies are you people working for?

We are being asked not to use AI.

[–] [email protected] 17 points 1 year ago (3 children)

Ain't gotta use it to sell it or slap AI stickers on top of whatever products you're selling

[–] [email protected] 9 points 1 year ago

Not surprising for North Korea

[–] [email protected] 8 points 1 year ago

Larger companies have been working fast to sandbox the models used by their employees. Once they're safe from spilling data, they go all in. I'm currently on a platform team enabling generative AI capabilities at my company.

[–] [email protected] 17 points 1 year ago

It begs the question... what's the boardroom random bullshit timeline?

When was it "random cloud bullshit go," and when was it "random blockchain bullshit go"? What other buzzwords were almost guaranteed to have Silicon Valley tech bros tossing money in your face, and when was each one in fashion?

[–] [email protected] 15 points 1 year ago

Snapchat AI. My friends don't want it, they can't block it, and it's proven to lie about certain things, like whether it has your location.

[–] [email protected] 13 points 1 year ago

More Ads and tracking systems, Now With AI!

Commercial...
