this post was submitted on 06 Aug 2023
1780 points (98.6% liked)

Programmer Humor

[–] h_a_r_u_k_i 17 points 1 year ago (21 children)

It's sad to see it spit out text from the training set without any actual knowledge of the date and time. It would be more awesome if it could call time.Now(), but that'd be a different story.

[–] [email protected] 40 points 1 year ago (20 children)

If you ask it today's date, it actually does that.

It just doesn't have any actual knowledge of what it's saying. I asked it a programming question as well, and each time it would make up a class that doesn't exist. I'd tell it the class doesn't exist, and it would go "You are correct, that class was deprecated in {old version}". It wasn't. I checked. It knows what the excuses look like in the training data, and just apes them.

It spouts convincing sounding bullshit and hopes you don't call it out. It's actually surprisingly human in that regard.

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago) (11 children)

Bard is kind of trash, though. GPT-4 tends to do so much better in my experience.

[–] [email protected] 3 points 1 year ago (1 children)

They are both shit at adding and subtracting numbers, dates, and whatnot... they both can't do basic math, unfortunately.

[–] [email protected] 6 points 1 year ago (2 children)

It's a language model; I don't know why you would expect math. Tell it to output code to perform the math instead; that'll work just fine.

[–] [email protected] 2 points 1 year ago (1 children)

Then it should say so instead of attempting and failing at the one thing computers are supposed to be better than us at.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

Well, if I try to use Photoshop to calculate a polynomial, it's not gonna work all that well either. Right tool for the job and all.

The fact that LLMs are terrible at knowing what they don't know should be well known by now (ironically).

[–] [email protected] 1 points 1 year ago

And if Photoshop had a way to ask it for such, it'd be a mistake.

GPT thinking it knows something and hallucinating is ultimately a bug, not a feature, no matter what the apologists say.

[–] [email protected] 0 points 1 year ago (2 children)

I know. It's still baffling how much it messes up when adding two numbers.

[–] [email protected] 1 points 1 year ago (1 children)

I just asked GPT-4:

What's 7 * 8 divided by 10, to the power of 3?

Its reply:

Let's break this down step by step:

First, multiply 7 and 8 to get 56.

Then, divide 56 by 10 to get 5.6.

Finally, raise 5.6 to the power of 3 (5.6 * 5.6 * 5.6) to get 175.616.

So, 7 * 8 divided by 10, to the power of 3, equals 175.616.

[–] [email protected] 1 points 1 year ago

It's pretty hit or miss, though... I've had lots of good calculations with the odd wrong one sprinkled in, making it unreliable for doing maths. Mostly because it presents every result with absolute certainty.

[–] [email protected] 1 points 1 year ago

It's not baffling at all... It's a language model, not a math robot. It's designed to write English sentences, not to solve math problems.
