[–] [email protected] 105 points 1 year ago (5 children)

Makes sense, cause double can represent way bigger numbers than integers.

[–] lysdexic 33 points 1 year ago (1 children)

Also, double can and does in fact represent integers exactly.

[–] [email protected] 20 points 1 year ago (1 children)

Only up to 2^53. A long can represent more integers exactly than that, but a double can and does represent every int value correctly.

[–] [email protected] -1 points 1 year ago (2 children)

*long long, if we're gonna be talking about C types. A long is commonly limited to 32 bits.

[–] [email protected] 16 points 1 year ago

C is irrelevant because this post is about Java and in Java long is 64 bits.

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago)

You should never be using these types in C anyway; (u?)int(8/16/32/64)_t are way more sane.

[–] [email protected] 27 points 1 year ago

Also because if you are dealing with a double, then you're probably dealing with multiple doubles, or doing math that may produce a double. So returning a double just saves some effort.

[–] [email protected] 9 points 1 year ago

Yeah it makes sense to me. You can always cast it if you want an int that bad. Hell just wrap the whole function with your own if it means that much to you

(Not you, but like a hypothetical person)
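
For example, a minimal sketch of that "cast it / wrap it yourself" idea (the helper name ceilToInt is made up here, not anything standard):

```java
public final class CeilExample {
    // Hypothetical wrapper: cast Math.ceil's double result back to an int.
    // Only safe while the result actually fits in an int.
    static int ceilToInt(double value) {
        return (int) Math.ceil(value);
    }

    public static void main(String[] args) {
        System.out.println(Math.ceil(3.2)); // 4.0 (a double)
        System.out.println(ceilToInt(3.2)); // 4   (an int)
    }
}
```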

[–] [email protected] 1 points 1 year ago (1 children)

How does that work? Is it just because double uses more bits? I'd imagine for the same number of bits, you can store more ints than doubles (assuming you want the ints to be exact values).

[–] [email protected] 3 points 1 year ago (1 children)
[–] [email protected] 5 points 1 year ago* (last edited 1 year ago) (6 children)

No, I get that. I'm sure the programming language design people know what they are doing. I just can't grasp how a double (which has to use at least 1 bit to represent whether or not there is a fractional component) can possibly store more exact integer values than an integer type of the same length (same number of bits).

It just seems to violate some law of information theory to my novice mind.

[–] [email protected] 8 points 1 year ago (1 children)

It doesn't. A double is a 64 bit value while an integer is 32 bit. A long is a 64 bit signed integer which stores more exact integer numbers than a double.

[–] LeFantome 1 points 1 year ago* (last edited 1 year ago)

Technically, a double stores most integers exactly (up until a certain value) and then approximations of integers of much larger sizes. A long stores all its integers exactly but cannot handle values nearly as large.

For most real world data ranges, they are both going to store integers exactly.
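
A quick sketch of that cutoff in Java: every integer up to 2^53 survives a round trip through a double, but right above it the value gets rounded to the nearest representable double.

```java
public class DoublePrecision {
    public static void main(String[] args) {
        long exact = 1L << 53;        // 9007199254740992: still exact as a double
        long lost  = (1L << 53) + 1;  // 9007199254740993: no longer representable

        System.out.println((long) (double) exact); // 9007199254740992
        System.out.println((long) (double) lost);  // 9007199254740992 — rounded to the nearest double
    }
}
```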

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago) (1 children)

It doesn't store more values bit for bit, but it can store larger values.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

I don't think that's possible. Representing more exact ints means representing larger ints and vice versa. I'm ignoring signed vs. unsigned here as in theory both the double and int/long can be signed or unsigned.

Edit: ok, I take this back. I guess you can represent larger values as long as you are ok that they will be estimates. I.e., double of N (for some very large N) will equal double of N + 1.
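
Pretty much; a tiny Java check of exactly that:

```java
public class Neighbours {
    public static void main(String[] args) {
        long n = Long.MAX_VALUE - 1;
        // Both n and n + 1 round to the same double, so the casts compare equal.
        System.out.println((double) n == (double) (n + 1)); // true
    }
}
```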

[–] [email protected] 1 points 1 year ago

Oh now I get what you mean, and like others mentioned, yeah it's more bits :)

[–] [email protected] 1 points 1 year ago (1 children)

I would need to look into the exact difference of double vs integer to know, but a partially educated guess is that they are referring to Int32 vs double and not Int64, aka long. I did a small search and saw that double uses 32 bits for the whole numbers and the others for the decimal.

[–] [email protected] 1 points 1 year ago (1 children)

Yeah, that was my guess too. But that just means they could return a long (or whatever the 64 bit int equivalent in Java is) instead of an int.

[–] [email protected] 1 points 1 year ago

Okay, so I dug in a bit deeper. Doubles are standardized as a 64 bit bundle that is divided into 1 sign bit, 11 exponent bits, and 52 bits for the fraction. It's quite interesting. As to how it works in depth, I'll probably try to analyze a bit conversion if I can.
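
If you want to poke at that layout, here's a rough sketch using Double.doubleToLongBits to pull the three fields out of a double:

```java
public class DoubleBits {
    public static void main(String[] args) {
        long bits = Double.doubleToLongBits(-6.25);

        long sign     = (bits >>> 63) & 0x1;       // 1 sign bit
        long exponent = (bits >>> 52) & 0x7FF;     // 11 exponent bits (biased by 1023)
        long fraction = bits & 0xFFFFFFFFFFFFFL;   // 52 fraction bits

        System.out.println("sign     = " + sign);              // 1 (negative)
        System.out.println("exponent = " + (exponent - 1023)); // 2, since -6.25 = -1.5625 * 2^2
        System.out.println("fraction = 0x" + Long.toHexString(fraction)); // 9000000000000
    }
}
```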

[–] towerful 1 points 1 year ago

I'm going to guess here (cause I feel this community is for learning)...
Integers have exactness. Doubles have range.
So if MAX_INT + 1 is possible, then ~(MAX_INT + 1) is probably preferable to an overflow or a silent wrap to MIN_INT.

But Math.ceil probably expects a float, because it is dealing with decimals (or similar). If it were an int, rounding wouldn't be required.
So if Math.ceil returned an integer, it could be passed a float larger than INT_MAX, which would overflow an int (so: error or overflow). Or just return a float.
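
Rough illustration of that in Java (the narrowing cast doesn't error, it just silently clamps):

```java
public class CeilRange {
    public static void main(String[] args) {
        double big = 3.1e9; // larger than Integer.MAX_VALUE (2147483647)

        System.out.println(Math.ceil(big));        // 3.1E9 — fine as a double
        System.out.println((int) Math.ceil(big));  // 2147483647 — silently clamped to Integer.MAX_VALUE
        System.out.println((long) Math.ceil(big)); // 3100000000 — a long still fits
    }
}
```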