this post was submitted on 11 May 2025
757 points (97.7% liked)

Programmer Humor


Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code there's also Programming Horror.

 
top 50 comments
[–] [email protected] 4 points 42 minutes ago

Good luck with your 256 characters.

[–] [email protected] 3 points 1 hour ago

I remember the first time I ran out of inodes: it was very confusing. You just start getting ENOSPC, but df still says you have half the disk space available.
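A minimal way to see the difference is to compare free bytes against free inodes (on the command line, roughly `df -h` versus `df -i`). A Python sketch of the same check using only the standard library, with `/` as an example mount point:

```python
# Sketch: telling "out of inodes" apart from "out of disk space" on a
# POSIX system. "/" is just an example mount point.
import os

st = os.statvfs("/")

bytes_free = st.f_bavail * st.f_frsize   # space available to unprivileged users
inodes_free = st.f_favail                # inodes available to unprivileged users
inodes_total = st.f_files                # total inodes on the filesystem

print(f"free space : {bytes_free / 2**30:.1f} GiB")
print(f"free inodes: {inodes_free} of {inodes_total}")

# ENOSPC while bytes_free is large but inodes_free is 0 is exactly the
# confusing situation described above.
```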

[–] [email protected] 31 points 5 hours ago (3 children)

You want real infinite storage space? Here you go: https://github.com/philipl/pifs
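For the curious: πfs stores a file as nothing but an offset and a length into the digits of π, on the assumption that π contains every finite sequence somewhere. A toy sketch of that lookup, using decimal digits from mpmath rather than the BBP hex-digit scheme the real πfs uses; the digit count and sample payload are arbitrary, and anything beyond a byte or two quickly becomes infeasible to find.

```python
# Toy version of the pifs idea: a "file" is (offset, length) into the
# digits of pi. Decimal digits via mpmath here; the real pifs computes
# hexadecimal digits with the BBP formula instead.
from mpmath import mp

mp.dps = 100_000                                   # digits of pi to search (arbitrary)
PI_DIGITS = mp.nstr(+mp.pi, mp.dps).replace(".", "")

def store(payload: bytes):
    """Return (offset, digit_count) where the payload appears in pi, or None."""
    needle = "".join(f"{b:03d}" for b in payload)  # 3 decimal digits per byte
    offset = PI_DIGITS.find(needle)
    return (offset, len(needle)) if offset != -1 else None

def load(offset: int, digit_count: int) -> bytes:
    digits = PI_DIGITS[offset:offset + digit_count]
    return bytes(int(digits[i:i + 3]) for i in range(0, digit_count, 3))

# Each extra byte multiplies the expected search depth by ~1000, so stick
# to a single byte for the demo.
loc = store(b"\x2a")
print(loc, load(*loc) if loc else None)
```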

[–] [email protected] 3 points 2 hours ago* (last edited 2 hours ago)

That's awesome! I'm just migrating all my data to πfs. Finally mathematics is put to a proper use!

[–] [email protected] 1 points 1 hour ago

Breakthrough vibes

[–] [email protected] 37 points 7 hours ago (7 children)

I had a manager once tell me during a casual conversation with complete sincerity that one day with advancements in compression algorithms we could get any file down to a single bit. I really didn't know what to say to that level of absurdity. I just nodded.

[–] [email protected] 2 points 26 minutes ago

It's an interesting question, though. How far CAN you compress? At some point you've extracted all the information the file contains and pushed the density to its maximum - but what is that density?
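The standard answer is Shannon entropy: for a given source model, the entropy is the average number of bits per symbol below which no lossless code can go. A rough sketch that treats a file as independent bytes; this is only an order-0 estimate, and real compressors that model longer-range structure can beat it, but it puts a concrete number on that density:

```python
# Order-0 Shannon entropy of a file: roughly the best bits/byte achievable
# by any code that treats the bytes as independent, identically
# distributed symbols.
import math
import sys
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

data = open(sys.argv[1], "rb").read()   # pass any file path as the argument
h = entropy_bits_per_byte(data)
print(f"{h:.3f} bits/byte (~{h / 8:.1%} of the original size at this model's limit)")
```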

[–] [email protected] 17 points 3 hours ago* (last edited 3 hours ago)

That's the kind of manager who also tells you that you just lack creativity and vision if you tell them it's not possible. They also post regularly on LinkedIn.

[–] [email protected] 7 points 3 hours ago

You can have everything in a single bit, if the decompressor includes the whole universe

[–] [email protected] 7 points 3 hours ago

Send him your work: 1 (or 0 ofc)

[–] [email protected] 3 points 4 hours ago

Just make a file system that maps each file name to 2 files. The 0 file and the 1 file.

Now with just a filename and 1 bit, you can have any file! The file is just 1 bit. It's the filesystem that needs more than that.

[–] [email protected] 6 points 7 hours ago

That’s precisely when you bet on it.

[–] [email protected] 1 points 5 hours ago

Let me guess, over 30 years old.

[–] [email protected] 15 points 11 hours ago (1 children)

It's like that chiptune webpage where the entire track is encoded in the URL.

[–] [email protected] 9 points 9 hours ago (2 children)
[–] [email protected] 8 points 7 hours ago

Are you trying to get rickrolled?

[–] [email protected] 146 points 22 hours ago (1 children)

If you have a tub full of water and take a sip, you still have a tub full of water. Therefore only drink in small sips and you will have infinite water.

Water shortage is a scam.

[–] [email protected] 13 points 19 hours ago (3 children)

There is a water shortage?

[–] [email protected] 19 points 19 hours ago
[–] [email protected] 35 points 19 hours ago* (last edited 19 hours ago) (3 children)

Stupid, BUT: making the font in LibreOffice bigger saves space. So size 11 is readable, but changing the font size to something like 500 can save some MB per page.
I don't know how it works, I just noticed it at some point.

Edit: I think it was KB, not MB.

[–] [email protected] 10 points 7 hours ago

Have a macro that decreases all font size on opening and then increases all again before closing.

Follow me irl for more compression techniques.

[–] [email protected] 17 points 10 hours ago

per page

I mean, yes. Obviously.

If you had 1000 bytes of text on 1 page before, you now have 1 byte per page on 1000 pages afterwards.

[–] [email protected] 6 points 11 hours ago

You could always diff the XML before and after to see what's causing it.

[–] [email protected] 40 points 21 hours ago (5 children)
[–] [email protected] 1 points 3 hours ago

Nice read, thanks!

[–] [email protected] 2 points 7 hours ago (1 children)

I was sort of on Mike Goldman's (the challenge giver's) side until I saw the great point made at the end: the entire challenge was akin to a bar-room bet. Goldman had set it up as a kind of scam from the start and was clearly more than happy to take $100 from anyone who fell for it, so he should have taken responsibility when someone managed to meet the wording of his challenge.

[–] [email protected] 1 points 2 hours ago

Yeah, he was bamboozled as soon as he agreed to allow multiple separate files. The challenge was bs from the start, but he could have at least nailed it down with more explicit language and by forbidding any exceptions. I think it's kind of ironic that the instructions for a challenge related to different representations of information failed themselves to actually convey the intended information.

[–] ulterno 4 points 13 hours ago

Nice stuff.

I got sold on this:

EOF does not consume less space than "5"

because, even though the space taken by the filesystem is the fault of the filesystem, one needs to consider the minimum information requirements of stating the starts and ends of files, especially when stuff is split into multiple files.

I would have actually counted the file-size information as part of the file size (for both the input and the output), because a binary file can contain a string of bits that happens to match an EOF marker, falsely ending the file, which would be a problem. That's why the contestant didn't go checking for character == EOF, but used the function that truly tells whether the end of file has been reached, which in turn relies on the filesystem's file-size information.

Since the input file was 3,145,728 bytes and the output files would have been smaller than that, I would go with 22 bits to store the file-size information. This would be in favour of the contestant as:

  1. That would be the minimum number of bits required to store the file size, making it as easy as possible for the contestant to make more files
  2. You could actually go with 2 bits, if you predefine MiB as the unit, but that would make it harder for the contestant, because they would be unable to represent file sizes of less than 1 MiB and would have to increase the file-size information bits

On the other hand, had the contestant decided to split the file at arbitrary bit boundaries instead of at byte boundaries (which, from the code, I think they didn't), the file-size information would require an additional 3 bits.


Now, using this logic, if I check the result:

From the result claimed by the contestant, there were 44 extra bytes (352 bits) remaining.

+22 bits for the input file-size information, −22 × 219 bits for the output file-size information (219 files)

so the contestant succeeds by 352 + 22 − (22 × 219) = −4444 bits. In other words, fails by 4444 bits.

Now of course, the output file-size information might be representable in a smaller number of bits, but to calculate that I would have to download the files (which I am not in the mood for).
And in that case, you would need additional information to say how many file-size bits there are. So:

  • 5 bits for the number 22 in the input
  • 5 bits for the size of the file-size information (I have a feeling this won't give significant gains), followed by that many bits as the file size itself
    • you waste bits for every file whose size requires more than 16 bits of file-size information
    • it is possible to get a net gain with this, as qalc says, log(3145728 / 219, 2) = (ln(1048576) − ln(73)) / ln(2) ≈ 13.81017544

But even then, you have 352 + 5 + 22 − (5 + (14 × 219)) = −2692 for the best-case scenario, in which every output file size manages to fit in 14 bits of file-size information. More realistically, it would be something around 352 + 5 + 22 − ((5 + 14) × 219) = −3782, because you need the 5 bits for every file separately, while the 14 is a per-file value that might be smaller for some files.
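A quick sanity check of that arithmetic, as a sketch; the 3 MiB input, 219 output files and 44 spare bytes are the figures quoted above:

```python
# Re-running the bit accounting from this comment.
import math

INPUT_SIZE = 3 * 1024 * 1024      # 3145728 bytes
FILES = 219                       # output files in the contestant's submission
SPARE_BITS = 44 * 8               # 352 bits "saved"

size_field = math.ceil(math.log2(INPUT_SIZE))          # 22 bits for a fixed-width length
print("fixed 22-bit length fields :", SPARE_BITS + size_field - size_field * FILES)    # -4444

# Variable-width lengths: a 5-bit field first says how many size bits follow.
print("average output size in bits:", math.log2(INPUT_SIZE / FILES))                   # ~13.81 -> 14
print("variable width, best case  :", SPARE_BITS + 5 + size_field - (5 + 14 * FILES))  # -2692
print("variable width, per file   :", SPARE_BITS + 5 + size_field - (5 + 14) * FILES)  # -3782
```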


If instead you go with the naive 8-bit EOF that the offerer desired, then using 2 consecutive characters instead of a single one seems doable, as long as you can find enough occurrences of those 2 characters.
After a little Google search, I reckon that in a 3 MiB file there would be either 47 or 383 (depending upon which of my formulae was correct) possible occurrences of a given 2-character combination. Well, you'd still need to find the right combination.

But of course, as I said before, that's not exactly compression for a binary file, since an EOF marker is not good enough.

[–] [email protected] 76 points 23 hours ago (1 children)

It's all fun and games until your computer turns into a black hole because there is too much information in too little of a volume.

[–] [email protected] 36 points 22 hours ago (4 children)

Even better! According to the no-hiding theorem, you can't destroy information. With black holes you might even be able to recover the data as it leaks out through Hawking radiation.
Perfect for long-term storage.

[–] [email protected] 27 points 19 hours ago (1 children)

Can't wait to hear news about a major site leaking user passwords through Hawking radiation.

[–] [email protected] 5 points 16 hours ago

i love this comment

[–] [email protected] 112 points 1 day ago* (last edited 1 day ago) (2 children)

Awesome idea. In base 64 to deal with all the funky characters.

It will be really nice to browse this filesystem...

[–] [email protected] 83 points 1 day ago

The design is very human

[–] [email protected] 91 points 1 day ago (3 children)

Broke: file names have a max character length.

Woke: split b64-encoded data into numbered parts and add .part-1..n suffix to each file name.
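Taking that joke literally, a sketch in Python: the data lives entirely in the names of empty files, base64url-encoded and chunked to stay under typical 255-byte filename limits. The directory, chunk size and `.part-N` suffix format here are made up for illustration.

```python
# "Store" a blob entirely in file names: zero-byte files whose names are
# base64url chunks with a .part-N suffix. Strictly a joke filesystem.
import base64
import os
import re

CHUNK = 200   # stay well below the usual 255-byte filename limit

def save(blob: bytes, directory: str) -> None:
    os.makedirs(directory, exist_ok=True)
    encoded = base64.urlsafe_b64encode(blob).decode("ascii")
    for i in range(0, len(encoded), CHUNK):
        name = f"{encoded[i:i + CHUNK]}.part-{i // CHUNK + 1}"
        open(os.path.join(directory, name), "w").close()   # empty file; the name is the data

def load(directory: str) -> bytes:
    parts = {}
    for name in os.listdir(directory):
        m = re.fullmatch(r"(.+)\.part-(\d+)", name)
        if m:
            parts[int(m.group(2))] = m.group(1)
    encoded = "".join(parts[i] for i in sorted(parts))
    return base64.urlsafe_b64decode(encoded)

save(b"infinite storage, zero bytes used", "/tmp/filename-fs")
print(load("/tmp/filename-fs"))
```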

[–] [email protected] 2 points 11 hours ago* (last edited 11 hours ago)

Browse your own machine as if it's under alt.film.binaries but more so

[–] [email protected] 9 points 18 hours ago

I'd go with a prefix, so it's ls-friendly.
