this post was submitted on 24 Apr 2024
61 points (98.4% liked)

Programmer Humor


Hey folks! I think this request is right up this comm's alley. I'm sure we all know bogo sort, but what other terrible/terribly inefficient algorithms, software architectures, or design choices have you been horrified/amused by?

I, sadly, lost a great page of competing terrible sorting algorithms, but I'll lead with JDSL as a terrible (and terribly inefficient) software architecture and design. The TL;DR is that a fresh CS guy got an internship at a company that based its software offering on a custom JSON-based DSL, which used an SVN repo to store every function in a different commit. The poor intern had a bad time after attempting to add comments to the code, which resulted in customer data loss.

top 21 comments
[–] [email protected] 34 points 7 months ago (1 children)

No, I won't give you my github.

[–] [email protected] 23 points 7 months ago

That's ok. I've only just met you and am not sure that I'm ready to commit.

[–] [email protected] 25 points 7 months ago (1 children)
[–] [email protected] 9 points 7 months ago

That's incredible.

[–] [email protected] 15 points 7 months ago (1 children)
[–] [email protected] 1 points 7 months ago

Great. Time to put that into practice.

[–] [email protected] 14 points 7 months ago
[–] sus 12 points 7 months ago* (last edited 7 months ago)

slow inverse square root:

#include <math.h>    /* needed for sin, log, pow and NAN */

float slowinvsqrt(float x)
{
    const long accuracy = 100000000;    // larger number = better accuracy
    if (x <= 0.0f) {
        return NAN;
    }
    if (x == 1.0f) {
        return 1.0f;
    }
    if (x < 1.0f) {
        return 1.0f / slowinvsqrt(1.0f / x);
    }

    // Iterating sin() from 1.0 decays towards 0 roughly like sqrt(3/i) after
    // i iterations, so sampling the sequence at i = x^(m-1) and i = x^m and
    // taking the ratio of the two samples leaves approximately 1/sqrt(x).
    int max_power = log(accuracy) / log(x);
    long pow1 = pow(x, max_power - 1);
    long pow2 = pow(x, max_power);
    double current = 1.0;
    double previous = 1.0;

    for (long i = 0; i < 10 * accuracy; i++) {
        current = sin(current);
        if (i == pow1) {
            previous = current;
        }
        if (i == pow2) {
            return current / previous;
        }
    }
    return NAN;    // should be unreachable, but keeps the compiler happy
}
[–] Corbin 8 points 7 months ago

There are subfields of computer science dedicated to this question. A good starting point for the theory would be Pessimal algorithms and simplexity analysis, which lays out two concepts:

  • The time & space simplexity of an algorithm indicates best-case lower bounds on resource usage, and
  • An algorithm is pessimal if no equivalent algorithm wastes more time/space/etc.

For example, common folklore is that sorting has O(n lg n) time complexity, depending on assumptions. In the paper, they give that sorting has Ω(n ** (lg n / 2)) time simplexity; any algorithm which takes more time, like bogosort, must do so through some sort of trickery like non-determinism or wasting time via do-nothing operations.
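For concreteness, this is roughly what that non-deterministic time-wasting looks like in bogosort (a from-memory C sketch, not from the paper): keep shuffling until the array happens to be sorted, for an expected O(n * n!) runtime.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Returns 1 if the array is in non-decreasing order. */
static int is_sorted(const int *a, int n)
{
    for (int i = 1; i < n; i++)
        if (a[i - 1] > a[i])
            return 0;
    return 1;
}

/* Fisher-Yates shuffle: the source of the non-determinism. */
static void shuffle(int *a, int n)
{
    for (int i = n - 1; i > 0; i--) {
        int j = rand() % (i + 1);
        int tmp = a[i];
        a[i] = a[j];
        a[j] = tmp;
    }
}

int main(void)
{
    int a[] = { 5, 3, 1, 4, 2 };
    int n = sizeof a / sizeof a[0];

    srand((unsigned)time(NULL));
    while (!is_sorted(a, n))    /* shuffle until luck strikes */
        shuffle(a, n);

    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}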

[–] tatterdemalion 7 points 6 months ago

πfs: The Data-Free Filesystem!

[–] [email protected] 7 points 7 months ago (1 children)
[–] [email protected] 5 points 7 months ago (1 children)

"EvErYtHiNg ShOuLd Be A mIcRo SeRvIcE" --Executive Who Doesn't Have to Maintain Said Microservices

[–] sirdorius 7 points 7 months ago* (last edited 7 months ago) (1 children)

I unironically had a screening interview with a recruiter who asked, "If you were creating a startup, would you use microservices?" She didn't like that my answer was "It depends, I don't have enough information to answer".

[–] Redkey 8 points 7 months ago

"If you were making food, would you use onion powder?"

[–] BurningTurtle 6 points 7 months ago (1 children)
[–] yogsototh 3 points 7 months ago (1 children)

Technically, sleep sort is O(n), so it's faster than the theoretically optimal sorting algorithm's O(n log n) ... not so bad ;)
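For anyone who hasn't seen it, a minimal sleep sort sketch in C (my illustration, not from the comment above): spawn one thread per value and have it sleep proportionally to the value before printing it, so scheduling jitter can still misorder nearby values.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Each thread sleeps 0.1 s per unit of its value, then prints it. */
static void *sleeper(void *arg)
{
    int value = *(int *)arg;
    usleep(value * 100000);
    printf("%d\n", value);
    return NULL;
}

int main(void)    /* link with -lpthread */
{
    int values[] = { 3, 1, 4, 1, 5, 9, 2, 6 };
    int n = sizeof values / sizeof values[0];
    pthread_t threads[sizeof values / sizeof values[0]];

    for (int i = 0; i < n; i++)
        pthread_create(&threads[i], NULL, sleeper, &values[i]);
    for (int i = 0; i < n; i++)    /* values print in (roughly) sorted order */
        pthread_join(threads[i], NULL);
    return 0;
}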

[–] [email protected] 3 points 7 months ago

Sleep sort is kind of like count sort but with added overhead. Both have a complexity of O(n+m) with n being the length and m the value of the max element.

The optimum of O(n * log n) is based on the assumption that the only information about the values is obtained by comparison. If you operate on integer keys, I would recommend radix sort. It has a complexity of O(n * w), where w is the length (read: logarithm) of the key.
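A minimal counting sort sketch in C to make the O(n + m) shape concrete (my illustration; it assumes small non-negative integer keys below MAX_VALUE):

#include <stdio.h>
#include <string.h>

#define MAX_VALUE 10    /* m: exclusive upper bound on the keys (assumption) */

/* O(n + m): tally each key, then write the keys back in order. No comparisons. */
static void counting_sort(int *a, int n)
{
    int counts[MAX_VALUE];
    memset(counts, 0, sizeof counts);

    for (int i = 0; i < n; i++)
        counts[a[i]]++;
    for (int v = 0, i = 0; v < MAX_VALUE; v++)
        while (counts[v]-- > 0)
            a[i++] = v;
}

int main(void)
{
    int a[] = { 3, 1, 4, 1, 5, 9, 2, 6 };
    int n = sizeof a / sizeof a[0];

    counting_sort(a, n);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}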

[–] [email protected] 5 points 7 months ago (1 children)

Sounds like you should look at a few years of https://thedailywtf.com entries. Enough to make the staunchest man (or woman) weep.

[–] Mesa 5 points 6 months ago

https://thedailywtf.com/articles/gotta-catch-em-all

Dear God.

try
{
    /* ... some important code ... */
} 
catch (OutOfMemoryException exception)
{
    Global.Insert("App.GetSettings;", exception.Message);
}
catch (OverflowException exception)
{
    Global.Insert("App.GetSettings;", exception.Message);
}
catch (InvalidCastException exception)
{
    Global.Insert("App.GetSettings;", exception.Message);
}
catch (NullReferenceException exception)
{
    Global.Insert("App.GetSettings;", exception.Message);
}
catch (IndexOutOfRangeException exception)
{
    Global.Insert("App.GetSettings;", exception.Message);
}
catch (ArgumentException exception)
{
    Global.Insert("App.GetSettings;", exception.Message);
}
catch (InvalidOperationException exception)
{
    Global.Insert("App.GetSettings;", exception.Message);
}
catch (XmlException exception)
{
    Global.Insert("App.GetSettings;", exception.Message);
}
catch (IOException exception)
{
    Global.Insert("App.GetSettings;", exception.Message);
}
catch (NotSupportedException exception)
{
    Global.Insert("App.GetSettings;", exception.Message);
}
catch (Exception exception)
{
    Global.Insert("App.GetSettings;", exception.Message);
}
[–] [email protected] 4 points 7 months ago

I can't find the name/source at the moment, but if you enumerate all Turing machines and run them concurrently*, you will find the optimal algorithm for your problem in O(1) and execute it.
To my knowledge the algorithm is so inefficient on small inputs that it takes hours to solve integer addition.

* You run the first Turing machine one step, then the first two one additional step, then the first three, and so on. This allows you to run an unlimited number of TMs for an unlimited number of steps.
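A toy C sketch of that dovetailing schedule (purely illustrative; step() is a hypothetical placeholder for advancing one machine by a single step):

#include <stdio.h>

/* Hypothetical placeholder: advance Turing machine `machine` by one step. */
static void step(int machine)
{
    printf("stepping machine %d\n", machine);
}

int main(void)
{
    /* Round r gives machines 0..r one more step each, so every machine
       eventually receives arbitrarily many steps. The real scheme never
       stops; 5 rounds here just show the order of execution. */
    for (int round = 0; round < 5; round++)
        for (int machine = 0; machine <= round; machine++)
            step(machine);
    return 0;
}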
