this post was submitted on 01 Dec 2023
48 points (98.0% liked)

Learn Programming


It's about asking, "how does this algorithm behave when the number of elements is large, compared to when the number of elements is orders of magnitude larger?"

Big O notation is useless for small data sets. Sometimes it's worse than useless; it's misleading. This is because Big O only describes asymptotic behavior. An algorithm that is O(n^2) can be faster than one that's O(n log n) on smaller data sets (which seems to contradict the graph below) if the O(n log n) algorithm carries significant constant overhead and doesn't start behaving as its Big O classification predicts until the input is large enough to outweigh that overhead.
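For a rough, throwaway illustration (not a rigorous benchmark, and the exact numbers will vary by machine and interpreter), here is a Python sketch that times a textbook O(n^2) insertion sort against a textbook O(n log n) merge sort on 10 elements. The function names and repetition count are just choices made for this sketch; on many machines the quadratic sort wins at this size because the recursive merge sort pays slicing, allocation, and call overhead it can't amortize over so few elements.

```python
# Throwaway comparison: O(n^2) insertion sort vs O(n log n) merge sort on a
# tiny input. Results depend on the machine and interpreter.
import random
import timeit


def insertion_sort(items):
    a = list(items)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a


def merge_sort(items):
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged


data = [random.random() for _ in range(10)]  # tiny input: n = 10
print("insertion sort:", timeit.timeit(lambda: insertion_sort(data), number=100_000))
print("merge sort:    ", timeit.timeit(lambda: merge_sort(data), number=100_000))
```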

#computerscience

Image Alt Text:

"A graph of Big O notation time complexity functions with Number of Elements on the x-axis and Operations(Time) on the y-axis.

Lines on the graph represent Big O functions which are overlaid onto color-coded regions, where colors represent quality from Excellent to Horrible.

Functions on the graph:
O(1): constant - Excellent/Best - Green
O(log n): logarithmic - Good/Excellent - Green
O(n): linear time - Fair - Yellow
O(n * log n): log linear - Bad - Orange
O(n^2): quadratic - Horrible - Red
O(n^3): cubic - Horrible (Not shown)
O(2^n): exponential - Horrible - Red
O(n!): factorial - Horrible/Worst - Red"

Source

[–] noli 11 points 1 year ago

While I get your point, you're still slightly misguided.

Sometimes, for a smaller dataset, an algorithm with worse asymptotic complexity can be faster.

Some examples:

  • Radix sort's complexity is linear, so why would most people still want to use e.g. quicksort? Because for relatively small datasets, the overhead of radix sort outweighs the gain from being asymptotically faster.
  • One of the most common and well-known optimizations for quicksort is to switch over to insertion sort when subarray sizes drop below a certain threshold. This is because for small datasets (I'm talking e.g. 10 elements) insertion sort is just objectively faster; see the sketch after this list.
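
As a minimal sketch of that switchover idea (not a tuned, production-grade sort): a quicksort that hands any subarray of 10 or fewer elements to insertion sort instead of recursing further. The cutoff of 10, the names hybrid_quicksort and insertion_sort_range, and the middle-element pivot are all illustrative choices, not what any particular standard library actually does.

```python
import random

CUTOFF = 10  # assumed threshold; real libraries tune this empirically


def insertion_sort_range(a, lo, hi):
    """Sort a[lo..hi] (inclusive) in place with insertion sort."""
    for i in range(lo + 1, hi + 1):
        key = a[i]
        j = i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key


def hybrid_quicksort(a, lo=0, hi=None):
    """Sort the list a in place, switching to insertion sort on small subarrays."""
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        if hi - lo + 1 <= CUTOFF:      # small subarray: insertion sort wins here
            insertion_sort_range(a, lo, hi)
            return
        pivot = a[(lo + hi) // 2]      # simple middle-element pivot
        i, j = lo, hi
        while i <= j:                  # Hoare-style partition around the pivot
            while a[i] < pivot:
                i += 1
            while a[j] > pivot:
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1
                j -= 1
        hybrid_quicksort(a, lo, j)     # recurse into the left part...
        lo = i                         # ...and loop on the right part


xs = [random.randint(0, 999) for _ in range(1000)]
hybrid_quicksort(xs)
assert xs == sorted(xs)
```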

Big O notation only keeps the fastest-growing term and drops constant factors, so in some cases it is still important to consider those lower-order terms and constants. Assume the theoretical operation count for an algorithm A is 2nlog(n) + 999999999n and for algorithm B it is n^2 + 7n. Clearly B will be faster for any n you are realistically going to see (here, anything up to roughly 10^9), even though B is O(n^2) and A is O(n log n).
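
To make that concrete, here is a quick arithmetic check in Python (not a benchmark): it just evaluates the two hypothetical cost functions above at a few sizes. Log base 2 and the names cost_a and cost_b are assumptions made for this sketch; the conclusion doesn't change with another log base.

```python
from math import log2


def cost_a(n):
    return 2 * n * log2(n) + 999_999_999 * n   # the O(n log n) algorithm A


def cost_b(n):
    return n ** 2 + 7 * n                      # the O(n^2) algorithm B


for n in (10, 1_000, 1_000_000, 1_000_000_000, 2_000_000_000):
    faster = "B" if cost_b(n) < cost_a(n) else "A"
    print(f"n = {n:>13,}  ->  faster by this model: {faster}")

# B stays ahead until n approaches ~10^9, where the quadratic term finally
# overtakes A's enormous linear constant.
```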

Sorting is actually a great example of why you should always consider what your data looks like before deciding which algorithm to use, which was one of the biggest takeaways from my data structures & algorithms class.

This YouTube channel also has a fairly nice three-part series on sorting algorithms: https://youtu.be/_KhZ7F-jOlI?si=7o0Ub7bn8Y9g1fDx