this post was submitted on 15 Sep 2024
Data Structures and Algorithms
Why do you believe that?
I already mentioned why! It's a common pitfall. For example, try a large HTTP/2 transfer over a socket where `TCP_NODELAY` is not set (or rather, explicitly unset), and see how the transfer rate is limited because of it.

The only thing that `TCP_NODELAY` does is disable packet batching/merging via Nagle's algorithm. That batching supposedly increases throughput by reducing the volume of redundant header data needed to send small payloads as individual packets, with the tradeoff of higher latency. It's a tradeoff between latency and throughput. I don't see any reason for Nagle's algorithm to lower transfer rates; quite the opposite. In fact, the very few benchmarks I have seen showed exactly that: `TCP_NODELAY` causing a drop in the transfer rate.

There are also articles on the cargo cult behind `TCP_NODELAY`.

But feel free to show your data.
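For reference, here is a minimal sketch (assuming Python's standard `socket` module) of what setting the option actually looks like. `TCP_NODELAY` is a per-socket flag; nothing else about the connection changes:

```python
import socket

# Sketch: explicitly disabling Nagle's algorithm on a TCP socket.
# The option only affects this socket's send path.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Read the option back: a non-zero value means Nagle is now disabled.
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print(nodelay)
sock.close()
```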
You clearly have no idea what ping-pong protocol means.
Okay, so can you explain?
I specifically mentioned HTTP/2 because it should have been easy for everyone to both test and find the relevant info.
But anyway, here is a short explanation, and the curl-library thread where the issue was first encountered.
You should also find plenty of blog posts where an "unexplainable delay", "unexplainable slowness", or "something is stuck" is the premise, and then, after a lot of story development and "suspense", the big reveal comes: it was Nagle's fault.
As with many things in TCP: a technique that may have been useful once ends up proving counterproductive when used with modern protocols, workloads, and networks.
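To make the ping-pong shape concrete: the classic failure mode is Nagle's algorithm on the sender interacting with delayed ACKs on the receiver. The sender withholds a second small segment until the first is ACKed, while the receiver delays that ACK hoping to piggyback it on response data, so both sides sit waiting. Below is a hypothetical sketch over loopback (not the curl case itself, and the stall is not observable on loopback since ACKs arrive immediately) showing the problematic "write, write, read" exchange:

```python
import socket
import threading

# A loopback echo peer: waits for both small writes, then replies ("pong").
def echo_server(srv):
    conn, _ = srv.accept()
    data = b""
    while len(data) < 8:           # wait until both 4-byte writes arrive
        data += conn.recv(1024)
    conn.sendall(data)             # the "pong"
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))         # ephemeral port on loopback
srv.listen(1)
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# With Nagle enabled (this line removed), the second small write below could
# sit in the send buffer on a real link, waiting for the ACK of the first,
# while the peer delays that ACK waiting for data to piggyback it on.
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
cli.connect(srv.getsockname())

cli.sendall(b"ping")               # first small write
cli.sendall(b"pong")               # second small write, back to back

reply = b""
while len(reply) < 8:              # read the full echoed response
    reply += cli.recv(1024)
print(reply)                       # b'pingpong'
cli.close()
srv.close()
```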