[–] [email protected] 3 points 10 months ago* (last edited 10 months ago)

That's precisely what I was talking about. I basically said it's better to split an application by type of parallelism (CPU-bound vs. I/O-bound) than to mix them.

An I/O-heavy service benefits from having lots of available threads mapped to a smaller number of CPU cores, whereas a calculation-heavy service benefits from pinning threads to cores to limit context switching. So scaling each looks quite different.
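Here's a minimal sketch of that split using Python's `concurrent.futures`; `fetch` and `crunch` are hypothetical stand-ins for real I/O and compute work, and the URL and sizing numbers are placeholders:

```python
import os
import urllib.request
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def fetch(url: str) -> bytes:
    # I/O-bound stand-in: the thread mostly sleeps waiting on the
    # network, so many threads can happily share a few cores.
    with urllib.request.urlopen(url) as resp:
        return resp.read()

def crunch(n: int) -> int:
    # CPU-bound stand-in: holds the GIL the whole time, so real
    # parallelism needs separate processes, roughly one per core.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Many threads per core for I/O work...
    with ThreadPoolExecutor(max_workers=64) as io_pool:
        pages = list(io_pool.map(fetch, ["https://example.com"] * 8))

    # ...but about one worker per core for compute work.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as cpu_pool:
        results = list(cpu_pool.map(crunch, [10_000_000] * 4))
```

Tuning the two pools is independent: you can crank `max_workers` on the I/O side without oversubscribing the CPU side.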

If you split them into separate processes (one for I/O and one for compute), it's much easier to scale them independently (more machines and whatnot). If they're combined, you have to continually balance how cores are split between the two concerns, and you don't have as much control over the types of cores (I/O is happy with lots of generic cores, whereas compute benefits from specialized instructions).
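To make the boundary concrete, here's a toy sketch using `multiprocessing` as a stand-in; in a real system the two sides would be separate services talking over a message queue or RPC rather than an in-process `Queue`:

```python
from multiprocessing import Process, Queue

def compute_worker(jobs: Queue, results: Queue) -> None:
    # The compute side of the boundary: pulls work across, crunches,
    # sends answers back. It can be scaled and placed independently.
    while (n := jobs.get()) is not None:
        results.put(sum(i * i for i in range(n)))

if __name__ == "__main__":
    jobs, results = Queue(), Queue()
    worker = Process(target=compute_worker, args=(jobs, results))
    worker.start()

    # The "I/O side": enqueue requests, collect answers. In production
    # this would be, e.g., a web server handing work to another service.
    for n in (1_000, 2_000, 3_000):
        jobs.put(n)
    print([results.get() for _ in range(3)])

    jobs.put(None)  # sentinel: tell the worker to shut down
    worker.join()
```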

So that's my practical application of the "separate thread pools" idea: splitting thread pools at the process boundary is usually useful as an application grows in complexity. It increases latency, but it enables other kinds of tuning.