(For context, I'm basically asking about Python 3.12: multiprocessing.Pool vs. concurrent.futures.ThreadPoolExecutor...)
Today I read that multiple cores (parallelism) help with CPU-bound operations, while multiple threads (concurrency) are the way to go when the tasks are I/O-bound.
Is this correct? Would anyone care to elaborate for me?
At least from a theoretical standpoint. Of course, most real work is a mix of both, and I'd better start by profiling where the bottlenecks really are.
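To make my question concrete, this is roughly how I picture the two cases (an untested sketch, the function names and file names are made up by me):

```python
from concurrent.futures import ThreadPoolExecutor
from multiprocessing import Pool


def cpu_heavy(n):
    # Pure computation: burns CPU and holds the GIL the whole time.
    return sum(i * i for i in range(n))


def io_heavy(path):
    # Mostly waiting on the disk: the GIL is released during the read.
    with open(path, "rb") as f:
        return len(f.read())


if __name__ == "__main__":
    # CPU-bound -> separate processes, one per core, sidestepping the GIL.
    with Pool() as pool:
        squares = pool.map(cpu_heavy, [10_000_000] * 4)

    # I/O-bound -> threads are enough, since they overlap the waiting.
    with ThreadPoolExecutor(max_workers=4) as ex:
        sizes = list(ex.map(io_heavy, ["a.bin", "b.bin", "c.bin", "d.bin"]))
```

Is that mental model right, or am I oversimplifying?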
In case a concrete "algorithm" helps: say I have a function that applies a map-reduce strategy, reading data chunks from a file on disk, computing some averages from that data, and saving the results to a new file.
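Something like this is what I have in mind (again just a sketch with assumptions of mine: one number per line in "input.txt", a made-up chunk size, and multiprocessing.Pool doing the "map" step):

```python
from multiprocessing import Pool

CHUNK_ROWS = 100_000  # arbitrary chunk size, just for illustration


def read_chunks(path):
    # Reading from disk is the I/O-bound part.
    chunk = []
    with open(path) as f:
        for line in f:
            chunk.append(line)
            if len(chunk) >= CHUNK_ROWS:
                yield chunk
                chunk = []
    if chunk:
        yield chunk


def chunk_stats(lines):
    # "Map" step: parsing and summing is the CPU-bound part.
    values = [float(line) for line in lines]
    return sum(values), len(values)


if __name__ == "__main__":
    with Pool() as pool:
        partials = pool.map(chunk_stats, read_chunks("input.txt"))

    # "Reduce" step: combine per-chunk sums and counts into one average.
    total = sum(s for s, _ in partials)
    count = sum(c for _, c in partials)

    with open("average.txt", "w") as out:
        out.write(f"{total / count}\n")
```

Would a Pool make sense here, or is the disk read going to dominate and make ThreadPoolExecutor the better fit?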