What you describe matches my experience 10 to 15 years ago, not my experience today. Compare the settings dialog in KDE Plasma to the Windows settings dialog, for instance. Or should I say the myriad of Windows settings dialogs.
What was difficult in your experience?
Huh, odd. I guess it depends quite heavily on the system? Just to check, I cleaned my build folder and am building now: ~700 files that take around 5 minutes to compile. I don't notice a thing; the CPU (Ryzen 7 7700X) is fully maxed out. I do notice it on my laptop, but there reducing from 16 threads to 12, or even just 14, is enough. Having to reduce to 4 is very different from what I experience. Currently on Manjaro; the laptop has Ubuntu.
If you don't want compilation to take all cores, use one or two fewer cores for the compile. I frequently compile C++ code, and almost always I just let it max out at 100%; I haven't really been bothered by the lag. When I'm in a Teams meeting, for instance, it can cause noticeable lag, so then I do ninja -j 8 or ninja -j 12 and the problem is solved.
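For reference, a few equivalent ways to cap or deprioritize a build, assuming a CMake/Ninja setup (the numbers naturally depend on your core count):

    ninja -j 8              # cap Ninja at 8 parallel jobs
    cmake --build . -j 8    # the same, going through CMake (3.12+)
    nice -n 19 ninja        # keep all cores, but at the lowest CPU priority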
Cross-platform and performant: are there options besides C++ and Rust?
I was very surprised yesterday to find out that Unreal Engine now offers native Linux builds as well as Linux targets. Works flawlessly, too. So with all the hate Linux seems to get from them, judging by the occasional blog post, they must have devs dedicated to this support.
Turns out you were the hacker all along
Note that the scope of "New Circle" is much bigger than "just" memory safety: choice types, pattern matching, ...
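For anyone who hasn't seen those features: the closest approximation in standard C++ today is std::variant plus std::visit, which is noticeably clunkier than a built-in choice type with match expressions. A rough sketch of the standard-C++ equivalent (the names here are made up for illustration, not Circle syntax):

    #include <iostream>
    #include <string>
    #include <type_traits>
    #include <variant>

    // A closed set of alternatives -- what a choice type expresses directly.
    struct Ok  { int value; };
    struct Err { std::string message; };
    using Result = std::variant<Ok, Err>;

    void report(const Result& r) {
        // std::visit is the verbose stand-in for pattern matching.
        std::visit([](const auto& alt) {
            using T = std::decay_t<decltype(alt)>;
            if constexpr (std::is_same_v<T, Ok>)
                std::cout << "ok: " << alt.value << '\n';
            else
                std::cout << "error: " << alt.message << '\n';
        }, r);
    }

    int main() {
        report(Ok{42});
        report(Err{"file not found"});
    }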
Which problems did you experience?
The ccache folder size started becoming huge, and it just didn't speed up the project's builds; I don't remember the details of why.
This might be the reason ccache only went so far in your projects. Precompiled headers either prevent ccache from working, or require additional tweaks to get around them.
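If memory serves, the ccache docs describe the tweaks: you have to relax its hashing for PCHs and pass an extra compiler flag. Something along these lines (check the docs for your ccache version):

    # ccache.conf (or via the CCACHE_SLOPPINESS environment variable)
    sloppiness = pch_defines,time_macros

    # plus, on the compile line:
    #   GCC:   -fpch-preprocess
    #   Clang: -Xclang -fno-pch-timestamp when creating the PCH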
Right, that might have been the reason.
To each their own, but with C++ projects the only way to avoid lengthy build times is to work exclusively on trivial projects. Incremental builds help blunt the pain, but that only goes so far.
When I tried it I was working on a C++ project with 100+ devs and 3-4M LOC, about as big as they come. Toward the end, compiling everything from scratch took an hour. Switching to lld was a huge win, as was going from 12 to 24 compilation threads. The code base was structured in a way that you don't need to build everything to work on a specific part, using dynamically loaded libraries to inject functionality into the main app.
I was a Linux dev there. The PCHs worked, though not as well as with MSVC, where they made a HUGE difference. OTOH, lld blows the Microsoft linker out of the water: clean builds were faster on MSVC, incremental builds faster on Linux.
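That injection mechanism was presumably the usual dlopen/dlsym route on Linux; a minimal sketch of the pattern (the library and function names are hypothetical, and the real interface was more involved):

    #include <dlfcn.h>    // POSIX dynamic loading; link with -ldl
    #include <iostream>

    // Each plugin .so exports a C entry point that hooks its
    // functionality into the main app.
    using RegisterFn = void (*)();

    int main() {
        void* handle = dlopen("./libmyplugin.so", RTLD_NOW);
        if (!handle) {
            std::cerr << "dlopen failed: " << dlerror() << '\n';
            return 1;
        }
        auto register_plugin =
            reinterpret_cast<RegisterFn>(dlsym(handle, "register_plugin"));
        if (!register_plugin) {
            std::cerr << "dlsym failed: " << dlerror() << '\n';
            dlclose(handle);
            return 1;
        }
        register_plugin();  // the plugin injects its functionality here
        dlclose(handle);
    }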
I've had mixed results with ccache myself, and ended up not using it. Compilation times are much less of a problem for me than they used to be, thanks to increases in processor power and thread counts. That, together with PCHs and judiciously forward declaring and including only what you use.
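The forward-declaration trick, for anyone unfamiliar, is just keeping full #includes out of headers wherever a declaration suffices (the names here are made up):

    // widget.h -- no #include "renderer.h" needed
    class Renderer;           // forward declaration is enough ...

    class Widget {
    public:
        explicit Widget(Renderer& r);
        void draw();
    private:
        Renderer* renderer_;  // ... because pointers/references don't
                              // require the full type definition
    };

    // widget.cpp then does #include "renderer.h", so changes to
    // renderer.h only recompile the .cpp files that actually use it.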
From the times Circle surfaces in discussions, I seem to remember reading that its not being open source is what's holding back adoption? Not sure. Anyway, as a C++ dev I'd love to see one of the various approaches to fundamentally improving C++ take widespread hold.
I wonder how much of this relates to SUSE? How "normie-tolerant" is that? I've been printing for years without any issues, for instance, and have an HP printer that used to hate my Linux OS with a passion.