Yes, popular programs behave correctly most of the time.
But "perfectly fine for the last two decades" would imply a far lower rate of CVEs and general reliability than we actually have in modern software.
I don't know, but I also don't know how much you think is "enough" to deal with project cultural issues. It sounds like it must be quite a bit?
First and foremost _____ is a giant hack to mitigate legacy mistakes.
Wow, every article on web technology should start this way. And lots of non-web technologies, too.
Take a step back and look at the pile of overengineered yet underthought, inefficient, insecure and complicated crap that we call the modern web...
Think about how many indirections and half-baked abstraction layers are between your code and what actually gets executed.
Think about that, and then...what, exactly? As a website author, you don't control the browser. You don't control the web standards.
I'm extremely sympathetic to this way of thinking, because I completely agree. The web is crap, and we shouldn't be complacent about that. But if you are actually in the position of building or maintaining a website (or any other piece of software), then you need to build on what already exists. The exceptions are exceedingly rare: either you can near-unilaterally make changes to an existing platform (as Google does with Chrome, or Microsoft and Apple do with their OSes), or you can throw out a huge amount of standard infrastructure and start as close to "scratch" as possible (e.g. GNU Hurd, Mill Computing, Oxide, Redox OS; note that several of these are hobby projects not yet ready for "serious" use).
If you think anything in software has worked "perfectly fine for the past two decades", you're probably not looking closely enough.
I exaggerate, but honestly, not much.
Do you mean moonshine_save? Does it even provide an API for loading that doesn't return a Result with a possible LoadError?
Rust doesn't generally "throw" errors, it returns them, and function APIs will usually guide you in the right direction. You generally should not use unwrap() or expect() in finished code (though the unwrap_or... variants are fine), which means you must handle every error the API can return programmatically (possibly just by propagating it with ?, if you'd rather handle it in the caller).
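For illustration, here's a minimal sketch of that pattern. The load_counter function and LoadError type are hypothetical stand-ins invented for this example, not moonshine_save's actual API:

```rust
use std::fs;
use std::num::ParseIntError;

// Hypothetical error type for a loader that reads a saved counter
// from a file. Like most Rust APIs, the loader returns a Result
// instead of throwing.
#[derive(Debug)]
enum LoadError {
    Io(std::io::Error),
    Parse(ParseIntError),
}

fn load_counter(path: &str) -> Result<u64, LoadError> {
    // `?` propagates the error to our caller after converting it.
    let text = fs::read_to_string(path).map_err(LoadError::Io)?;
    let value = text.trim().parse().map_err(LoadError::Parse)?;
    Ok(value)
}

fn main() {
    // Handle every case explicitly with `match`...
    match load_counter("save.txt") {
        Ok(n) => println!("loaded counter: {n}"),
        Err(e) => eprintln!("load failed: {e:?}"),
    }

    // ...or fall back to a default with an `unwrap_or...` variant,
    // which is fine even in finished code.
    let n = load_counter("save.txt").unwrap_or(0);
    println!("counter (or default): {n}");
}
```

The point is that the compiler makes you confront the Err case at the call site; there's no invisible control flow to forget about.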
OP clearly means "preprocessor", not "precompiler". You're right that preprocessing itself isn't slow, but the header/impl split can genuinely slow down builds, since every translation unit has to re-parse each header it includes.