this post was submitted on 31 Oct 2023
Programming
I really don't get the article. It's not the compiler's purpose to prevent logic errors, nor can it do so reliably. Overcomplicating your types to the point where they prevent a few of those errors, at the cost of making your code less flexible for future changes, doesn't sound like a good idea either.
What's wrong with tests? Just write tests to check that your code does what it's expected to do, and leave the compiler to do what it's made for.
Why would you have to choose between tests and compiler checks? You can have both, and the more checks you have, the smaller the chance of a bug slipping through.
I would also add that tests cannot possibly be exhaustive. I'm thinking in particular of concurrency problems: even with fuzzing, you can still hit special cases that go wrong because you forgot a mutex somewhere. Extra static checks are complementary to tests.
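The "forgot a mutex" case is a good illustration of a check you get for free in Rust: handing the same `&mut` counter to several threads is a compile error, not a race you hope fuzzing finds. A minimal sketch (the function name is mine):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `n` threads that each bump a shared counter. Remove the Mutex
// and pass a plain `&mut u32` to every thread, and the borrow checker
// rejects the program at compile time -- no test required.
fn parallel_count(n: u32) -> u32 {
    let count = Arc::new(Mutex::new(0u32));
    let handles: Vec<_> = (0..n)
        .map(|_| {
            let count = Arc::clone(&count);
            thread::spawn(move || *count.lock().unwrap() += 1)
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let total = *count.lock().unwrap();
    total
}

fn main() {
    assert_eq!(parallel_count(4), 4);
}
```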
I think you can write `unsafe` code in Rust that bypasses most of the extra checks, so you do have the flexibility if you really need it.
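As a small sketch of that escape hatch (my own example, not from the article): slice indexing is normally bounds-checked at runtime, and `unsafe` lets you opt out when you can justify the index yourself.

```rust
// Safe indexing (`v[1]`) is bounds-checked; `get_unchecked` skips the
// check inside an `unsafe` block, putting the proof burden on you.
fn second_element(v: &[i32]) -> i32 {
    debug_assert!(v.len() > 1); // caller's obligation, checked in debug builds
    unsafe { *v.get_unchecked(1) }
}

fn main() {
    assert_eq!(second_element(&[10, 20, 30]), 20);
}
```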
My favorite tests are the ones I don't need to remember to write. I haven't needed to write a test for what happens when a function receives `null` in a while, thanks to TS/mypy/C#'s nullable reference types/Rust's `Option`/etc. Similarly, I generally don't need to write tests for functions receiving the wrong type of values (strings vs. numbers, for example), and with Rust, I generally don't even need to write tests for things like thread safety and sometimes even invalid states (since valid states can usually be represented by enum variants, and it's often impossible to construct an invalid state because of that).

There is a point where it becomes too much, though. While I'd like it if the compiler ensured arbitrary preconditions like "x will always be between 2 and 4", I can't imagine what kinds of constraints that would impose on actually writing the code in order to enforce it. Rust does have `NonZero*` types, but those are checked at runtime, not compile time.

There are techniques like abstract interpretation that can deduce lower and upper bounds that a value can take. I know there is an analysis in LLVM called ValueAnalysis that does this too; the compiler can use it to help dead code elimination (deducing that a given branch will never be taken because the value can never satisfy the condition, so the branch can be removed).
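The "invalid states are unrepresentable" point from the comment above can be sketched with a small hypothetical connection type (names are mine): a "connected but no peer address" value simply cannot be constructed, so no test for that state is needed.

```rust
// An enum where each variant carries only the data valid for that
// state. There is no way to build a Connected value without a peer.
enum Connection {
    Disconnected,
    Connected { peer: String },
}

fn describe(c: &Connection) -> String {
    match c {
        Connection::Disconnected => "offline".to_string(),
        Connection::Connected { peer } => format!("online: {peer}"),
    }
}

fn main() {
    assert_eq!(describe(&Connection::Disconnected), "offline");
    let c = Connection::Connected { peer: "10.0.0.1".to_string() };
    assert_eq!(describe(&c), "online: 10.0.0.1");
}
```

The `match` is also exhaustive: add a third variant and every `match` that forgets it becomes a compile error.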
But I don't think these techniques work in all use cases. Although you could theoretically invent some syntax to say "I would like this value to be in that range", the compiler would not be able to tell in all cases whether the constraint is satisfied.
If you are interested in a language with subrange checks at runtime, Ada can do that. But it does come at a performance cost: if your program is compute-bound, it can be a problem.
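For what it's worth, you can emulate an Ada-style subrange in Rust with a newtype whose constructor does the check; like Ada (and like `NonZero*`), the check happens at runtime, at construction. A sketch with a hypothetical `Percent` type:

```rust
// Emulating an Ada subrange "0 .. 100": the invariant is checked once
// in `new`, and every existing Percent value is known to satisfy it.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Percent(u8);

impl Percent {
    fn new(v: u8) -> Option<Percent> {
        (v <= 100).then_some(Percent(v))
    }

    fn get(self) -> u8 {
        self.0
    }
}

fn main() {
    assert!(Percent::new(42).is_some());
    assert!(Percent::new(150).is_none());
    assert_eq!(Percent::new(100).unwrap().get(), 100);
}
```

The cost profile is the same one mentioned for Ada: one branch per construction, which only matters in hot, compute-bound code.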