There are of course macros, but they're kind of a pain to use. Zig's `comptime fn` is really nice and a similar concept. Rust does have `const fn`, but of course those come with limits on them.
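For anyone who hasn't played with them, a minimal sketch of what a `const fn` buys you, and where the limits start to show:

```rust
// A const fn can be evaluated at compile time...
const fn factorial(n: u64) -> u64 {
    let mut acc = 1;
    let mut i = 2;
    while i <= n {
        acc *= i;
        i += 1;
    }
    acc
}

// ...so it can feed things like consts and array lengths.
const FACT_10: u64 = factorial(10);

fn main() {
    assert_eq!(FACT_10, 3_628_800);
    // But the limits are real: for example, no heap allocation and no
    // calling non-const functions from inside a const fn.
    println!("{}", FACT_10);
}
```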
You kind of get that with Rust for free. You get implicit GC for anything stack allocated, and technically heap allocated values are deterministically freed, which you can work out by tracking their ownership. As soon as the owning scope exits it will be freed. If you want more explicit control you can always invoke `std::mem::drop` to force it to be freed immediately, but generally you don't gain much by doing so.
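A tiny sketch of that, with a throwaway guard type just to make the drop points visible (the `Noisy` struct is purely illustrative):

```rust
struct Noisy(&'static str);

impl Drop for Noisy {
    fn drop(&mut self) {
        // Runs deterministically when the value goes out of scope (or is dropped early).
        println!("dropped {}", self.0);
    }
}

fn main() {
    let a = Noisy("a");
    {
        let _b = Noisy("b");
    } // "dropped b" prints here, as soon as the inner scope ends

    std::mem::drop(a); // force "a" to be freed immediately
    println!("end of main"); // "dropped a" already printed above this line
}
```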
Some really great work is being done on that pretty much all the time, but... yeah, I can't reasonably argue that the Rust compiler is fast. Taking full advantage of incremental compilation helps a lot, but if you're doing a clean build, better grab a coffee.
What would be nice is if cargo explored a similar solution to what Arch Linux used, where there's a repository of pre-compiled libraries for various platforms and configurations that can be used to speed up build times. That of course does come with a whole heap of problems though, probably the biggest of which is that it's a HUGE security nightmare. Of lesser concern is the fact that they could not realistically do so for every possible combination of features or platforms, so it would likely only apply to crates built with the default features for a small subset of the most popular platforms. I'm also not sure what the tree shaking would end up looking like in a situation like that.
Yup, and Rust's macros are pretty cool, but in D you can just do:
There's a whole compile-time reflection library as well, so you can take a class and make a super-optimized serialization/deserialization library if you want. It's super cool, and I built a compile-time JSON library just because I could...
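The closest everyday Rust equivalent is a derive macro; a minimal sketch, assuming the third-party `serde` (with the `derive` feature) and `serde_json` crates:

```rust
// Assumes serde = { version = "1", features = ["derive"] } and serde_json = "1".
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug, PartialEq)]
struct Point {
    x: i32,
    y: i32,
}

fn main() -> serde_json::Result<()> {
    let p = Point { x: 1, y: 2 };

    // The (de)serialization code is generated at compile time by the derive macro.
    let json = serde_json::to_string(&p)?;
    assert_eq!(json, r#"{"x":1,"y":2}"#);

    let back: Point = serde_json::from_str(&json)?;
    assert_eq!(back, p);
    Ok(())
}
```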
Yup, Rust is awesome.
But in D you can do explicit scope guards:

- `scope(exit)` - basically Go's `defer()`
- `scope(success)` - only runs when no exceptions are thrown
- `scope(failure)` - only runs when there's an exception

I didn't use them much, but they are really cool, so you can do explicit cleanup as you go through the logic flow, but defer them until they're needed.
It's a neat alternative to RAII, which D also supports.
I still need to try out Cranelift, which was posted here recently. Cranelift release mode could mostly solve this for me.
That said, I haven't touched D in years since moving to Rust, so I obviously find more value in it. But I do miss some of the candy.
Hmm... that is interesting. `scope(exit)` is basically just an inline `std::ops::Drop` trait. I actually think it's a bad thing that you can mix that randomly into your code as you go instead of collecting all of the cleanup actions into a single function. Reasoning about what happens when something gets dropped seems much more straightforward in the Rust case. For instance, it wasn't immediately clear that those statements get evaluated in reverse order from how they're encountered, which is something I assumed but had to check the documentation to verify.

`scope(success)` and `scope(failure)` are far more interesting, as I'm not aware of a direct equivalent in Rust. There's the nightly-only feature of `std::ops::Try` that's somewhat close to that, but not exactly the same. Once again though, I'm not convinced letting you sprinkle these statements throughout the code is actually a good idea.

Ultimately, while it is interesting, I'm actually happy Rust doesn't have that feature in it. It seems like somewhat of a nightmare to debug and something ripe to end up as a footgun.
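To make the comparison concrete, here's roughly what the `scope(exit)` pattern looks like when you spell it out with `Drop` in Rust; this is a hand-rolled sketch, and the `scopeguard` crate packages up the same idea:

```rust
// A minimal hand-rolled "scope(exit)": run a closure when the guard is dropped.
struct ScopeExit<F: FnMut()>(F);

impl<F: FnMut()> Drop for ScopeExit<F> {
    fn drop(&mut self) {
        (self.0)();
    }
}

fn main() {
    let _cleanup_a = ScopeExit(|| println!("cleanup a"));
    let _cleanup_b = ScopeExit(|| println!("cleanup b"));

    println!("doing work");
    // Locals are dropped in reverse declaration order, so this prints:
    //   doing work
    //   cleanup b
    //   cleanup a
}
```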
It's a stack, just like Go's `defer()`.

Probably because Rust doesn't have exceptions, and I'm pretty sure there are no guarantees with `panic!()`.

Same, but that's because Rust's semantics are different. It's nice to have the option if RAII isn't what you want for some reason (it usually is), but I absolutely won't champion it since it just adds bloat to the language for something that can be solved another way.
Well, it has something semantically equivalent while being more explicit, which is `Result` (just like `Option` is the semantic equivalent of `null`).
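A tiny illustration of that equivalence (the names are made up):

```rust
// "This may be absent" is spelled out in the type instead of being an implicit null.
fn find_user(id: u32) -> Option<&'static str> {
    match id {
        1 => Some("alice"),
        _ => None,
    }
}

fn main() {
    // The caller is forced to handle the None case before using the value.
    match find_user(2) {
        Some(name) => println!("found {name}"),
        None => println!("no such user"),
    }
}
```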
I actually do quite a bit of bare metal Rust work so I'm pretty familiar with this. There are sort of guarantees with panic. You can customize the panic behavior with a `panic_handler` function, and you can also somewhat control stack unwinding during a panic using `std::panic::catch_unwind`. The latter requires that the closure you pass to it implement the `UnwindSafe` trait, which is sort of like a combination of `Send + Sync`. That said, Rust very much does not want you to regularly rely on stack unwinding. Anything that's possible to recover from should use `Result` rather than `panic!()` to signal a failure state.
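For reference, a minimal sketch of `catch_unwind` in action, assuming the default `panic = "unwind"` strategy rather than `abort`:

```rust
use std::panic;

fn main() {
    // catch_unwind runs the closure and converts a panic into an Err.
    // The panic message still goes through the panic hook (printed to stderr).
    let result = panic::catch_unwind(|| {
        panic!("something went wrong");
    });
    assert!(result.is_err());

    // The closure has to be unwind-safe; simple value-returning closures are.
    let ok = panic::catch_unwind(|| 1 + 1);
    assert_eq!(ok.ok(), Some(2));
}
```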
Yup. My point is just that `scope(failure)` could be problematic because of the way Rust works with error handling.

What could maybe be cool is D's in/out contracts (example pulled from here):
The `scope(failure)` could partially be solved with the `out` contract. I also don't use this (I find it verbose and distracting), but maybe that line of thinking could be an interesting way to generically handle errors.

Hmm... I think the Rust-y answer to that problem is the same as the Haskell-y answer: "Use the Types!" I.e. in the example above, instead of returning an `i32` you'd return a `NonZero<u32>`, and your args would be `a: &NonZero<u32>, b: u32`. Basically, make invalid state unrepresentable and then you don't need to worry about the API being used wrong.
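As a rough illustration of that idea (a hypothetical example, not the one from the post above), a divisor that can't be zero simply can't be passed in:

```rust
use std::num::NonZeroU32;

// The divisor is NonZeroU32, so "divide by zero" is unrepresentable
// and the function itself can't fail.
fn divide(a: u32, b: NonZeroU32) -> u32 {
    a / b.get()
}

fn main() {
    // Constructing the NonZeroU32 is where the validation happens, once.
    let b = NonZeroU32::new(4).expect("divisor must be non-zero");
    assert_eq!(divide(12, b), 3);
}
```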
I'm more referring to a more general application, such as:

That gives you some of the `scope(failure)` behavior, without as many footguns. Basically, it would desugar to:

I'm not proposing this syntax, just suggesting that something along these lines may be interesting.
I think the issue with that is that it's a little bit of a solution in search of a problem. Your example of:
isn't really superior in any meaningful way (and is arguably worse in some ways) to:
For more complicated error handling, the various functions on `Result` probably have all the bases covered.
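For instance, a small sketch (nothing to do with the proposed syntax above) of how the `Result` combinators already cover a lot of the "run this on failure" territory:

```rust
use std::num::ParseIntError;

// Parse and double a number, mapping errors into a message and running
// fallback logic without any dedicated scope-guard syntax.
fn parse_and_double(s: &str) -> Result<i32, String> {
    s.parse::<i32>()
        .map(|n| n * 2)
        .map_err(|e: ParseIntError| format!("bad input {s:?}: {e}"))
}

fn main() {
    assert_eq!(parse_and_double("21"), Ok(42));

    let fallback = parse_and_double("oops").unwrap_or_else(|err| {
        // This closure plays the role of an "on failure" block.
        eprintln!("{err}");
        0
    });
    assert_eq!(fallback, 0);
}
```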
For what it's worth a lot of my day to day professional work is actually in Java and our code base has adopted various practices inspired by Rust and Haskell. We completely eliminated null from our code and use Optional everywhere and use a compile time static analysis tool to validate that. As for exception handling, we're using the Reactor framework which provides a type very similar to Result, and we essentially never directly throw or catch exceptions any more, it's all handled with the functions Reactor provides for error handling.
I just don't think the potential footguns introduced by `null` and exceptions are worth it; the safer type-level abstractions of `Option` and `Result` are essentially superior to them in every way.
Nice. We use Python and use `None` everywhere. I ran `pyright` on our codebase, and while we use typing religiously, our largest microservice has ~6k typing errors, most of which are unchecked `None`s. We also use exceptions quite a bit, which sucks (one thing that really annoys me is a function like `check_permissions()` which returns nothing and throws if there's an issue, but it could totally just return a `bool`). We have nonsense like that everywhere.

I use Rust for all of my personal projects and love not having to deal with null everywhere. I'd push harder for it at work if others were interested, but I'm the only one who seems passionate about it (about 2-3 are "interested," but haven't even done the tutorial).
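For what it's worth, here's the shape that kind of API tends to take in Rust; a hypothetical sketch, where `check_permissions` and `PermissionError` are just illustrative names:

```rust
#[derive(Debug)]
struct PermissionError {
    missing: String,
}

// The failure case is part of the signature instead of being an exception
// the caller has to know to catch.
fn check_permissions(user_roles: &[&str], required: &str) -> Result<(), PermissionError> {
    if user_roles.contains(&required) {
        Ok(())
    } else {
        Err(PermissionError { missing: required.to_string() })
    }
}

fn main() {
    match check_permissions(&["reader"], "admin") {
        Ok(()) => println!("allowed"),
        Err(e) => println!("denied, missing role: {}", e.missing),
    }
}
```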
Yeah, as far as I'm concerned `null` is public enemy number one. I refuse to work in any language that doesn't allow me to indicate in some fashion that a variable is non-nullable. I just about had an aneurysm when I found out that JavaScript not only has `null`, but also `nil` and `undefined`, and they all mean something subtly different. To be fair though, JavaScript is like a greatest hits of bad language design.

JavaScript doesn't have `nil`, but it has `null`, `NaN` and `undefined`.

But yeah, wrapping `null` in an Option is really nice.

It sort of has nil. While a value can be `null` or `undefined` when evaluated, `nil` is used in many of the JS libraries and frameworks to mean something that is either `null` or `undefined`. So you'll see functions like `function isNil(value) { return value == null || value == undefined }`, and they'll sometimes confuse things even more by actually defining a `nil` value that's just an alias for `null`, which is just pointlessly confusing.

As an aside, basically every language under the sun has `NaN` as it's part of the IEEE floating point standard. JavaScript just confuses the situation more than most because it's weakly typed, so it doesn't differentiate between integers, floats, or some other type like an array, string, or object. Hence anything in JS can be a NaN even though it really only has meaning for a floating point value.

> `function isNil(value)`
We instead have `function isNullOrUndefined(value) ...` instead, but it does the same thing.

It's especially lame since you can't just do `if (!value) ...` since that includes 0 (but not `[]` or `{}`, which Python considers falsey). It's remarkably inconsistent...

Yup, but you can use `NotNan` in Rust, just like your `NonZero` example.
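A small sketch of that, assuming the third-party `ordered_float` crate (since `NotNan` isn't in the standard library):

```rust
// Assumes the third-party `ordered_float` crate.
use ordered_float::NotNan;

fn main() {
    // Construction is where NaN gets rejected, so the rest of the
    // program can rely on the value being a real number.
    let x = NotNan::new(2.5_f64).expect("not NaN");
    let y = NotNan::new(1.5_f64).expect("not NaN");

    // Unlike plain f64, NotNan implements Ord, so sorting/min/max just work.
    assert!(x > y);

    // Trying to wrap NaN fails instead of silently poisoning comparisons.
    assert!(NotNan::new(f64::NAN).is_err());
}
```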
And yeah, it's weird that JavaScript doesn't have an integer type; everything is just floating point all the way down. I actually did some bitwise logic with JavaScript (wrote a tar implementation for the web), and you get into weird situations where you need to `>>> 0` in order to get an unsigned 32-bit integer (e.g. `(1 << 31) >>> 0`). Those hacks really shouldn't be necessary...
Because it's floating point, it also causes some REALLY strange bounds on integers. The maximum sized int you can safely store in JS is a 53-bit integer. That caused us all kinds of headaches when we tried to serialize a 64-bit integer and it started producing garbage results for very large values.
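That limit is just the f64 mantissa width, and you can demonstrate the same thing from Rust; a quick illustrative sketch:

```rust
fn main() {
    // f64 has a 52-bit mantissa (53 bits of precision counting the implicit
    // leading 1), so not every integer above 2^53 is representable.
    let max_safe = 2f64.powi(53); // 9_007_199_254_740_992, i.e. JS Number.MAX_SAFE_INTEGER + 1
    assert_eq!(max_safe, max_safe + 1.0); // the +1 rounds back to the same value

    // Below the limit every integer is exact.
    assert_ne!(max_safe - 1.0, max_safe);

    println!("2^53 and 2^53 + 1 compare equal as f64: {}", max_safe == max_safe + 1.0);
}
```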