this post was submitted on 24 Mar 2024

Rust

Welcome to the Rust community! This is a place to discuss the Rust programming language.

top 9 comments
[–] BB_C 10 points 7 months ago (1 children)

Forgot to mention, and this is tangentially related to my comments from yesterday:

A paper from 2020 showed that Cranelift was an order of magnitude faster than LLVM, while producing code that was approximately twice as slow on some benchmarks. Cranelift was still slower than the paper's authors' custom copy-and-patch JIT compiler, however.

Cranelift is itself written in Rust, making it possible to use as a benchmark to compare itself to LLVM. A full debug build of Cranelift itself using the Cranelift backend took 29.6 seconds on my computer, compared to 37.5 with LLVM (a reduction in wall-clock time of 20%).

Notes:

  • It's easy to gloss over the "order of magnitude" part in the presence of concrete and current numbers mentioned later.
  • It's actually "orders of magnitude" faster.

But the numbers only show a 20% speed increase!

An inattentive reader will be left with the impression that Cranelift compiles only 20% faster in exchange for a 2x runtime slowdown. Some comments below the article confirm that.

What the article author missed (again) is that the biggest Cranelift wins come when used in release/optimized/multi-pass mode. I mention multi-pass because the author should have noticed that the (relatively old) 2020 research paper he linked to tested Cranelift twice, with one mode having the single-pass tag attached to it.

Any Rust user knows that the slow builds (sometimes agonizingly so) are actually the release builds. These are the builds where the slowness of LLVM's optimizing passes is felt. And these are the builds where Cranelift shines, and is indeed orders of magnitude faster than LLVM.

The fact that Cranelift manages to build non-optimized binaries 20% faster than LLVM is actually impressively good for Cranelift, or impressively bad for LLVM, however you want to look at it.

And that is the problem with researchers/authors with no direct field expertise. They can easily miss some very relevant subtleties, leading readers to draw grossly wrong conclusions.

[–] [email protected] 1 points 7 months ago

Yeah, I'm no compiler engineer, but testing both release and debug builds is the minimum I'd do. That doesn't even get into classes of optimizations, like loop unrolling, binary size, macros, or function inlining, which I'd also expect to see covered in a compiler comparison.

[–] BB_C 9 points 8 months ago (3 children)

Users can now use Cranelift as the code-generation backend for debug builds of projects written in Rust

Didn't read the rest. But this is clearly inaccurate, as most Rustaceans probably already know.

Cranelift can be used in release builds. The performance is not competitive with LLVM. But some projects are completely useless (too slow) when built with the debug profile. So, some of us use a special release profile where Cranelift backend is used, and debug symbols are not stripped. This way, one can enjoy a quicker edit/compile/debug cycle with usable, if not the best, performance in built binaries.

[–] [email protected] 3 points 7 months ago (1 children)

Another option is to compile dependencies with LLVM and optimizations, which will likely be done only once in the first clean build, and then compile your main binary with Cranelift, thus getting the juicy fast compile times without having to worry about the slow dependencies.
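A sketch of that setup (the per-package override pattern here is an assumption on my part, based on Cargo's unstable codegen-backend feature and profile overrides; requires a nightly toolchain):

```toml
# Cargo.toml -- illustrative, not from the article
cargo-features = ["codegen-backend"]

[profile.dev]
# Build the workspace's own crates with Cranelift for fast rebuilds
codegen-backend = "cranelift"

[profile.dev.package."*"]
# Build all dependencies with LLVM and optimizations; this cost is
# paid once on the first clean build and then cached
codegen-backend = "llvm"
opt-level = 3
```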

[–] BB_C 1 points 7 months ago

Yes. And to complete the pro tips, the choice of linker can be very relevant. Using mold would come recommended nowadays.
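For reference, wiring mold in usually looks something like this (the target triple is just an example; this follows mold's documented `-fuse-ld` usage via clang):

```toml
# .cargo/config.toml
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
```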

[–] [email protected] 1 points 7 months ago (1 children)

So that “special release build” is the build you do debugging with. Shouldn't you just modify the otherwise useless debug profile and turn on all the optimizations necessary to make it usable?

[–] BB_C 2 points 7 months ago

Well, obviously that will depend on which defaults (and how many?!) a developer is going to change.

https://doc.rust-lang.org/cargo/reference/profiles.html#default-profiles

And the debug (dev) profile has its uses. It's just not necessarily the best for typical day-to-day development in many projects.

I actually use two steps of profile inheritance, with -cl (for Cranelift) inheriting from a special release-dev profile. A developer does not have to be limited in how they modify or construct their profiles.
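Something along these lines, for illustration (the profile names are mine; the author's actual profiles may differ):

```toml
cargo-features = ["codegen-backend"]

# An optimized-but-debuggable base profile
[profile.release-dev]
inherits = "release"
lto = "off"
debug = "full"

# Same settings, but with the Cranelift backend for faster builds
[profile.release-dev-cl]
inherits = "release-dev"
codegen-backend = "cranelift"
```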

[–] Vorpal 0 points 8 months ago (1 children)

Please send an email to [email protected] to report this issue to them; they usually fix things quickly.

[–] BB_C 5 points 8 months ago

I read the rest of the article, and it appears to have been partially written before support for codegen backends landed in cargo.

The latest progress report from bjorn3 includes additional details on how to configure Cargo to use the new backend by default, without an elaborate command-line dance.

That "latest progress report" has the relevant info ;)

So, basically, you would add this to the top of Cargo.toml:

cargo-features = ["codegen-backend"]

Then add a custom profile, for example:

[profile.release-dev-cl]
inherits = "release"
lto = "off"
debug = "full"
codegen-backend = "cranelift"

Then build with:

cargo build --profile release-dev-cl