this post was submitted on 07 Sep 2023
62 points (94.3% liked)

Programming

There was a time when this debate was bigger. It seems the world has shifted towards architectures and tooling that do not allow dynamic linking, or make it harder. This compromise makes things easier for the maintainers of the tools / languages, but it does take away choice from the user / developer. But maybe that's not important? What are your thoughts?

[–] robinm 2 points 1 year ago

I think you don't understand what @CasualTee said. Of course dynamic linking works, but only when properly used. And in practice, dynamic linking is a few orders of magnitude more complex to use than static linking. Of course you still have ABI issues when you statically link pre-compiled libraries, but in practice a statically linked workflow usually has you building the library yourself, which removes all ABI issues. Of course, if a library uses a global and you statically link it twice (with two different versions) you will have a problem, but at least you can easily check that only a single version is linked.

> There are no problems other than versioning and version conflicts, and even that is a solved problem.

If it were solved, “DLL hell” wouldn't be a common expression and Docker would never have been invented.
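
Just to make that concrete: with dynamic linking, "which library (and which version) did I actually get?" is a runtime question. A rough sketch, Linux/glibc-specific (`dl_iterate_phdr` is a glibc API, and the file name in the build line is made up), that lists every shared object the loader actually mapped into the process; for a fully static binary that question does not even exist:

```c
/* Rough sketch, Linux/glibc only: list every shared object the dynamic
 * loader actually mapped into this process. dl_iterate_phdr() is a glibc
 * API; the file name in the build line is made up.
 *
 * Build:  cc -O2 list_dsos.c -o list_dsos  */
#define _GNU_SOURCE
#include <link.h>
#include <stdio.h>

static int print_dso(struct dl_phdr_info *info, size_t size, void *data) {
    (void)size; (void)data;                       /* unused */
    /* The entry with an empty name is the main executable itself. */
    printf("%s\n", info->dlpi_name[0] ? info->dlpi_name : "(main executable)");
    return 0;                                     /* 0 = keep iterating */
}

int main(void) {
    dl_iterate_phdr(print_dso, NULL);
    return 0;
}
```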

>> You get into all kinds of UB when interacting with a separate DSO, especially since there is minimal verification of the ABI compatibility when loading a dynamic library.

> This statement makes no sense at all. Undefined behavior is just behavior that the C++ standard intentionally did not impose restrictions upon, leaving the behavior without a definition. Implementations can and do fill in the blanks.

@CasualTee was talking specifically about UB related to dynamic linking, which simply does not exist when statically linking.
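
To make that concrete: the loader matches symbols by name only, with no type or ABI information to check against. A rough sketch (Linux/glibc assumed; `libm.so.6` and the file name are just the usual names there, not something from this thread):

```c
/* Rough sketch, Linux/glibc assumed ("libm.so.6" is the usual soname there).
 * dlsym() resolves a symbol purely by name: nothing checks that the type we
 * cast it to matches the library's real cos(double), so the call below is
 * undefined behavior that no part of the toolchain can flag.
 *
 * Build:  cc wrong_proto.c -ldl  */
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    void *handle = dlopen("libm.so.6", RTLD_NOW);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* Real prototype: double cos(double). We claim int (*)(int) instead. */
    int (*wrong_cos)(int) = (int (*)(int))dlsym(handle, "cos");
    if (!wrong_cos) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    printf("cos(0) through the wrong prototype: %d\n", wrong_cos(0)); /* garbage */

    dlclose(handle);
    return 0;
}
```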

Yes, dynamic linking works in theory, but in practice it's hell to make it work properly. And what advantage does it have compared to static linking?

  • Less RAM usage? That's not even guaranteed, because static linking allows aggressive inlining, constant propagation, LTO and other fun optimisations (see the sketch after this list).
  • Easier dependency upgrades? That's mostly true for C, assuming you have perfect backward ABI compatibility. And nothing proves to you that your binary really is compatible with newer versions of its libraries. Upgrades of static dependencies are an issue only because most Linux distributions don't have a workflow in which updating a dependency triggers a rebuild of all dependent binaries; if they did, it would just be a question of download speed. Given the popularity of tools like Docker, which effectively turns dynamic linking into the equivalent of static linking since all dependencies' versions are known, I would say that a lot of people prefer the comfort of static linking.
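
On the first point, a rough sketch of why static linking can even come out ahead on size; `libsquare` and both file names are made up, and the exact outcome depends on the compiler:

```c
/* Two illustrative files (names made up). With static linking + LTO the call
 * to square() can be inlined and constant-folded; across a shared-object
 * boundary it has to remain a real call dispatched through the PLT.
 *
 *   static + LTO:  cc -O2 -flto -c libsquare.c
 *                  cc -O2 -flto main.c libsquare.o -o app_static
 *   dynamic:       cc -O2 -fPIC -shared libsquare.c -o libsquare.so
 *                  cc -O2 main.c -L. -lsquare -o app_dynamic
 */

/* --- libsquare.c --- */
int square(int x) { return x * x; }

/* --- main.c --- */
#include <stdio.h>

int square(int x);   /* provided by libsquare */

int main(void) {
    /* With LTO this typically folds down to printing a constant; with the
     * shared library the call survives and goes through the PLT at run time. */
    printf("%d\n", square(42));
    return 0;
}
```

Whether the fully static binary ends up smaller or larger overall then comes down to how much of each library a given program actually uses, which is why the RAM argument cuts both ways.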

To sum up, are all the complications specifically introduced by dynamic linking, compared to static linking, worth it for a non-guaranteed gain in RAM, sparing Linux maintainers a change in their tooling, and some saved download time?