
Hi all,

ref: https://programming.dev/post/16349359

I agree with the criticism expressed in the referenced thread about the author, their intentions, and the points made in the article. That said, I also think the author highlights a serious issue (if we set "selling the book", cheap criticism, and sensationalism aside). While nothing new to most developers, the article sent me down a personal rabbit hole of discovery, starting with supply chain attacks.

I am still very early in my journey of learning Rust (still reading The Book) and of self-taught software engineering in general, and the path the article sent me down was very educational. I've learned about securing software and being mindful across the whole SDLC[1], about AppSec, DevSecOps, OWASP, SLSA[2], Socket[3], GitHub Advanced Security, and many more tools and guidelines. The last of these was RustSec[4], which quenched my thirst and closed that personal rabbit hole. It has opened a different can of worms, though.

While endemic to any non-monolithic ecosystem and only one part of the "big security picture", the supply chain is possibly the biggest attack vector across the spectrum, comparable to "the legacy issue" of stagnating systems and infrastructure left open to exploits as old as the Sun.

Now, while I am aware that security is a process, not a product, and that this is easier said than done: I wonder whether tools like RustSec should be embraced at the foundational level and made a "mandatory best practice". The RustSec tools integrate with Cargo and check dependencies against an up-to-date security advisory database, and they can also be deployed as GitHub Actions.
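For the curious, here is roughly what that looks like in practice; a minimal sketch from my own reading (cargo-audit is RustSec's scanner, and exact commands and output vary between versions, so treat this as illustrative rather than authoritative):

```sh
# Install RustSec's advisory scanner (one-time setup)
cargo install cargo-audit

# Scan Cargo.lock against the RustSec Advisory Database
cargo audit
```

As far as I can tell, the same check can also run in CI via the audit-check action in the rustsec GitHub organization, so the audit can gate pull requests instead of relying on everyone remembering to run it locally.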

Because I am sure this is not all roses: I agree that (for example) Dependabot is often seen as a major annoyance rather than a useful tool, for a number of reasons, and that RustSec could spark the same kind of reaction. However, it could be a great stepping stone in the security process.

I am aware I may be too idealistic here, but the process has to start somewhere, and stagnating on "dogmas" isn't helping either.

Please be kind in your replies.

Cheers

[1] https://www.youtube.com/watch?v=hDvz8KivY_U
[2] https://slsa.dev
[3] https://socket.dev
[4] https://rustsec.org

top 6 comments
[–] [email protected] 14 points 4 months ago (1 children)

It's a good idea to be aware of any security advisories for your project's dependencies, but it's equally important to be aware of your actual attack surface and audience. For instance, it may not matter to your entirely offline and utterly unprivileged app that there's an arbitrary code execution flaw in one of your dependencies, because any theoretical attacker is the user themself, and they would only be executing code they already had the capability to execute. On the other hand, the same flaw in other circumstances could be absolutely critical. It's really down to you, as the author of the code, to evaluate any security advisories through the lens of your code's expected use cases.

[–] [email protected] 5 points 4 months ago (1 children)

Yup, our webapp has a bunch of security advisories in our NPM packages, but we only use node.js for the build step, so most are completely irrelevant since they only matter in a server context. It's valuable to keep the alerts to a minimum so we don't miss something important (e.g. an XSS vulnerability), but it's not critical.
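For what it's worth, when we do want to silence advisories we've triaged as irrelevant, the audit tools support that. A minimal sketch (the RUSTSEC ID is a placeholder, and the flag names are from memory, so double-check your tool's docs):

```sh
# npm: only audit what actually ships (skip devDependencies)
npm audit --omit=dev

# Rust equivalent: suppress a specific advisory after triage
cargo audit --ignore RUSTSEC-0000-0000
```

Keeping the report that quiet is what lets a genuinely relevant advisory stand out.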

[–] [email protected] 10 points 4 months ago (1 children)

Yeah, our security team once flagged our app for having a SQL injection vulnerability in one of our dependencies. We told them we weren't going to do anything about it. They got really mad and set up a meeting with one of the executives, apparently planning to publicly chew us out.

We get there, and they give their explanation about the major security vulnerability we're ignoring, etc. After they'd said their bit, we asked how they had come to the conclusion that we had a SQL injection. The explanation was about what you'd expect: they scanned our dependencies, and one of the libraries had a security advisory. We then explained that there were two problems with their findings. First, we didn't use SQL anywhere in our app, so there was no conceivable way we could have a SQL injection vulnerability. Second, our app didn't have a database or data storage of any kind; we only made RESTful web requests, so even if there were some kind of injection vulnerability (which there wasn't), it would still be sanitized by the services we were calling. That was the last time they even bothered arguing with us when we told them we were ignoring one of their findings.

[–] [email protected] 4 points 4 months ago (1 children)

I would say this very issue is at the core of the current CVE discussions that are leading more and more projects to become their own CNAs (CVE Numbering Authorities).

Security people and corporate downstream consumers of security feeds want to invest the minimum of effort while pushing as much of the evaluation of what is and isn't a vulnerability onto library authors as possible. However, this does not work: truly evaluating a vulnerability in the abstract, as an upstream project has to, requires a significant amount of effort. At the point of use, on the other hand, it is often trivial to rule out an exploit, because the potentially vulnerable code is not even exercised by the project that depends on the library containing it.

[–] [email protected] 2 points 4 months ago (1 children)

It's an interesting point, but I think it conflates two different but related concepts. From the perspective of the library author, a vulnerability is a vulnerability and needs to be fixed. From the perspective of the library consumer, a vulnerability may or may not be an issue, depending on a lot of factors. In some ways, severity lives in the wrong place: it's really the consumer that needs to decide the severity, not the library.

A CVE without a severity score is fine, I think, and including the list of CWEs that a particular CVE is composed of is useful as well. But a CVE should not include a severity score, because there really isn't a single severity; there's a range of severities depending on the specific usage. At best, the severity score of a CVE represents a worst-case scenario, not even an average case, never mind the case for a specific project.

[–] [email protected] 2 points 4 months ago

From the perspective of a library author, even evaluating whether a given bug could be considered a vulnerability is extra effort that is not strictly useful to the project itself, only to those using it who don't want to apply every single update.