IAm_A_Complete_Idiot

joined 1 year ago
[–] [email protected] 6 points 8 months ago

No, Rust is stricter, because you need to think a lot more about whether weird edge cases in your unsafe code can potentially cause UB. For example: if your data structure relies on the Ord trait (which gives you comparison operators and a total ordering) and someone implements Ord wrong, you still aren't allowed to cause UB. In C++ land, I'd venture to guess most developers wouldn't care - that's a bug in the calling code, not in the data structure.
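
To illustrate with a minimal sketch (my example, not from the original comment): safe Rust containers have to stay sound even when handed a nonsensical Ord:

```rust
use std::cmp::Ordering;

// A deliberately broken Ord: claims every value is less than every
// other value, which is not a total order.
#[derive(PartialEq, Eq)]
struct Evil(u32);

impl PartialOrd for Evil {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

impl Ord for Evil {
    fn cmp(&self, _other: &Self) -> Ordering {
        Ordering::Less
    }
}

fn main() {
    let mut v = vec![Evil(3), Evil(1), Evil(2)];
    // sort() uses unsafe internally, but it's written so that even a
    // broken Ord can only produce a panic or a wrong order - never an
    // out-of-bounds read or write.
    v.sort();
}
```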

It's also stricter because Rust's referencing rules are a lot harder than C's, since references are all effectively restrict by default, and just turning a pointer into a reference for a little bit to call a function means you have to abide by those restrictions, without any help from the compiler.
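
A small sketch of what that looks like in practice (illustrative only):

```rust
fn main() {
    let mut x = 42u32;
    let p: *mut u32 = &mut x;

    unsafe {
        // Turning the raw pointer into a reference asserts exclusive
        // access for as long as `r` lives - effectively C's `restrict`.
        // Writing through `p` while `r` is still alive would be UB,
        // and the compiler won't catch it inside unsafe code.
        let r: &mut u32 = &mut *p;
        *r += 1;
    }
    println!("{x}");
}
```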

[–] [email protected] 2 points 8 months ago

For context for other readers: this is referring to NAT64. NAT64 maps the entire IPv4 address space into an IPv6 subnet (typically the well-known prefix 64:ff9b::/96). The router (which has an IPv4 address) strips the IPv6 prefix and does normal IPv4 NAT from there, then forwards the response back over v6.

This lets IPv6 hosts reach the IPv4 internet, and lets you run v6-only internally (unlike dual stack, which requires every host to have both v4 and v6).
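
A quick sketch of the address mapping itself (using the RFC 6052 well-known prefix; the helper name here is made up):

```rust
use std::net::{Ipv4Addr, Ipv6Addr};

// Embed an IPv4 address in the 64:ff9b::/96 well-known prefix.
fn to_nat64(v4: Ipv4Addr) -> Ipv6Addr {
    let [a, b, c, d] = v4.octets();
    Ipv6Addr::new(
        0x64, 0xff9b, 0, 0, 0, 0,
        u16::from_be_bytes([a, b]),
        u16::from_be_bytes([c, d]),
    )
}

fn main() {
    // 192.0.2.1 -> 64:ff9b::c000:201
    println!("{}", to_nat64(Ipv4Addr::new(192, 0, 2, 1)));
}
```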

[–] [email protected] 2 points 9 months ago

You can do rollbacks if you're using something like home-manager on a foreign distribution. It's just a bit more janky, admittedly.

[–] [email protected] 3 points 9 months ago

There's a transaction fee; the more you pay, the higher your priority (since miners keep the fees, they favor high-fee transactions).
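
Roughly, miners order the mempool by fee rate when filling a block - something like this sketch (made-up numbers and types, not real mempool logic):

```rust
// Toy illustration of fee-based ordering: miners fill blocks by
// picking the highest fee-rate transactions first.
struct Tx { fee_sats: u64, vsize: u64 }

fn main() {
    let mut mempool = vec![
        Tx { fee_sats: 1_000, vsize: 250 },
        Tx { fee_sats: 5_000, vsize: 400 },
        Tx { fee_sats: 300,   vsize: 150 },
    ];
    // Sort by fee rate (sats per vbyte), highest first, using
    // cross-multiplication to avoid floating point.
    mempool.sort_by(|a, b| {
        (b.fee_sats * a.vsize).cmp(&(a.fee_sats * b.vsize))
    });
    for t in &mempool {
        println!("{} sats / {} vB", t.fee_sats, t.vsize);
    }
}
```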

[–] [email protected] 5 points 9 months ago* (last edited 9 months ago) (1 children)

The vulnerability has nothing to do with accidentally logging sensitive information. It's about crafting a special payload which, when logged, gets glibc to write into memory it isn't supposed to, because it didn't allocate the buffer properly. glibc writes outside the bounds of its allocation into other memory regions, which an attacker can carefully craft to look how they want.

Other languages wouldn't have this issue because

  1. they wouldn't willy-nilly allocate memory through a raw pointer like this, but rather use a safer abstraction type on top (like a C++ vector), and

  2. they'd have bounds checking wherever the compiler can't prove an access stays inside valid memory (manually calling .at() in C++, or even better, using a language like Rust, which makes bounds checks the default and unchecked access opt-in via a special method; see the sketch below).
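
A minimal sketch of point 2 in Rust:

```rust
fn main() {
    let v = vec![10, 20, 30];

    // Default indexing is bounds-checked: v[3] would panic instead of
    // reading out of bounds.
    assert_eq!(v.get(3), None); // checked access returns an Option

    // Unchecked access exists, but it's opt-in and requires `unsafe`.
    let first = unsafe { *v.get_unchecked(0) };
    println!("{first}");
}
```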

Edit: C's bad security track record is well known - it's the primary motivator for introducing Rust into the kernel. Google and Microsoft both report that around 70% of their security vulnerabilities come from C-specific memory-safety issues, the curl maintainer has written about how they use sanitizers and best practices and still run into the same issues, and even ubiquitous, security-critical tools like sudo and polkit suffer from them regularly.

[–] [email protected] 1 points 9 months ago

The solution here, generally, AFAIK, is to give a specific deadline before you go public. It forces the other party to either patch it or deal with the fallout when the disclosure goes live. 90 days is the standard timeframe, since it's enough time to patch and roll out a fix but still puts pressure on making it happen.

[–] [email protected] 1 points 9 months ago* (last edited 9 months ago)

It's not complicated, until your reputation drops for any of a multitude of reasons, many not even directly your fault.

Neighboring bad-acting IPs, too many automated emails sent out while you were testing, a compromised account - pretty much any number of things means everyone on your domain is hosed. And email is critical.

[–] [email protected] 7 points 11 months ago (1 children)

git commit -a --amend

[–] [email protected] 4 points 11 months ago (1 children)

Not in this one - IIRC they actually reverse engineered Apple's libraries and were working off of those, rather than using proxies.

[–] [email protected] 1 points 11 months ago* (last edited 11 months ago)

True, but that doesn't necessarily matter if I can compromise the privileged app instead. I could replace it, modify it on disk, or do any number of other things to get myself a hook into a privileged position.

Just injecting code into some function call that launches malware.exe would do the trick. Of course, signature checks and the like can help here - but those aren't a given. There are any number of ways to elevate yourself on a system based on user-level security if your threat model is malicious processes. Linux (and Windows) will stop users from accessing each other's stuff by default, but not processes.

Or: supply chain attacks. Now your official app without any modifications is malicious.

[–] [email protected] 1 points 11 months ago

Yep! You can also get pretty far even without containers. At the end of the day, containers are just sandboxing using namespaces; systemd can expose that pretty trivially for services, and tools like Bubblewrap / Flatpak let you do it for desktop apps. In an ideal world, every package would only use the namespaces it needs, and stuff like this would largely not be a concern.
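
For instance, a service can be sandboxed with a handful of unit-file directives (a sketch - the service and binary names are made up, and the right set of directives depends on what the service actually needs):

```ini
# example.service (hypothetical)
[Service]
ExecStart=/usr/bin/example-daemon
# Run as a throwaway unprivileged user and forbid regaining privileges.
DynamicUser=yes
NoNewPrivileges=yes
# Private /tmp, no device access, read-only view of the OS and /home.
PrivateTmp=yes
PrivateDevices=yes
ProtectSystem=strict
ProtectHome=yes
# Don't let the service create new namespaces itself.
RestrictNamespaces=yes
```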

[–] [email protected] 5 points 11 months ago* (last edited 11 months ago) (4 children)

The idea is that malware you installed would presumably run under your user account and have access. You could explicitly give it a different UID or even containerize it to counteract that, but by default a process can access everything its UID can, which isn't great. And even to this day, that's how users execute a lot of processes.

Windows isn't much better here, though.
