_zi

joined 1 year ago
[–] [email protected] 3 points 11 months ago

Namespaces are basically a sort of kernel-enforced isolation. A process enters a namespace, and to that process it might look like it is root on its own machine. Behind the scenes the kernel is translating everything it does into its own little sandboxed area instead of the real root system. But inside that namespace it legitimately thinks it is the root user and can exercise most of the functionality that is only exposed to privileged users. Of course the kernel limits what it can do to only being inside its own little space, so that alone isn't an issue.

When it comes to hardening, the namespaces are not inherently insecure. The difference is in the "attack surface" an unprivileged user has access to through them.

A simple example of this is mounting a filesystem. The user won't be able to remount a privileged filesystem or anything like that; it'll be isolated. But let's say there is a vulnerability in the exFAT filesystem code in the kernel. Your server doesn't mount any exFAT drives, and you disallow automounting of anything for hardening. So even though the issue exists, an attacker couldn't exploit it, because the exFAT code isn't reachable as a normal user. With user namespaces though, a user becomes root of their own little area, so they can actually ask the kernel to mount something inside their namespace. So now, with a namespace, an attacker can get access to exploit that theoretical exFAT filesystem vulnerability.

tl;dr the problem with having unprivileged user namespaces enabled is that it gives unprivileged users access to a lot more "potentially" vulnerable code that could be exploitable.

[–] [email protected] 2 points 1 year ago (1 children)

I'm sorry, I don't. I'm kinda locked into my niche and don't consume much of the wider cybersecurity industry or have a handle on who would be a trusted resource outside of my particular realm in application security and vulnerability research.

For at least some insight, I can recommend https://www.youtube.com/@cwinfosec. It's a pretty small channel, but he has some great "Interview with a ..." content. I enjoyed his interview with Alh4zr3d on red teaming experience. Most of the interviews are more offensive-security focused, but he has interviewed people from a few different jobs and can give some exposure to the type of work being done.

Microsoft's Security Response Center has also started a podcast called The BlueHat Podcast. I haven't listened to a ton of it yet, but they seem to have a decent variety of professionals on talking about stuff, which can potentially be a source.

[–] [email protected] 2 points 1 year ago (3 children)

the school I’m transferring to has a cybersecurity degree designed to pick up where my AS leaves off.

(Disclaimer, I'm speaking from US and Canada based experience)

Be careful with cybersecurity programs; they sound great, but there is no standard regarding what a cybersecurity degree should even be, which means every place offering one can do whatever they want. Some programs are fine, some are lacking; regardless, you have to make sure the program is actually preparing you for whatever part of security you're actually interested in. It also means that on the hiring side, people won't know exactly what its value is without looking into your specific program (which they probably won't do), which puts it at a lesser value than a more predictable degree. It's still often acceptable, but worth calling out.

If you're new I'd also strongly encourage you to learn about the different facets of cybersecurity; it is an absolutely massive field, and different areas have different expectations. A lot of people have misunderstandings about what security jobs look like.

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago) (1 children)

These are in no particular order, just thinking back over some that I've read in recent years.

  • The Cuckoo's Egg - Really interesting book about running a honeypot and trying to track down a hacker who was stealing resources from Lawrence Berkeley Lab machines. It's based on actual events, has some fun insights into the tech of the time, and it had a fairly gripping plot despite its age.

  • Cult of the Dead Cow - The first while of this book was just history and stories about the cDc from its members: the joining of key members and becoming a hacking group, then its hacktivism and more professional work. The later parts of the book tie into the political campaign of Beto O'Rourke (who was part of the cDc), and the tone kinda shifts a bit. It wasn't like it ruined the book or anything, but it was a distinct shift in tone from the parts that hooked me into it.

  • The Hacker and the State - This was a look at, effectively, cyberwar through the years and how/why it hasn't really turned out how people predicted, being less destructive but more pervasive. It gave a good, as far as I can tell fact-based, perspective on the geopolitics of cyberattacks and how it has developed.

  • Dark Territory: The Secret History of Cyber War - Similar concept to The Hacker and the State, but with a narrower focus, looking just at the development of cyber capabilities and their use in the US.

  • No Place to Hide - Okay, maybe not exactly computer-security related. It's more the behind-the-scenes of the Snowden leaks. Obviously the leaks do touch on security, and they talk about their opsec in communicating before actually meeting. That behind-the-scenes aspect was the most interesting to me, but it did go into what was leaked and such also. I'll also shout out Permanent Record, which ties in nicely with No Place to Hide; it's Snowden's memoir.

  • Little Brother - So this one isn't on Audible, as the author Cory Doctorow is outspoken against DRM systems. It's a fictional book following a high-school student who becomes a reluctant hacker for civil liberties and privacy. The cool thing about the book is that it accurately represents technology, explaining things like how Tor works, public key crypto, VPNs, etc., albeit sometimes superficially. I've done a poor job summarizing it, but Mudge at DEF CON 21 mentioned the book is used as training material at the NSA to give recruits a different point of view. Bruce Schneier and Andrew "bunnie" Huang both have essays included as afterwords in the book, which you wouldn't usually find in a fictional hacking book. It definitely captures some of the counter-cultural ideals that existed in the hacking community in the mid-00s and earlier. Even though it's not on Audible, I'd still recommend it.

[–] [email protected] 6 points 1 year ago (1 children)

I agree with Daniel here that there is a problem, but I'm not sure I agree that NVD (or really, whoever the CVE Numbering Authority [CNA] for curl is) should be the party responsible for determining the CVSS score. It seems to me that, apart from the cases where the CNA is the vendor, the CNA will likely lack the context and understanding to appropriately score any reported issue. I'm not sure I'd agree that it should be any CNA's job to verify all the CVSS scores; that would create an immense amount of work that is better offloaded onto the reporter and the vendor.

I think there are a few issues at play here:

  1. No vendor involvement before publicly declaring the critical vulnerability
  2. The researcher's inappropriate CVSS score
  3. Companies that use CVSS scores as a proxy for criticality and priority

The first point is usually not the case, as I understand it. Each CVE by default needs some sort of acknowledgment from the vendor that the issue exists. Someone can't just file for a CVE claiming there is an issue without some other evidence of it. There is a process for hostile and non-responsive vendors, but by default something from the vendor needs to indicate the issue exists. In this case the PR for the bug acknowledges the presence of an integer overflow, which was probably enough for the CNA to go forward without further vendor involvement.

I feel like this is wrong, and that the vendor should get some involvement even when dealing with older bugs, especially vendors like curl that have a history of dealing with CVEs in a non-hostile way. There is usually some communication during the CVE process, so with older bugs like this case it should continue. I'm not sure what the official policy on this looks like, but it feels like the primary change that could be made.

The second point: the CVE's CVSS score by the researcher is simply wrong[0]. I think this could have been solved with vendor involvement, though, so I won't dwell much on it, except to call out two common problems. One is researchers artificially inflating CVSS scores, largely for "clout" and because some bounties use the score to determine payouts. The other is researchers who may understand how to find a bug but not how to score its impact, just copying the CVSS from a seemingly similar report. That can work with something like an XSS, but not so much with memory corruption issues. I feel like this is almost cultural; so many people see a critical CVE as some milestone.

Lastly, dependency on CVSS scores. I just don't think CVSS accurately reflects the impact in many cases these days. So many companies treat CVEs and their CVSS score as the final word on prioritization, though, and so when something comes out with a high score, many places panic while actually meaningful issues go under-recognized. I'm not sure of a solution to this that can scale, though.

Anyway, this is all going fairly off topic from the problem raised in the original post, but I wanted to write out some of my own thoughts on the CVE system and its issues.

[0] 9.8 (CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H). Curl itself is a local binary with a minimal network attack surface (processing server responses); this bug is entirely local, yet the access vector is "Network". Confidentiality and Integrity are not impacted by the bug alone at all (the CVSS has them as "High"): any data curl might access, you'd necessarily already have access to as the local user creating the request curl sends. Availability is also set to "High", but realistically it's a self-DoS at worst, impacting only the one run of the program.
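To make the footnote's arithmetic concrete, here's a minimal sketch of the CVSS v3.1 base-score formula (metric weights per the FIRST v3.1 specification; only Scope: Unchanged is handled). The second vector is purely illustrative of the "local, availability-only" argument above, not an official re-score of the curl bug.

```python
# Sketch of the CVSS v3.1 base-score math (Scope: Unchanged only).
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},  # values for Scope: Unchanged
    "UI": {"N": 0.85, "R": 0.62},
    "C":  {"H": 0.56, "L": 0.22, "N": 0.0},
    "I":  {"H": 0.56, "L": 0.22, "N": 0.0},
    "A":  {"H": 0.56, "L": 0.22, "N": 0.0},
}

def roundup(x: float) -> float:
    """CVSS 'Roundup': smallest one-decimal value >= x (per the spec)."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(vector: str) -> float:
    """Compute a CVSS v3.1 base score for a Scope: Unchanged vector."""
    m = dict(part.split(":") for part in vector.split("/")[1:])
    assert m["S"] == "U", "sketch only handles Scope: Unchanged"
    iss = 1 - ((1 - WEIGHTS["C"][m["C"]])
               * (1 - WEIGHTS["I"][m["I"]])
               * (1 - WEIGHTS["A"][m["A"]]))
    impact = 6.42 * iss
    exploitability = (8.22 * WEIGHTS["AV"][m["AV"]] * WEIGHTS["AC"][m["AC"]]
                      * WEIGHTS["PR"][m["PR"]] * WEIGHTS["UI"][m["UI"]])
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# The researcher's vector, as filed:
print(base_score("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"))  # 9.8
# Illustrative only: flip AV to Local, C/I to None, A to Low:
print(base_score("CVSS:3.1/AV:L/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L"))  # 4.0
```

The point being that the vector components, not vibes, should drive the number; changing just the metrics argued about in the footnote moves the score out of "critical" entirely.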

[–] [email protected] 2 points 1 year ago

That is generally what I'd recommend, and have liked seeing in a resume.

My thinking is that seeing projects tends to showcase not just a particular skill, like a language you've used, but an understanding of the problems facing the area your project is trying to solve. I've never really been a fan of skills listings, just because they offer basically no context, whereas projects give me something to bounce off of in an interview and hopefully get the candidate talking.

I will say though that I wasn't the person reviewing resumes deciding who got an interview, I've just been an interviewer after someone made it through the screening.

[–] [email protected] 14 points 1 year ago (3 children)

Figured I'd expand on something Alex said in response to you.

Client side should not hash the password which I am fairly sure would allow pass-the-hash, but don’t quote me on that.

Basically, hashing it on the client doesn't solve the problem; it just shifts it a bit. Instead of needing to capture and then send the plaintext password to the server, an attacker would simply need to capture and send the hash as generated by the client. In both cases an attacker with access to the plain communication between client and server would have all the information necessary.

Basically, if you hash it on the client side, you've just made the hash the password that needs to be protected, as an attacker only needs to "pass the hash" to the server.
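To make the replay concrete, here's a deliberately naive toy sketch (all names hypothetical, not a real auth implementation). The attacker never learns the plaintext; replaying the captured hash is enough:

```python
import hashlib

def client_hash(password: str) -> str:
    # "Client-side hashing": the browser hashes before sending.
    return hashlib.sha256(password.encode()).hexdigest()

class Server:
    """Toy server that stores and compares the client-side hash."""
    def __init__(self):
        self.users = {}  # username -> stored client-side hash

    def register(self, user: str, hashed: str):
        self.users[user] = hashed

    def login(self, user: str, hashed: str) -> bool:
        return self.users.get(user) == hashed

server = Server()
server.register("alice", client_hash("hunter2"))

# Attacker sniffs the hash off the wire. They never learn "hunter2",
# but the hash itself is now the credential: replaying it logs them in.
sniffed = client_hash("hunter2")
print(server.login("alice", sniffed))  # True
```

Exactly the same capture-and-replay an attacker would do with a plaintext password, just one hash to the left.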


That said, you are raising a legitimate concern, and it's a great question that shows you're starting to think about the issues at hand. Because you're right: when we send the password in plaintext at the application layer, we are simply trusting that the communication channel is secure, and that is not a safe assumption.

There is a bit of a rabbit hole regarding authentication schemes you can dive into, and there is a scheme that adds a bit more onto the simple idea of just hashing the password on the client side. Basically, the server provides a nonce (a one-time-use value) to the client. The client hashes their password with this nonce included and sends the resultant hash back to the server to be validated. It kinda solves the issue of someone reading the communication, as the hash being sent over the wire is only useful in response to that specific nonce for that specific user.
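A minimal sketch of that nonce scheme (hypothetical API, not any real protocol). Note the server keeps the password recoverable so it can recompute the same value, which is the trade-off discussed next:

```python
import hmac, hashlib, secrets

class Server:
    """Toy challenge-response server."""
    def __init__(self):
        self.passwords = {"alice": "hunter2"}  # stored recoverably!
        self.pending = {}                      # username -> outstanding nonce

    def challenge(self, user: str) -> str:
        nonce = secrets.token_hex(16)          # one-time value
        self.pending[user] = nonce
        return nonce

    def verify(self, user: str, response: str) -> bool:
        nonce = self.pending.pop(user, None)   # each nonce is single-use
        if nonce is None:
            return False
        expected = hmac.new(self.passwords[user].encode(), nonce.encode(),
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, response)

def client_respond(password: str, nonce: str) -> str:
    # The client proves knowledge of the password without sending it.
    return hmac.new(password.encode(), nonce.encode(),
                    hashlib.sha256).hexdigest()

server = Server()
nonce = server.challenge("alice")
response = client_respond("hunter2", nonce)
print(server.verify("alice", response))  # True

# A sniffed response is useless later: the nonce was consumed,
# and any fresh challenge expects a different answer.
print(server.verify("alice", response))  # False
```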

The trade-off is that in order for the server to validate the response from the client, the server must have access to the same key data the client hashed with the nonce; i.e., passwords need to be stored in a recoverable way. You increase security against a compromised communication channel, but also increase the damage an attacker could do if they could leak the database.

Going further down the rabbit hole, there is Salted Challenge-Response Authentication (SCRAM), which takes a step towards alleviating this by storing a salted and hashed version of the password, and then providing the client the nonce as usual along with the salt and other information needed for the client to reproduce the version of the hash the server is storing. This does mean passwords are not stored in "plaintext", but it has in effect made the hashed version the password that should be protected. Anyone who compromises the database still has all the information necessary to generate the response for any nonce; they just couldn't try password-reuse attacks like they could if the password were actually recoverable.
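A simplified, SCRAM-flavoured sketch (not the actual RFC 5802 exchange, which adds client/server proofs and more) showing why the stored salted hash is still effectively the credential:

```python
import hmac, hashlib, os

def salted_hash(password: str, salt: bytes) -> bytes:
    # What the server stores instead of the plaintext password.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 10_000)

salt = os.urandom(16)
stored = salted_hash("hunter2", salt)   # the server's database entry

def respond(stored_hash: bytes, nonce: bytes) -> str:
    # Challenge responses are keyed on the *salted hash*, not the password.
    return hmac.new(stored_hash, nonce, hashlib.sha256).hexdigest()

nonce = os.urandom(16)                  # server's challenge

# Legitimate client: re-derives the salted hash from the plaintext
# password plus the salt the server sent along with the nonce.
client = respond(salted_hash("hunter2", salt), nonce)

# Attacker who leaked the database: already holds `stored`,
# so they can answer any challenge without ever knowing "hunter2".
attacker = respond(stored, nonce)
print(client == attacker)  # True: the stored hash is the credential
```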

Ultimately, this comes down to what the bigger threat is. You can (somewhat) secure against a compromised communication channel (which is generally a targeted attack against only one user at a time), but it means that some server-side vulnerabilities will be able to compromise every user account. In general, for web apps I think you're better off hardening the server side and having mitigations like 2FA around sensitive actions to limit the damage compromising just the password could do.

Also, if you really want to be more secure against communication channel issues, public key cryptography is a better tool for that, but it has generally not been well supported for web apps.

[–] [email protected] 6 points 1 year ago (2 children)

Since I've made my career on the AppSec and research side of the fence I do have a few recommendations on that side of things:

Absolute AppSec - Discussion-of-the-week sort of podcast from a couple of experienced AppSec guys. I originally came across them because they seem to be one of the few resources really talking details about source code review (they offer a training on it), which is just one of those areas that's kinda easy to talk about but really hard to teach (imo). But yeah, they'll generally just discuss a few topics from recent news and how they impact AppSec. Good variety here: sometimes offensive, sometimes defensive, sometimes something else. The hosts occasionally disagree and will have some solid discussions about it.

Critical Thinking: Bug Bounty Podcast - More of a bug bounty focus. While priorities differ between more general AppSec assessments and bug bounty, there is enough overlap to make the podcast worthwhile. Fairly discussional podcast, kinda a discussion of the week sometimes riffing off of recent vulnerability disclosures but also getting into other aspects like tooling and methodology.

Dayzerosec - I cohost this podcast with a friend; we both work in vulnerability research and exploit development, so we are kinda just doing a podcast on what we would find interesting: talking about root causes and exploitation of whatever interesting bugs were disclosed in the last week-ish. It's a very technical podcast, and we don't really tend to cover news/attacks. We do two episodes a week: one focused on "bug bounty" style issues, just higher-level appsec and websec stuff, and one on lower-level memory corruption and occasionally hardware-layer attacks. We also put out summaries of many of the vulnerabilities we cover at https://dayzerosec.com/vulns