this post was submitted on 23 Aug 2023
It depends on what the DoS is targeting. If hashing is done with an expensive hash function, you can absolutely cause a lot of resource usage (CPU or memory, depending on the hash) by sending long passwords. That said, this likely isn't a huge concern, because only the first round needs to process the whole submitted input; later rounds only work on the previous round's output.
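A common mitigation for the long-password case is to pre-hash the input with a cheap fixed-output hash before feeding it to the expensive KDF, so input length barely affects total cost. Here's a minimal sketch in Python using only the standard library; the function name and scrypt parameters are illustrative, not a tuning recommendation:

```python
import hashlib

def hash_password(password: bytes, salt: bytes) -> bytes:
    """Reduce arbitrarily long input to 32 bytes with one cheap
    SHA-256 pass, then run the expensive, memory-hard KDF on that."""
    digest = hashlib.sha256(password).digest()
    # scrypt parameters here are illustrative only.
    return hashlib.scrypt(digest, salt=salt, n=2**14, r=8, p=1)
```

With this shape, a 100 KB password costs only one extra SHA-256 pass over a short one; the expensive scrypt step always sees exactly 32 bytes.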
Simple empty requests or connection-opening attempts are likely to be stopped by edge services, such as a CDN or a fleet of caches, which are often over-provisioned. A targeted DoS attack may find more success by crafting requests that make it through this layer and hit something that isn't so over-provisioned.
So yes, many DoS attacks are request or bandwidth floods, but that's because they are generic attacks that work on many targets. It doesn't mean that all DoS attacks work this way. The best attacks target specific weaknesses in the target rather than relying on pure brute-force flooding.
Well, to be fair, if they're hashing server-side, they were doomed to begin with.
But yeah, there are a lot of ways to DDoS, and plenty of tools that make it a one-button click.
Who isn't hashing server-side? That just turns the hash into the password, which negates a lot of the benefits. (You can do split hashing, but that doesn't remove the need to hash server-side.)
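The split-hashing point can be sketched like this: even if the client pre-hashes, the server must still treat the submitted value as the "password" and run it through its own salted, slow KDF before storing or comparing. A minimal Python sketch (function names and the iteration count are my own, not from any particular library):

```python
import hashlib
import hmac
import os

def store_credential(client_hash: bytes) -> tuple[bytes, bytes]:
    """Server-side registration: salt and stretch whatever the
    client submitted, so a DB leak doesn't yield login tokens."""
    salt = os.urandom(16)
    stored = hashlib.pbkdf2_hmac("sha256", client_hash, salt, 600_000)
    return salt, stored

def verify_credential(client_hash: bytes, salt: bytes,
                      stored: bytes) -> bool:
    """Server-side login: recompute and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", client_hash, salt, 600_000)
    return hmac.compare_digest(candidate, stored)
```

Without the server-side KDF step, the stored value would itself be sufficient to log in, which is exactly the "hash becomes the password" problem described below.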
Hashing on the client side is both more private and more secure. All the user ever submits is a combined hash (auth/pubkey) of their username + password.
If the server has that hash? Check the DB for whether it requires 2FA and whether the user sent a challenge response. If not, fail the login.
Registering is pretty much the same: the user submits the hash, the server checks the DB against it, and fails if it already exists.
Edit: If data is also encrypted properly in the DB, it doesn't even matter if the entire DB is completely public, leaked, or secured on their own servers.
This means the submitted hash is effectively the password. You get a minor benefit in that it obscures the original password in case it contains sensitive info or is reused elsewhere. But the DB is now storing that effective password in plaintext, so if the DB leaks, anyone can log in by simply sending the hash.
If you want to do something like this you would need some sort of challenge to prevent replay attacks.
This scheme would also benefit from some salt. Although the included username does act as a form of weak salt.
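The salt point is easy to demonstrate: with a random per-user salt, two users with the identical password get different stored hashes, which defeats precomputed rainbow tables — something a username-derived salt only weakly approximates. A small sketch (function name and iteration count are my own):

```python
import hashlib
import os

def salted_hash(password: bytes) -> tuple[bytes, bytes]:
    """Random per-user salt: identical passwords yield
    different stored hashes."""
    salt = os.urandom(16)
    return salt, hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)
```

Hashing the same password twice returns two different (salt, hash) pairs; verification just reruns the KDF with the stored salt.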
Per your edit: the DB being "encrypted properly" just means "hashing server-side". There's little benefit (though not necessarily zero) to encrypting the entire database, since the key has to live in plaintext somewhere on the same system, and it makes the slowest part of most systems even slower.