this post was submitted on 22 May 2025
19 points (100.0% liked)
you are viewing a single comment's thread
From my understanding, that's not quite the intent.
Currently, there are a bunch of bots that behave themselves, like Google's search crawler. They identify themselves with a user agent (e.g. Googlebot), so Cloudflare knows what they are and doesn't block them.
Unfortunately, some bad bots pretend to be Googlebot by setting the same user agent. To counteract this, Cloudflare compares a request's source IP against Google's known IP ranges to make sure it's actually coming from Google. If the user agent says Googlebot but the request isn't coming from a Google IP range, they block it, because it's probably malicious.
But knowing which IPs are OK and which aren't is a challenge because they change over time.
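To make that concrete, here's a minimal sketch of this kind of IP check in Python. Google does publish its Googlebot ranges as a JSON file, and the URL and field names below match that file, but the check itself is illustrative, not what Cloudflare actually runs:

```python
import ipaddress
import json
import urllib.request

# Google's published Googlebot IP ranges (a real, public file).
GOOGLEBOT_RANGES_URL = (
    "https://developers.google.com/static/search/apis/ipranges/googlebot.json"
)

def load_googlebot_networks():
    """Fetch and parse Google's published Googlebot IP ranges."""
    with urllib.request.urlopen(GOOGLEBOT_RANGES_URL) as resp:
        data = json.load(resp)
    networks = []
    for prefix in data["prefixes"]:
        cidr = prefix.get("ipv4Prefix") or prefix.get("ipv6Prefix")
        if cidr:
            networks.append(ipaddress.ip_network(cidr))
    return networks

def is_plausible_googlebot(client_ip: str, user_agent: str, networks) -> bool:
    """A request claiming to be Googlebot must come from a published range."""
    if "Googlebot" not in user_agent:
        return True  # not claiming to be Googlebot, so this check doesn't apply
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in networks)

networks = load_googlebot_networks()
print(is_plausible_googlebot("66.249.66.1", "Googlebot/2.1", networks))  # expected True (a Googlebot address)
print(is_plausible_googlebot("203.0.113.5", "Googlebot/2.1", networks))  # spoofed: False
```

Note that the range list has to be re-fetched as Google's infrastructure changes, which is exactly the maintenance burden described above.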
So the proposal here, as I understand it, is to create a system whereby, by publishing a public key, a crawler operator can prove that Googlebot really is from Google, AmazonBot really is from Amazon, and so on, rather than another crawler pretending. The spammy ones can keep generating new domains and keys, but you'd know for sure they're not Googlebot or whatever.
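Here's a rough sketch of that signing idea. The actual proposal (Cloudflare's "Web Bot Auth" draft) builds on HTTP Message Signatures (RFC 9421), with the bot's public key published at a well-known URL; the signature base and helper names below are simplified stand-ins for illustration, not the real wire format:

```python
# Requires the third-party `cryptography` package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# --- Crawler operator's side (e.g. Google) ---
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()  # published for anyone to fetch

def sign_request(method: str, path: str, host: str) -> bytes:
    """Sign a hypothetical canonical form of the request.
    RFC 9421 defines the real signature base; this is a stand-in."""
    base = f"{method} {path} host={host}".encode()
    return private_key.sign(base)

# --- Site/CDN side (e.g. Cloudflare) ---
def verify_request(method: str, path: str, host: str,
                   signature: bytes, key: Ed25519PublicKey) -> bool:
    """Check the signature against the operator's published public key."""
    base = f"{method} {path} host={host}".encode()
    try:
        key.verify(signature, base)
        return True
    except InvalidSignature:
        return False

sig = sign_request("GET", "/robots.txt", "example.com")
print(verify_request("GET", "/robots.txt", "example.com", sig, public_key))  # True
print(verify_request("GET", "/admin", "example.com", sig, public_key))       # False: tampered
```

No matter what user agent a spoofer sends, it can't produce a valid signature without the operator's private key.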
So it helps "good" traffic prove who it is; it's not supposed to be for tracking bad traffic.
I wonder how long it will be until they start requiring signatures for individual people.