this post was submitted on 23 Jan 2025
920 points (98.0% liked)
Technology
you are viewing a single comment's thread
Yeah, that has basically zero chance of working. At most it would annoy web-search bots, and at least it comes with a proper robots.txt.
But any agent trying to process data for AI is not going to go to random websites. It's going to use a curated list of sites with valuable content.
At this point text-generation datasets can be built from open data and from data sold by companies like Reddit or Microsoft; they don't need to "pirate" your blog posts.
True to a limited extent. Anyone can post a link to somebody's blog on a site like Reddit without the blogger's permission, and a web crawler scanning through posts and comments would find it. But I agree with you that a thing like Nepenthes probably wouldn't work. Infinite loop detection is an important part of many types of software, and there are well-known techniques for it, which as a developer I would assume a well-written AI web crawler would have (although I've never personally written one).
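To be concrete about what I mean by "well-known techniques": the simplest version is just a visited-URL set plus a depth cap. This is a toy sketch, not any real crawler's code, and the fetch_links(url) helper is hypothetical (assumed to return the hrefs found on a page):

    # Toy sketch of the "well-known technique", not any real crawler's code:
    # a visited-URL set plus a depth cap is enough to escape most tarpits.
    from collections import deque
    from urllib.parse import urljoin

    MAX_DEPTH = 5  # stop following a chain of links after a few hops

    def crawl(start_url, fetch_links):
        visited = set()                  # never fetch the same URL twice
        queue = deque([(start_url, 0)])  # (url, depth) pairs to visit
        while queue:
            url, depth = queue.popleft()
            if url in visited or depth > MAX_DEPTH:
                continue                 # already seen it, or too deep: skip
            visited.add(url)
            for link in fetch_links(url):             # hypothetical helper
                queue.append((urljoin(url, link), depth + 1))
        return visited

A tarpit that generates endless unique URLs defeats the visited-set part, but the depth cap still ends the crawl after a handful of pages per entry point.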
scrape.maxDepth = 5
LOL wow, this is probably the most elegant way to say what I just said to somebody else. Well-written web crawlers aren't like sci-fi robots that rock back and forth smoking when they hear something illogical.
What's stopping the sites with valuable content from using this?
A bot that's ignoring robots.txt is likely going to be pretending to be human. If your site has valuable content that you want to show to humans, how do you distinguish them from the bots?
I think sites that feel they have valuable content can deploy this and hope to trap, and perhaps detect, those bots based on how they interact with the tarpit.
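Roughly what I have in mind, as a toy sketch rather than Nepenthes' actual code (the /trap/ path and handler are hypothetical): robots.txt disallows the tarpit, so any client that requests it anyway is ignoring robots.txt and gets its IP flagged.

    # Toy sketch only: robots.txt disallows /trap/, so anything crawling
    # in there is ignoring robots.txt and is very likely a bot.
    from flask import Flask, request

    app = Flask(__name__)
    flagged_ips = set()  # clients seen inside the disallowed area

    @app.route("/robots.txt")
    def robots():
        return "User-agent: *\nDisallow: /trap/\n", 200, {"Content-Type": "text/plain"}

    @app.route("/trap/", defaults={"path": ""})
    @app.route("/trap/<path:path>")
    def trap(path):
        flagged_ips.add(request.remote_addr)  # remember the likely bot
        # serve a page whose only link leads deeper into the trap
        return f'<a href="/trap/{path}x">more</a>'

    if __name__ == "__main__":
        app.run()

In a real deployment you'd presumably hand the flagged IPs to your reverse proxy or a fail2ban-style tool instead of keeping them in memory, but that's the general idea.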