this post was submitted on 13 May 2025
598 points (98.9% liked)

Technology

[–] [email protected] 3 points 2 days ago (2 children)

Nobody is scraping Wikipedia over and over to create datasets for AIs; there are already open datasets and API deals. And Wikipedia in particular has always offered a dump of the entire database, regenerated roughly twice a month.

https://dumps.wikimedia.org/
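Those dumps are bz2-compressed XML in the MediaWiki export format, so consuming them doesn't require scraping at all. A minimal sketch of streaming page titles out of such a file, using a tiny hand-written stand-in for a real dump (the namespace version and sample content here are illustrative; real dumps are multi-gigabyte `.xml.bz2` files you'd decompress as a stream):

```python
# Stream page titles out of a MediaWiki XML export without loading
# the whole file into memory. SAMPLE is a simplified stand-in for a
# real dump file from dumps.wikimedia.org.
import io
import xml.etree.ElementTree as ET

SAMPLE = b"""<mediawiki xmlns="http://www.mediawiki.org/xml/export-0.11/">
  <page><title>Example article</title><ns>0</ns><id>1</id></page>
  <page><title>Another article</title><ns>0</ns><id>2</id></page>
</mediawiki>"""

NS = "{http://www.mediawiki.org/xml/export-0.11/}"

def iter_titles(stream):
    """Yield page titles one at a time, freeing elements as we go."""
    for event, elem in ET.iterparse(stream, events=("end",)):
        if elem.tag == NS + "page":
            yield elem.findtext(NS + "title")
            elem.clear()  # release parsed children to keep memory flat

print(list(iter_titles(io.BytesIO(SAMPLE))))
```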

[–] [email protected] 16 points 2 days ago (1 children)

You clearly haven't run a website recently. Until I set up Anubis last week, I was getting constant requests from dozens of different bot scrapers 24/7. That included the big ones.

[–] [email protected] -4 points 2 days ago (1 children)

Kay, and that has nothing to do with what I said. Scrapers and bots ≠ AI. It's not even the same companies that build the non-free datasets. The scrapers and bots hitting your website are not some random "AI" feeding on data, lol. This is what some models are actually trained on; it's already free, so it doesn't need to be individually rescraped, and it's mostly garbage-quality data: https://commoncrawl.org/ Nobody wastes resources rescraping all this SEO-infested dump.

Your issue has more to do with SEO than anything else. Btw, before you diss Common Crawl: it's used in research quite a lot, so it's not some evil thing that threatens people's websites. Maybe add a robots.txt.

[–] [email protected] 14 points 2 days ago (1 children)

Oh, OK, I'll just ignore the constant requests from GPTBot, Bytespider, and the hundreds of others that very plainly, sometimes right in their user agent, tell you they're grabbing content for training data. robots.txt is nice and all, but manually adding every single up-and-coming AI company is impossible. Like I said, Anubis is the first thing that's gotten them all to even remotely calm down.
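The manual-blocklist approach being argued about can be sketched in a few lines. The token list below is illustrative and perpetually incomplete, which is exactly the scaling problem the comment describes compared to a proof-of-work gate like Anubis:

```python
# Sketch of per-bot user-agent blocking. The token list is a small,
# non-exhaustive sample of known AI-crawler UA substrings; new crawlers
# appear faster than any manual list can track.
AI_CRAWLER_TOKENS = (
    "GPTBot",        # OpenAI training crawler
    "Bytespider",    # ByteDance
    "CCBot",         # Common Crawl
    "ClaudeBot",     # Anthropic
)

def is_ai_crawler(user_agent: str) -> bool:
    """Case-insensitive substring match against known crawler tokens."""
    ua = user_agent.lower()
    return any(token.lower() in ua for token in AI_CRAWLER_TOKENS)

print(is_ai_crawler("Mozilla/5.0 (compatible; GPTBot/1.0; +https://openai.com/gptbot)"))  # True
print(is_ai_crawler("Mozilla/5.0 (Windows NT 10.0) Firefox/126.0"))                        # False
```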

[–] [email protected] 1 points 14 hours ago (1 children)

Bots only identify themselves and their organization in the user agent; they don't tell you specifically what they do with the data, so stop your fairytales. They do give you a really handy URL, though, with user agents and even IP ranges in JSON, if you want to fully block the crawlers but not the search bots sent by user prompts.

Your ad revenue money can be secured.

https://platform.openai.com/docs/bots/
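Since user agents can be spoofed, the published per-bot IP ranges are the more reliable signal. A sketch of blocking by source IP with Python's `ipaddress` module; the CIDR ranges below are hypothetical placeholders (reserved documentation blocks), since the real ones come from the published JSON lists and change over time:

```python
# Sketch: block training crawlers by source IP using published CIDR
# ranges. The ranges here are HYPOTHETICAL placeholders (TEST-NET
# blocks), not real crawler ranges.
import ipaddress

TRAINING_BOT_RANGES = [
    "192.0.2.0/24",      # placeholder (TEST-NET-1)
    "198.51.100.0/24",   # placeholder (TEST-NET-2)
]

NETWORKS = [ipaddress.ip_network(cidr) for cidr in TRAINING_BOT_RANGES]

def from_training_crawler(ip: str) -> bool:
    """True if the client IP falls inside any listed crawler range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in NETWORKS)

print(from_training_crawler("192.0.2.44"))   # inside a listed range -> block
print(from_training_crawler("203.0.113.9"))  # not listed -> serve normally
```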

If for some reason you can't be bothered to edit your own robots.txt (because it's hard to tell which bots are search bots for muh ad money), then maybe hire someone.
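For reference, the robots.txt both commenters are arguing about might look something like this. The GPTBot and OAI-SearchBot names come from the linked OpenAI docs, and CCBot is Common Crawl's crawler; note that honoring robots.txt is entirely voluntary on the bot's part:

```
# Block training crawlers, leave search-driven fetching alone.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# Empty Disallow means "nothing is disallowed" for this bot.
User-agent: OAI-SearchBot
Disallow:

User-agent: *
Allow: /
```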

[–] [email protected] 1 points 8 hours ago

Lmao, you linked to the same page I did, where this text appears:

GPTBot is used to make our generative AI foundation models more useful and safe. It is used to crawl content that may be used in training our generative AI foundation models.

Also, you're so capitalism-brained you assume anyone running a website must be doing so for profit. My hobby projects (a personal homepage and a personal git forge) were getting slammed by bots while I just paid the bills. I could have locked them both behind an auth portal, but then I might as well take them off the internet and run everything on my LAN.

[–] [email protected] 2 points 2 days ago (1 children)

But with the rise of AI, the dynamic is changing: We are observing a significant increase in request volume, with most of this traffic being driven by scraping bots collecting training data for large language models (LLMs) and other use cases. Automated requests for our content have grown exponentially, alongside the broader technology economy, via mechanisms including scraping, APIs, and bulk downloads. This expansion happened largely without sufficient attribution, which is key to drive new users to participate in the movement, and is causing a significant load on the underlying infrastructure that keeps our sites available for everyone.

- https://diff.wikimedia.org/2025/04/01/how-crawlers-impact-the-operations-of-the-wikimedia-projects/

[–] [email protected] 0 points 14 hours ago

via mechanisms including scraping, APIs, and bulk downloads.

Omg, exactly! Thanks. Yet nothing about having to use logins to stop bots, because that kinda isn't a thing when you already provide data dumps and an API for Wikimedia Commons.

While undergoing a migration of our systems, we noticed that only a fraction of the expensive traffic hitting our core datacenters was behaving how web browsers would usually do, interpreting javascript code. When we took a closer look, we found out that at least 65% of this resource-consuming traffic we get for the website is coming from bots, a disproportionate amount given the overall pageviews from bots are about 35% of the total.

Source for the traffic being scraping for training models: they're not interpreting JavaScript, therefore bots, therefore crawlers. Just trust me, bro.
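Whatever one makes of that inference, the disproportion in the quoted Wikimedia numbers is easy to quantify:

```python
# The quoted figures: bots generate 65% of the most resource-consuming
# traffic but only about 35% of overall pageviews.
expensive_share = 0.65  # bots' share of expensive requests
pageview_share = 0.35   # bots' share of total pageviews

overrepresentation = expensive_share / pageview_share
print(f"bots are {overrepresentation:.2f}x over-represented in expensive traffic")
```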