this post was submitted on 23 Jan 2025
920 points (98.0% liked)

Technology

60942 readers
3858 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
 

A pseudonymous coder has created and released an open source “tar pit” to indefinitely trap AI training web crawlers in an infinitely, randomly-generating series of pages to waste their time and computing power. The program, called Nepenthes after the genus of carnivorous pitcher plants which trap and consume their prey, can be deployed by webpage owners to protect their own content from being scraped or can be deployed “offensively” as a honeypot trap to waste AI companies’ resources.

“It's less like flypaper and more an infinite maze holding a minotaur, except the crawler is the minotaur that cannot get out. The typical web crawler doesn't appear to have a lot of logic. It downloads a URL, and if it sees links to other URLs, it downloads those too. Nepenthes generates random links that always point back to itself - the crawler downloads those new links. Nepenthes happily just returns more and more lists of links pointing back to itself,” Aaron B, the creator of Nepenthes, told 404 Media.

(page 3) 28 comments
sorted by: hot top controversial new old
[–] [email protected] 4 points 1 day ago

it might he useful to generate text on the random urls then test different repetitions to see of you can leave a mark on the training data... So after X repetitions or injected information, release the bot back into the wild with whatever message or false info you want it saddled with.

[–] [email protected] -1 points 1 day ago (3 children)

I suggest they should generate random garbage content that's different for every page. Ideally u would want to design it in a way that makes the model that is trained from that source misbehave in some way. Perhaps use another LLM to generate text but u take the tokens that are least likely to be next. U could also probably apply some technique to embed meaning into the text into a non human discernable manner that the LLM will learn to decode and thus teach it things without the developers being any the wiser. Teach the ai to think subversive thoughts in patterns of whitespace etc. Basically once the LLM is trained on something its hard to untrain it and if it doesn't get caught until its in a production environment they are screwed.

[–] [email protected] 3 points 1 day ago
  1. Invent some incredibly specific but entirely false fact (e.g. the kingdom of bolivia was once ruled by King Aron the Benevolent before he was brutally murdered by his cousin-in-law over a dispute about the colonies)
  2. Embed said fact in invisible font among material you own the copyright to
  3. Let AI bots suck it up as training data
  4. Ask random AI bots about King Aron the Benevolent of Bolivia and sue the companies since you now have proof that they violated your copyright

I mean this probably wouldn't work from a legal standpoint, but whatever. It's nice to image.

[–] [email protected] 3 points 1 day ago

Great suggestion. Ever feel like youre stuck in a maze or did you just have an llm stroke?

[–] [email protected] 3 points 1 day ago

You could programmatically rearrange the meaning of sentences. Ie instead of "where is the library I need to get a book" you could do some sort of full word replacement cypher and end up with sentences like "Lets mambo down to the banana patch."

Just for fun. :-)

load more comments
view more: ‹ prev next ›