this post was submitted on 15 Nov 2023
Data Hoarder
We are digital librarians. Among us are represented the various reasons to keep data -- legal requirements, competitive requirements, uncertainty of permanence of cloud services, distaste for transmitting your data externally (e.g. government or corporate espionage), cultural and familial archivists, internet collapse preppers, and people who do it themselves so they're sure it's done right. Everyone has their reasons for curating the data they have decided to keep (either forever or For A Damn Long Time (tm) ). Along the way we have sought out like-minded individuals to exchange strategies, war stories, and cautionary tales of failures.
you are viewing a single comment's thread
wget is awesome; I have scraped tons with it. It has so many options that you can even spoof the request headers to get around sites that try to block automated downloaders. Here is the manual: https://www.gnu.org/software/wget/manual/wget.html
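For example, a minimal header-spoofing invocation might look like this (the user-agent string, referer, and URL are just placeholders, not something specific to your site):

    wget --user-agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) Gecko/20100101 Firefox/120.0" --header="Referer: https://example.com/" --recursive https://example.com/page.html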
https://addons.mozilla.org/en-CA/firefox/addon/dont-accept-webp/
Wget does not behave identically to a browser, so I'm unsure what that part of the request looks like or whether it needs modification. If it isn't working, let me know.
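If the server is picking WebP based on content negotiation, one thing worth trying (just a sketch; the exact Accept value a browser sends varies, and this may not be needed at all since wget's default is different from a browser's) is overriding the Accept header explicitly so WebP is never offered:

    wget --header="Accept: image/png,image/jpeg,image/*;q=0.8,*/*;q=0.5" https://example.com/some-image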
For future scraping, look at the --mirror option. It sets recursion to infinite and makes a full copy of the site. You can also use the --convert-links option, which rewrites all the links to point to the locally downloaded files, so the local copy behaves the same as the real website.
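A typical mirroring command looks something like this (example.com is a placeholder; --page-requisites and --adjust-extension are optional extras that pull in images/CSS and fix file extensions for local viewing):

    wget --mirror --convert-links --page-requisites --adjust-extension https://example.com/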
You can't go too deep unless you use --span-hosts, which grabs external files from other domains to make the mirrored site a true copy, but yeah, you often don't need that. You also want to be more careful with recursion depth when you do use it; it can go too deep and you end up with far more data than you wanted.
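If you do span hosts, a sketch of keeping it under control might be the following (the domain list and depth are just examples; --domains restricts which hosts recursion may follow and --level caps the depth):

    wget --recursive --level=2 --span-hosts --domains=example.com,cdn.example.com --convert-links https://example.com/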
Some sites also need the --wait or --random-wait options to avoid detection.
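Something like this (the 2-second base wait is arbitrary; --random-wait varies the delay between requests around that value so the timing looks less robotic):

    wget --mirror --convert-links --wait=2 --random-wait https://example.com/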