this post was submitted on 13 Nov 2023

Data Hoarder


We are digital librarians. Among us are represented the various reasons to keep data -- legal requirements, competitive requirements, uncertainty of permanence of cloud services, distaste for transmitting your data externally (e.g. government or corporate espionage), cultural and familial archivists, internet collapse preppers, and people who do it themselves so they're sure it's done right. Everyone has their reasons for curating the data they have decided to keep (either forever or For A Damn Long Time (tm) ). Along the way we have sought out like-minded individuals to exchange strategies, war stories, and cautionary tales of failures.


I should preface this by mentioning that I'm not really a data hoarder, but I've been tasked with downloading and preserving the content of some MediaWiki-based wikis, and a couple of Fandom ones too (which, to my knowledge, also run on MediaWiki), which I feel is in the spirit of this sub.

So far I've managed to figure out that I need to use WikiTaxi, but the trouble I'm running into is that, while Wikipedia XML dumps are readily available for download, other MediaWiki sites don't seem to publish their dumps, from what I've seen so far.

Is there any way that I can get my hands on those? Preferably through a self-contained program or something similar that doesn't require running Python (I'm nowhere near tech-savvy enough to know how to use Python and would like to avoid it; since there are a few wikis, I'd like to keep the process as simple as humanly possible).

I would genuinely appreciate any kind of advice or information any of you might be able to provide me with.

[email protected] · 1 year ago

Use WikiTeam's dumpgenerator.
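
As a rough sketch, pointing it at a wiki's api.php usually looks something like the command below. The URL is just a placeholder, and the exact flags can differ slightly between the original WikiTeam script and the newer wikiteam3 fork, so check the README of whichever one you install:

```
# fetch the full page history as XML, plus uploaded images
# https://targetwiki.example/w/api.php is a placeholder -- Fandom wikis
# typically expose api.php at https://<wikiname>.fandom.com/api.php
python dumpgenerator.py --api=https://targetwiki.example/w/api.php --xml --images
```

It does need Python installed, but it's a single script you run once per wiki rather than anything you have to program yourself, and the XML dump it produces should be usable with WikiTaxi's importer or a local MediaWiki install.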