Check out Linux builtin dm-integrity
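A minimal sketch of standalone dm-integrity via integritysetup (part of the cryptsetup package); /dev/sdX is a placeholder, and note that `format` destroys existing data on the device:

```shell
# Initialize per-sector checksumming on the raw device (DESTROYS existing data):
integritysetup format /dev/sdX
# Open it as a mapped device; reads that fail checksum verification return I/O errors:
integritysetup open /dev/sdX integr0
# Put an ordinary filesystem on the checked device:
mkfs.ext4 /dev/mapper/integr0
```

dm-integrity on its own detects corruption but cannot repair it; pairing it with RAID or backups is what turns detection into recovery.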
Data Hoarder
We are digital librarians. Among us are represented the various reasons to keep data -- legal requirements, competitive requirements, uncertainty of permanence of cloud services, distaste for transmitting your data externally (e.g. government or corporate espionage), cultural and familial archivists, internet collapse preppers, and people who do it themselves so they're sure it's done right. Everyone has their reasons for curating the data they have decided to keep (either forever or For A Damn Long Time (tm) ). Along the way we have sought out like-minded individuals to exchange strategies, war stories, and cautionary tales of failures.
Store backups using rar with a data recovery record of 6% or larger and you won't have to worry about bit rot or a few sectors going bad.
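For example, assuming the command-line `rar` tool (archive and file names are placeholders):

```shell
# Create an archive with a 6% recovery record (-rr6%):
rar a -rr6% backup.rar /path/to/files
# Test integrity at any time:
rar t backup.rar
# If damage is found, attempt repair using the embedded recovery record:
rar r backup.rar
```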
copy / paste of my previous post
Silent bit rot, where a bit flips with no reported hardware error, is extremely rare. My stats say once a year on 300TB of data. Some statistics major can correct me, but at that rate someone with 1TB of data should see a single bit flip in 300 years, so maybe their great-great-great-grandchildren will see it and report back to them in a time machine.
All of my data is on ordinary ext4 hard drives. I buy all my drives in groups of 3. I have my local file server, local backup, and remote backup. I have 2 drives in the local file server dedicated for snapraid parity and run "snapraid sync" every night.
Snapraid has a data scrub feature. I run that once every 6 months to verify that my primary copy of my data in my file server is still correct.
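The two snapraid operations described above boil down to commands like these (the scrub percentage is an assumption, not necessarily the poster's setting):

```shell
# Nightly: compute and store parity for any new or changed files:
snapraid sync
# Every 6 months: re-read the data and verify it against stored parity;
# -p selects what percentage of the array to check in this run:
snapraid scrub -p 100
```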
Then I run cshatag on every file, which generates SHA256 checksums and stores them as ext4 extended-attribute metadata. It compares the stored checksum against the stored timestamp, and if a file's content has changed but the timestamp wasn't updated it reports the file as corrupt.
https://github.com/rfjakob/cshatag
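The cshatag pass described above can be run over a whole tree with find; /data is a placeholder path:

```shell
# First run stores a SHA256 checksum per file in user.shatag.* extended
# attributes; later runs flag files whose content changed while the
# modification time did not:
find /data -type f -exec cshatag {} +
```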
Then I use rsync -RHXva when I make my backups of all my media drives. This data is almost never modified; just new files are added. The -X option also copies over the extended-attribute metadata. Then I run the same cshatag pass on the local backup and remote backup servers. This takes about 1 day to run. Across literally 90 million files and 300TB it finds a single silently corrupted file about once a year. I have 2 other copies that match each other, so I overwrite the bad file with one of the good copies.
I only run rsnapshot on /home because that is where my frequently changing files are. The other 99% of my data is essentially write-once, so I just use rsync from the main file server to the two backups. Before I run rsync for real I use rsync --dry-run, which shows what WOULD change without actually doing anything. If I see the files I expect to be written, then I run it for real. If I were to see thousands of files that would be changed, I would stop and investigate: was this a cryptolocker virus that updated thousands of files?
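A sketch of that dry-run-then-real workflow; the host and path names are placeholders:

```shell
# Preview: --dry-run lists what WOULD be transferred without copying anything.
rsync -RHXva --dry-run /data/media/ backup-host:/data/media/
# If only the expected new files appear, run it for real
# (-R relative paths, -H hardlinks, -X extended attributes, -a archive mode):
rsync -RHXva /data/media/ backup-host:/data/media/
```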
As for backing up the operating system, I have /etc and the /root account backed up every hour through rsnapshot, along with /home.
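An rsnapshot configuration matching that description might look like this excerpt (retention counts are assumptions; fields in rsnapshot.conf must be TAB-separated):

```
# /etc/rsnapshot.conf (excerpt) -- trigger with `rsnapshot hourly` from cron
retain	hourly	24
retain	daily	7
backup	/home/	localhost/
backup	/etc/	localhost/
backup	/root/	localhost/
```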
I'm not running a business. I can reinstall Linux in 15 minutes on a new SSD and copy over the handful of files I need from the /etc backup
Man... thanks! You really are a master! The further steps after the snapraid activities aren't clear to me yet, but I have to read it more carefully, I think.
Checksum your files, as you say. Just md5sum or sfv tools will do this and are quick and easy to check at any time.
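For example, with plain md5sum (file names are placeholders):

```shell
# Record checksums once:
md5sum /data/photos/*.jpg > photos.md5
# Verify at any time; -c rechecks every listed file, prints OK or FAILED
# per file, and exits nonzero if anything has changed:
md5sum -c photos.md5
```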
If you want a way to recover as well, create a certain % of parity files with a tool like par2. If you're just worried about the occasional flipped bit, a very small % of parity will go a really long way. Creating par2 is NOT super fast but you only have to do it once.
This is basically file-level RAID.
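A sketch using par2cmdline; the 6% redundancy figure and file names are assumptions for illustration:

```shell
# Create parity files covering the photos with ~6% redundancy (-r6):
par2 create -r6 photos.par2 /data/photos/*.jpg
# Verify later:
par2 verify photos.par2
# If files are damaged, reconstruct them from the parity data:
par2 repair photos.par2
```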
Interesting, I will dig deeper. I think I should add to or reformulate the question, or I am missing something: I first have to avoid - or check - that the source files are not bitrotten, otherwise all backups will have bitrotten data, right? I mean, I could checksum, but on working data how can I get evidence of it? And on "cold" data - e.g., pictures - how can I be sure that ruined source files are not copied onto the backups? I can't - because of time - checksum 1TB of data every night...
Backups with versioning should solve the issue, as long as you are able to identify which data got corrupted.
And is there a best practice for doing so...?
Best practices for making backups?