
Data Hoarder


We are digital librarians. Among us are represented the various reasons to keep data -- legal requirements, competitive requirements, uncertainty of permanence of cloud services, distaste for transmitting your data externally (e.g. government or corporate espionage), cultural and familial archivists, internet collapse preppers, and people who do it themselves so they're sure it's done right. Everyone has their reasons for curating the data they have decided to keep (either forever or For A Damn Long Time (tm)). Along the way we have sought out like-minded individuals to exchange strategies, war stories, and cautionary tales of failures.


I'm looking to add some large enterprise HDDs to my array and was wondering if a long SMART test would be sufficient before putting each drive into service.

I use Windows with SnapRAID, have HDD Sentinel, and could also run read and/or write tests.

I'm curious what testing methods others use on a drive after it's shipped to them, before putting it into service.
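(By long SMART test I mean the drive's extended self-test; for reference, smartmontools can kick one off from the command line on Windows as well. Something like this, with the device path as a placeholder:)

```
smartctl -t long /dev/sda    # or a Windows path like \\.\PhysicalDrive1
smartctl -c /dev/sda         # shows the estimated polling time for the extended self-test
```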

top 3 comments
[email protected] · 1 point · 1 year ago

I'm a bit extreme. I run badblocks non-stop for a few days (usually 3-5) to simulate a workload my NAS will never have. Then I run a SMART scan that reads the entire disk surface. A little overkill, but it works.
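Something along these lines, if anyone wants to copy it; sdX is whichever disk you're burning in, and -w destroys everything on it:

```
# destructive write test: writes four patterns and reads each one back (wipes the disk)
badblocks -wsv /dev/sdX
# then the drive's own extended self-test over the whole surface
smartctl -t long /dev/sdX
```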

[email protected] · 1 point · 1 year ago

YOLO. Just add them to my array and pray.

[email protected] · 1 point · 1 year ago

https://github.com/Spearfoot/disk-burnin-and-testing is what I've used historically. Generally speaking, I only care about doing that on a fresh build of a NAS, though. When I'm replacing a single drive, I'll do a long SMART test and call it done, since the failure of that new drive won't matter (as much) in the context of a replacement in the array.
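For the replacement case, the "long SMART test and call it done" check amounts to roughly this (device path is a placeholder):

```
smartctl -t long /dev/sdX      # extended self-test; takes hours on a large drive
# once the drive reports it complete:
smartctl -l selftest /dev/sdX  # self-test log should show "Completed without error"
smartctl -A /dev/sdX           # reallocated, pending, and uncorrectable sector counts should be 0
```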