Looking at Seagate's description of HAMR, they state that "Each bit is heated and cools down in a nanosecond, so the HAMR laser has no impact at all on drive temperature, or on the temperature, stability, or reliability of the media overall." But if I buy these drives used, after they get dumped at the end of their 5-year enterprise life cycle, how much faith can I place in the "stability, or reliability of the media overall" after all that repeated heating? If the media really does degrade, I'd expect more and more bad blocks to crop up with continued use, though you could probably cope with that by running something like ZFS Raid-Z2/Z3 stripes to deal with problems as they occur.
HobartTasmania
Not sure the failure rate matters all that much anymore: most people run SSDs as their primary drive these days, and who stores bulk data on a solitary hard drive anyway?
With an easy-to-use filesystem like ZFS, I store data on Raid-Z (raid 5) or Raid-Z2 (raid 6); the latter is a bit more expensive in disks, but either way a single hard disk failure is no longer catastrophic data loss.
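For anyone who hasn't set this up before, creating such a pool is a one-liner. A minimal sketch, assuming FreeBSD-style device names (da0-da5) and a hypothetical pool name "tank"; adjust both for your own system:

```shell
# Raid-Z2 pool across six disks: any two can fail without data loss.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# Periodic scrubs are what actually surface the bad blocks early.
zpool scrub tank
zpool status tank
```

The scrub is the important part for used drives: it reads every block and repairs anything that fails checksum verification from parity, so media degradation shows up as correctable errors in `zpool status` rather than as silent loss.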
If you're going to run a Raid-Z/Z2 stripe in your NAS and you have the choice between drives with a 2% AFR and drives with a 1% AFR that cost, say, 10%-20% more, which do you choose? Since a single drive failure no longer means data loss, it's purely an economic decision: is it cheaper to deal with the extra RMAs, or to pay more upfront?
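That trade-off is easy to put numbers on. A rough sketch using the figures from the comment above (a $100 drive at 2% AFR vs. a 15% premium for 1% AFR, and a hypothetical $25 per-RMA handling cost; all three numbers are illustrative, not real market data):

```python
def expected_cost(price, afr, years=5, rma_cost=25.0):
    """Upfront price plus expected RMA-handling cost over the service life.

    afr: annual failure rate as a fraction (0.02 = 2%)
    rma_cost: assumed cost in time/shipping to process one replacement
    Uses a simple linear approximation: expected failures = afr * years.
    """
    return price + afr * years * rma_cost

cheap = expected_cost(price=100.0, afr=0.02)   # 2% AFR drive
pricey = expected_cost(price=115.0, afr=0.01)  # 1% AFR, ~15% premium

print(f"2% AFR drive: ${cheap:.2f}")   # $102.50
print(f"1% AFR drive: ${pricey:.2f}")  # $116.25
```

With these assumptions the cheaper drive wins easily; the premium drive only pays off if your per-RMA cost is very high (roughly $150+ here), which is why redundancy changes the buying calculus.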