this post was submitted on 28 Feb 2024
45 points (94.1% liked)


There are quite a few brands to choose from when buying hard disks or SSDs, but which do you find the most reliable? Personally I've had great experiences with Seagate, but I've heard Chris Titus had the opposite experience with them.

So I'm curious which manufacturers people here swear by, and why. Which ones have you had the worst experience with?

[–] [email protected] 2 points 8 months ago (2 children)

In general, and simplifying a lot, my understanding is:

There is the area where data is written, and there is the File Allocation Table that keeps track of where files are placed.

When part of a file needs to be overwritten (either because data is inserted into it or because there is new data), the new data is actually written to a new area and the old data is left as is. The File Allocation Table is updated to point to the new area.

Eventually, as the disk gets used, that new area comes back around to a space that was previously written to but is no longer in use, and that old data gets physically overwritten.

Each time a spot is physically overwritten, it very very slightly degrades.

With a larger disk, it takes longer to come back to a spot that has already been written to.

Oversimplifying, previously written data that is no longer part of a file is effectively lost, in the way that shredding a paper effectively loses whatever was written on it, and in a more secure way than what happens on a spinning disk.
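
If it helps, here's a tiny Python sketch of that pointer-update idea. The names (like `ToyCowDisk`) are made up for illustration; no real filesystem or drive works exactly like this:

```python
# Minimal sketch of the copy-on-write idea described above. Illustrative only;
# structures and names are invented, not how any real filesystem is implemented.

class ToyCowDisk:
    def __init__(self, num_blocks):
        self.blocks = [None] * num_blocks  # the area where data is written
        self.table = {}                    # "allocation table": file -> block index
        self.next_free = 0                 # next never-used block

    def write(self, filename, data):
        # Old data is left in place; the new version goes to a fresh block,
        # and only the table entry is updated to point at it.
        if self.next_free < len(self.blocks):
            target = self.next_free
            self.next_free += 1
        else:
            # Disk has wrapped around: reuse a block no longer referenced by the table.
            in_use = set(self.table.values())
            target = next(i for i in range(len(self.blocks)) if i not in in_use)
        self.blocks[target] = data         # physical overwrite only happens on reuse
        self.table[filename] = target

disk = ToyCowDisk(num_blocks=4)
disk.write("notes.txt", "v1")
disk.write("notes.txt", "v2")   # "v1" is still physically on disk, just unreferenced
print(disk.table, disk.blocks)
```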

[–] [email protected] 2 points 8 months ago (1 children)

I thought you meant 1 TB as a sort of peak performer (better than 2+ TB) in this area. From the description, it's more like 1 TB is kinda the minimum durability you want with a drive, but larger drives are better?

[–] [email protected] 2 points 8 months ago (1 children)

From the drives I have seen, usually there are 3 write-cache sizes.

Usually the smallest write-cache is for drives 128GB or smaller. Sometimes 256GB drives fall here too.

Usually the middle size write-cache is for 512GB and sometimes 256GB drives.

Usually the largest write-cache is only in 1TB and bigger drives.

Performance-wise for writes, you want the biggest write cache, so you want at least a 1TB drive.

For the best wear leveling, you want as big a drive as you can afford, while also looking at the makeup of the memory chips. In order of longest lasting first: Single Level (SLC), Multi Level (MLC), Triple Level (TLC), Quad Level (QLC).
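
To put very rough numbers on that, endurance is often estimated as capacity × program/erase cycles per cell, divided by write amplification. The cycle counts and write amplification below are ballpark assumptions for illustration, not specs for any particular drive:

```python
# Back-of-the-envelope endurance estimate: capacity * P/E cycles / write amplification.
# The cycle counts here are rough ballpark assumptions, not vendor ratings.
PE_CYCLES = {"SLC": 100_000, "MLC": 10_000, "TLC": 3_000, "QLC": 1_000}

def rough_tbw(capacity_gb, cell_type, write_amplification=2.0):
    """Very rough total-bytes-written estimate in TB; not a vendor spec."""
    return capacity_gb * PE_CYCLES[cell_type] / write_amplification / 1000

for cell in ("SLC", "MLC", "TLC", "QLC"):
    print(cell, f"{rough_tbw(1000, cell):.0f} TB")   # hypothetical 1TB drive
```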

[–] [email protected] 2 points 8 months ago

This is great, thank you! My next drive is going to be fast and durable.

[–] [email protected] 2 points 8 months ago (2 children)

Afaik, the wear and tear on SSDs these days is handled under the hood by the firmware.

Concepts like Files and FATs and Copy-on-Write are filesystem-specific. I believe that even if a filesystem were to deliberately write to the same location repeatedly to intentionally degrade an SSD, the firmware would intelligently shift its block mapping around under the hood so as to spread out the wear. If the SSD detects a block is producing errors (bad parity bits), it will mark it as bad and map in a new block. To the filesystem, there's still perfectly good storage at that address, albeit with a potential one-off read error.

Larger SSDs just give the firmware more spare blocks to pull from.
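
Something like this toy flash-translation-layer sketch (in Python, purely illustrative; real firmware is far more sophisticated and vendor-specific) shows how repeated writes to one logical address can still be spread across physical blocks, with bad blocks retired in favour of spares:

```python
# Toy flash translation layer: logical addresses are remapped to whichever
# physical block has the least wear, and blocks flagged bad are retired in
# favour of spares. Purely illustrative; real firmware is vendor-specific.

class ToyFTL:
    def __init__(self, physical_blocks, spare_blocks):
        self.wear = [0] * (physical_blocks + spare_blocks)  # erase count per physical block
        self.bad = set()                                    # physical blocks marked bad
        self.mapping = {}                                   # logical block -> physical block

    def _pick_block(self):
        # Choose the least-worn physical block that is neither bad nor already mapped.
        used = set(self.mapping.values())
        candidates = [i for i in range(len(self.wear)) if i not in self.bad and i not in used]
        return min(candidates, key=lambda i: self.wear[i])

    def write(self, logical_block):
        # Even repeated writes to the same logical block land on different
        # physical blocks, spreading the wear.
        phys = self._pick_block()
        self.mapping[logical_block] = phys
        self.wear[phys] += 1

    def mark_bad(self, phys):
        # Firmware retires the block; a spare gets used on a later write.
        self.bad.add(phys)
        self.mapping = {l: p for l, p in self.mapping.items() if p != phys}

ftl = ToyFTL(physical_blocks=4, spare_blocks=2)
for _ in range(10):
    ftl.write(0)            # hammer a single logical address
ftl.mark_bad(1)             # pretend physical block 1 started failing
ftl.write(0)                # next write avoids the retired block
print(ftl.wear, ftl.bad)    # wear is spread out; block 1 is retired
```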

[–] [email protected] 1 points 8 months ago* (last edited 8 months ago) (2 children)

Does that mean that manually attempting to overprovision SSDs isn't necessary for maximising endurance? E.g. partitioning a 1TB SSD as 500GB.

[–] [email protected] 2 points 8 months ago (1 children)

That would be called under-provisioning.

I haven't read anything about how an SSD deals with partitions, so I don't know for sure.

Since the controller intercepts the calls for specific locations, I'm inclined to believe that the controller does not care about the concept of partitions and does not segregate any chips, thus it would spread all writes across all of the chips.

[–] [email protected] 1 points 8 months ago

Isn’t it overprovisioning because you’re artificially limiting the usable capacity of a volume?

https://www.techtarget.com/searchstorage/definition/overprovisioning-SSD-overprovisioning

[–] [email protected] 1 points 8 months ago

As the other person said, I don't think the SSD knows about partitions or makes any assumptions based on partitioning; it just knows whether you've written data to a certain location, and it could be smart enough to track how often you're writing to that location. So if you keep writing data to a single logical location, it could decide to remap it to different physical memory so that you don't wear it out.

I say "could" because it really depends on the vendor. This is where one brand could be smart and spend the time writing smart software to extend the life of their drive, while another could cheap out and skip straight to selling you a drive that will die sooner.

It's also worth noting that drives have an unreported pool of "spare sectors" that they can use if they detect one has gone bad. I don't know if you can see the total remaining spare sectors, but it typically scales with the size of the drive. You can at least see how many bad sectors have been reallocated using S.M.A.R.T.
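
For example, something along these lines (assuming smartmontools is installed, with /dev/sda as a stand-in for your actual device; usually needs root) pulls the relevant attribute out of the smartctl output:

```python
# Sketch: read the reallocated-sector count via smartmontools. Assumes
# `smartctl` is installed and /dev/sda is the right device for your system;
# NVMe drives report different attributes, e.g. "Available Spare".
import subprocess

output = subprocess.run(
    ["smartctl", "-A", "/dev/sda"],   # -A prints the SMART attribute table
    capture_output=True, text=True, check=False,
).stdout

for line in output.splitlines():
    if "Reallocated_Sector_Ct" in line or "Available Spare" in line:
        print(line.strip())
```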