this post was submitted on 02 Nov 2023
29 points (93.9% liked)

Selfhosted


I posted a few days ago asking how to set up my storage for Proxmox on my Lenovo M90q, which I've since settled. Or so I thought. The Lenovo has space for two NVMe drives and one SATA SSD.

There seems to be a general consensus that you shouldn't use consumer SSDs (even NAS SSDs like the WD Red) for ZFS, since there will be lots of writes, which in turn will wear out the SSDs fast.

There is conflicting information out there, with some saying it's fine and only a few GB of writes per day are to be expected, and others warning of several TB of writes per day.

I plan on using Proxmox as a hypervisor for homelab use, with one or two VMs running Docker, Nextcloud, Jellyfin, the Arr-Stack, TubeArchivist, PiHole and such. All static data (files, videos, music) will not be stored on ZFS; only the VM images themselves will be.

I did some research, found a few SSDs with good write endurance (see table below) and settled on two WD Red SN700 2TB drives in a ZFS mirror. Those drives are rated for 2500 TBW. For file storage, I'll just use a 4TB Samsung 870 EVO with 2400 TBW.

| SSD | Capacity | TBW | |
|---|---|---|---|
| 980 PRO | 1TB | 600 | 68 |
| 980 PRO | 2TB | 1200 | 128 |
| SN 700 | 500GB | 1000 | 48 |
| SN 700 | 1TB | 2000 | 70 |
| SN 700 | 2TB | 2500 | 141 |
| 870 EVO | 2TB | 1200 | 117 |
| 870 EVO | 4TB | 2400 | 216 |
| SA 500 | 2TB | 1300 | 137 |
| SA 500 | 4TB | 2500 | 325 |
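
As a rough sanity check (just a back-of-the-envelope sketch; the 50 GB/day figure is an assumed workload, not a measurement), the rated endurance of the SN700 2TB works out to a very long lifespan:

```
# 2500 TBW at an assumed 50 GB of writes per day:
echo $(( 2500 * 1000 / 50 ))   # ≈ 50,000 days, i.e. well over a century
```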

Is that good enough? Would you rather recommend enterprise-grade SSDs? And if so, which M.2 NVMe ones would you recommend? Or should I just stick with ext4 as the file system, losing data security and the ability to take snapshots?

I'd love to hear your thoughts about this, thanks!

top 21 comments
[–] [email protected] 17 points 1 year ago (1 children)

ZFS doesn't eat your SSD endurance. If anything it is the best option since you can enable ZSTD compression for smaller reads/writes and reads will often come from the RAM-based ARC cache instead of your SSDs. ZFS is also practically allergic to rewriting data that already exists in the pool, so once something is written it should never cost a write again - especially if you're using OpenZFS 2.2 or above which has reflinking.
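
For example (pool/dataset names are placeholders), enabling compression and checking how well it and the ARC are doing is quick:

```
# Enable zstd compression on a dataset and check the resulting ratio
zfs set compression=zstd tank/vms
zfs get compression,compressratio tank/vms

# ARC hit statistics (arc_summary ships with OpenZFS on most distros)
arc_summary

# On OpenZFS 2.2+, check whether the reflink/block-cloning feature is active
zpool get feature@block_cloning tank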

My guess is you were reading about SLOG devices, which do need heavier endurance as they replicate every write coming into your HDD array (every synchronous write, anyway). SLOG devices are only useful in HDD pools, and even then they're not a must-have.
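
If you ever do add one to a spinning-rust pool, it's a one-liner; a rough sketch with placeholder pool and device names:

```
# Add a dedicated SLOG device to an HDD pool
zpool add tank log /dev/disk/by-id/nvme-EXAMPLE
# Or mirror it so a single SSD failure can't lose in-flight sync writes
zpool add tank log mirror /dev/disk/by-id/nvme-A /dev/disk/by-id/nvme-B
```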

IMO just throw in whatever is cheapest or has your desired performance. Modern SSD write endurance is way better than it used to be and even if you somehow use it all up after a decade, the money you save by buying a cheaper one will pay for the replacement.

I would also recommend using ZFS or BTRFS on the data drive, even without redundancy. These filesystems store checksums of all data so you know if anything has bitrot when you scrub it. XFS/Ext4/etc store your data but they have no idea if it's still good or not.
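
A sketch of what that looks like for a single data drive (pool name and device path are placeholders):

```
# Single-disk pool: no self-healing of data, but full checksumming
zpool create -o ashift=12 data /dev/disk/by-id/ata-Samsung_SSD_870_EVO_4TB_EXAMPLE
zfs set compression=zstd data

# A periodic scrub re-reads everything and verifies the checksums
zpool scrub data
zpool status -v data
```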

[–] [email protected] 3 points 1 year ago (1 children)

Thank you so much for this explanation. I am just a beginner, so those horror stories did scare me a bit. I also read that you can fine-tune ZFS to prevent write amplification, so I'll read up on that subject a bit more.

I thought ZFS without redundancy gave no benefits, but I must have gotten that wrong. Thanks again!

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago) (2 children)

ZFS without redundancy isn't ideal, in the sense that redundancy is always preferable, but it's still a modern filesystem with a lot of good features, just like BTRFS. The main limitation is that it can detect data corruption but not heal it automatically. Transparent compression, snapshotting, data checksums, copy-on-write (power-loss resiliency), and reflinking are features of both ZFS and BTRFS. BTRFS additionally offers offline deduplication, meaning you can deduplicate any data block that exists twice in your pool without the massive resources that ZFS deduplication requires. ZFS is the more mature of the two, and I would use it if you've already got ZFS tooling set up on your machine.

Note that the TrueNAS forums spread a lot of FUD about ZFS, but ZFS without redundancy is ok. I would take anything alarmist from there with a grain of salt. BTRFS and ZFS both store 2 copies of all metadata by default, so bitrot will be auto-healed on a filesystem level when it's read or scrubbed.

Edit: As for write amplification, just use ashift=12 and don't worry too much about it.
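
If you create the pool by hand rather than through the Proxmox installer, that just means passing it at creation time (pool name and device paths below are examples):

```
# ashift is fixed per vdev at creation time; 12 means 4 KiB sectors
zpool create -o ashift=12 vmpool mirror \
  /dev/disk/by-id/nvme-WD_Red_SN700_A /dev/disk/by-id/nvme-WD_Red_SN700_B
zpool get ashift vmpool
```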

[–] [email protected] 2 points 1 year ago (1 children)

Where can I read more about good ZFS settings for a filesystem on a new RAID6 array? I don't want to manage disks or volumes with ZFS, I'll be doing that with mdadm, just want ZFS as filesystem instead of ext4. I assume a ZFS filesystem can grow if the space available expands later?

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

ZFS can grow if it has extra space on the disk. The obvious answer is that you should really be using RAIDZ2 instead if you are going with ZFS, but I assume you don't like the inflexibility of RAIDZ resizing. RAIDZ expansion has been merged into OpenZFS, but it will probably take a year or so to actually land in the next release. RAIDZ2 could still be an option if you aren't planning on growing before it lands. I don't have much experience with mdadm, but my guess is that with mdadm+ZFS, features like self-healing won't work because ZFS isn't aware of the RAID at a low-level. I would expect it to be slightly janky in a lot of ways compared to RAIDZ, and if you still want to try it you may become the foremost expert on the combination.
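
If RAIDZ2 does end up being an option, the whole pool is a single command; a rough sketch with placeholder disk paths:

```
# 4-disk RAIDZ2: any two disks can fail without data loss
zpool create -o ashift=12 tank raidz2 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4
```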

[–] [email protected] 1 points 1 year ago (1 children)

I assume you don’t like the inflexibility of RAIDZ resizing

Right, I'd like to be able to add another disk and then grow the filesystem and be done with it.

my guess is that with mdadm+ZFS, features like self-healing won’t work because ZFS isn’t aware of the RAID at a low-level

Really, I'll have to look into that then because health checks are my main reason for using ZFS over ext4.

mdadm RAID should be a transparent layer for ZFS, it manages the array and exposes a raw storage device. Not sure why ZFS would not like that but I don't want to experiment if it's not a reliable combination. I was under the impression that ZFS as a filesystem can be used without caring about the underlying disk support, but if it's too opinionated and requires its own disk management then too bad...
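
Roughly, the stack I have in mind would look like this (device names are placeholders, and as pointed out below, ZFS only ever sees a single device here):

```
# mdadm builds the RAID6 array and exposes one block device
mdadm --create /dev/md0 --level=6 --raid-devices=4 \
  /dev/sda /dev/sdb /dev/sdc /dev/sdd
# ZFS then sits on top of that single device as a one-disk pool
zpool create tank /dev/md0
```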

[–] [email protected] 1 points 1 year ago (1 children)

The main problem with self-healing is that ZFS needs to have access to two copies of data, usually solved by having 2+ disks. When you expose an mdadm device ZFS will only perceive one disk and one copy of data, so it won't try to store 2 copies of data anywhere. Underneath, mdadm will be storing the two copies of data, so any healing would need to be handled by mdadm directly instead. ZFS normally auto-heals when it reads data and when it scrubs, but in this setup mdadm would need to start the healing process through whatever measures it has (probably just scrubbing?)
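
One partial workaround, as a rough sketch (dataset name is just an example): you can tell ZFS to store every data block twice itself via the copies property, which restores self-healing for data corruption at the cost of half the usable space, though it obviously won't protect against the whole md device failing.

```
# Store every data block twice, even on a single-device pool
zfs set copies=2 tank/important
```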

[–] [email protected] 1 points 1 year ago (2 children)

Today, growing a pool is possible by adding a vdev, right?

So, instead of RAIDZ2, one could setup their pool with mirrored vdevs.

However, I'm not sure about the self-healing part. Would it still work with mirrored vdevs, especially when my vdevs consist of two physical drives only?

[–] [email protected] 2 points 1 year ago

Mirrored vdevs allow growth by adding a pair at a time, yes. Healing works with mirrors, because each of the two disks in a mirror is supposed to hold the same data as the other. When a read or scrub happens and there's a checksum failure, ZFS replaces the failed block on Disk1 with Disk2's copy of that block.

Many ZFS'ers swear by mirrored vdevs because they give you the best performance, they're more flexible, and resilvering from a failed mirror disk is an order of magnitude faster than resilvering from a failed RAIDZ - leaving less time for a second disk failure. The big downside is that they eat 50% of your disk capacity. I personally run mirrored vdevs because it's more flexible for a small home NAS, and I make up for some of the disk inefficiency by being able to buy any-size disks on sale and throw them in whenever I see a good price.
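
In practice that growth pattern is just (device paths are placeholders):

```
# Start with one mirror...
zpool create tank mirror /dev/disk/by-id/ata-A /dev/disk/by-id/ata-B
# ...and grow later by adding another pair as a second vdev
zpool add tank mirror /dev/disk/by-id/ata-C /dev/disk/by-id/ata-D
```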

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

FYI: RAIDZ expansion just got merged: https://github.com/openzfs/zfs/pull/15022

Estimated timeline is about a year from now for OpenZFS 2.3 which will include it.
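
Once it lands, expansion should amount to attaching a new disk to the existing raidz vdev; something along these lines (pool and vdev names are examples, and the exact syntax may still change before 2.3 ships):

```
# Attach an extra disk to an existing raidz vdev (OpenZFS 2.3+)
zpool attach tank raidz2-0 /dev/disk/by-id/ata-NEWDISK
zpool status tank   # shows expansion progress while data is rewritten
```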

[–] [email protected] 1 points 1 year ago (1 children)

I barely scratched the surface with ZFS, so I'm not going to touch another file system for a while now. I'm fine with detecting data corruption only, since those files (on the static data storage) can be replaced easily and hold no real value for me. All other data will be either on the redundant pool or is saved to several other media and even one off-site copy.

I already wrote down ashift=12 in my notes for when I set it up.

In general, I found there is a lot of FUD out there when it comes to data security. One claim I liked a lot was that ECC RAM is mandatory for ZFS. Then one of the creators of ZFS basically said: "Nah, it's not needed any more than for any other file system."

[–] [email protected] 2 points 1 year ago

Yeah ECC RAM is great in general but there's nothing about ZFS that likes ECC more than any other thing you do on your computer. You are not totally safe from bit flips unless every machine in the transaction has ECC RAM. Your workstation could flip a bit on a file as it's sending it to your ZFS pool, and your ECC'd ZFS pool will hold that bit flip as gospel.

[–] [email protected] 8 points 1 year ago (1 children)

I'm kinda repeating things already said here, but there's a couple of points I wanted to highlight...

Monitor the SMART health: enterprise and consumer drives both fail, and it's good to know in advance.

Plan for failure: something will go wrong... might be a drive failure, might be you wiping it by accident... just do backups.

Use redundancy; several cheapo rubbish drives in a RAID / ZFS / BTRFS pool are always better than one "good" drive on its own.

Main point: build something and destroy it to see what happens, before you build your "final" setup - experience is always better than theory.

I built my own NAS and was going with ZFS until I fkd around with it.. for me... I then went with BTRFS because of my skills, tools I use, etc... BTRFS just made more sense to me... so I know I can repair it.

And test your backups 🎃

[–] [email protected] 2 points 1 year ago

I'm currently playing around in VMs even before I order my hard drives, just to see what I can do. Next up is simulating a root drive failure and how to replace that drive. I also want to test rolling back from snapshots.
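
The snapshot part is quick to practice in a test VM; a rough sketch, with example dataset and snapshot names:

```
# Take a snapshot of a VM disk before breaking things, then roll back
zfs snapshot rpool/data/vm-100-disk-0@before-test
zfs rollback rpool/data/vm-100-disk-0@before-test
zfs list -t snapshot
```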

The data that I really need and can't replace is redundant anyway: one copy on my PC, one on my external HDD, one on my NAS and one on a system at my sister's place. That's 4 copies on several media (one cold), with one at another location. :)

[–] [email protected] 7 points 1 year ago (1 children)

Don't sweat it.

I remember looking into this as well about a year ago. I found the same info and started to look into SSDs, consumer and enterprise grade, and after all that I realised that most of it is just useless fussing about. Yes, it is an interesting rabbit hole, in which I probably spent a week. In the end one simple thing nullifies most of this: you can track writes per day and SSD health. It's not like you need to somehow guess when the drives will fail. You do not. Keep track of the health and writes per day and you will get a good sense of how your system behaves. Run that for 6 months and you are infinitely wiser when it comes to this stuff.
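
For NVMe drives, the lifetime write counter is right in the SMART data (device path is an example); note it down, check again after a week or two, and you have your real writes per day:

```
# "Data Units Written" is counted in units of 512,000 bytes
smartctl -a /dev/nvme0 | grep -E 'Data Units Written|Percentage Used'
```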

[–] [email protected] 3 points 1 year ago (1 children)

That rabbit hole is interesting, but also deep and scary. I'm trying to challenge myself by setting up Proxmox, as so far I've only used Raspberry Pis and OpenMediaVault. So when I saw those stories about drives dying after 6 months, I was a bit concerned, especially because I can't yet verify the truth in those stories, since I'd call myself an advanced novice if I'm being generous.

I'll track drive usage and wear and see what my system does. Good point, then I can get rid of the guesswork. Thank you a lot!

[–] [email protected] 6 points 1 year ago (1 children)

I'll agree with the other commenter here.

Also, there may not be any real difference between the consumer and enterprise drives. The reason the enterprise drives cost more is the better warranty, not because they have different components.

Monitor the drives; modern drives are pretty good at predicting when they are dying. Replace them if necessary.

[–] [email protected] 1 points 1 year ago

Yeah, concerning TBW there wasn't a huge difference between consumer and enterprise drives that I saw. Something along the lines of 2500 TBW vs. 3500 TBW (unless you go with those unaffordable drives, then yes). I'll monitor the drives, and if I see rapidly increasing wear, I can still switch to another file system. The whole reason I bought the Lenovo is to set up a second machine and experiment while I still have a running "production" system. Thank you!

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

| Fewer Letters | More Letters |
|---|---|
| NAS | Network-Attached Storage |
| RAID | Redundant Array of Independent Disks for mass storage |
| SSD | Solid State Drive mass storage |

3 acronyms in this thread; the most compressed thread commented on today has 4 acronyms.

[Thread #259 for this sub, first seen 2nd Nov 2023, 14:30] [FAQ] [Full list] [Contact] [Source code]

[–] [email protected] -2 points 1 year ago

I am in the same boat as you are