Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or github here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues on the community? Report it using the report flag.
Questions? DM the mods!
My current setup is eight 18TB Exos drives, all purchased from Amazon's refurb shop, running in a RAIDz2. I'm pulling about 450MB/s through various tests on a system that's in use. I've been running this for about a year now and smartd hasn't detected any issues. I've almost never run new drives for my storage, and the only time I've ever lost data was back when I was running mdadm and a power glitch broke the sync on multiple drives, so the array couldn't be recovered. With ZFS I've even run a RAID0 with five drives that saw multiple power incidents (before I got a redundant power supply), and I never once lost anything, thanks to ZFS's awesome error detection.
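For reference, a layout like that can be sketched with a single `zpool create`. The pool name and device paths here are placeholders, not the commenter's actual setup; using `/dev/disk/by-id/` paths is the usual advice so device names survive reboots.

```shell
# Hypothetical sketch: eight drives in one RAIDz2 vdev (two-disk redundancy).
# "tank" and the disk ids are placeholders; substitute your own by-id paths.
zpool create tank raidz2 \
  /dev/disk/by-id/disk1 /dev/disk/by-id/disk2 \
  /dev/disk/by-id/disk3 /dev/disk/by-id/disk4 \
  /dev/disk/by-id/disk5 /dev/disk/by-id/disk6 \
  /dev/disk/by-id/disk7 /dev/disk/by-id/disk8

zpool status tank   # verify the vdev layout and that all disks are ONLINE
```

With eight 18TB disks in RAIDz2, two disks' worth of capacity goes to parity, so usable space is roughly six disks' worth before overhead.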
So yes, used drives can be just fine as long as you do your research on the drive models, have a very solid power supply, and are configured for hot-swapping so you can replace a drive when it fails. Of course, that's solid advice even for brand-new drives. My last set of used drives (also from eBay) lasted about a decade before it was time to upgrade. Sure, individual drives died over that time (that was another set of eight, and I replaced three of them), but the data was always safe.
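Part of "doing your research" on used drives is checking each one on arrival. A common routine (assuming smartmontools is installed; `/dev/sdX` is a placeholder for the drive under test):

```shell
# Full SMART report: check power-on hours, reallocated/pending sectors,
# and whether the drive has logged any past self-test failures.
smartctl -a /dev/sdX

# Kick off an extended (long) self-test; it runs in the background
# and can take many hours on an 18TB drive.
smartctl -t long /dev/sdX

# Later, read back the self-test log to see whether it passed.
smartctl -l selftest /dev/sdX
```

A used drive that passes a long self-test with zero reallocated sectors is a reasonable candidate for a redundant array; anything with growing reallocated or pending sector counts is worth returning.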
I've got the same setup: eight 18TB Exos drives running in a RAIDz2, plus a hot spare. On top of that I've got another vdev of eight 12TB WD Reds, with another spare.
With this I can have 2 drives fail in a vdev at any point and still rebuild the pool. But if more than 2 drives fail at the same time in the same vdev, the whole pool is gone.
But if that happens, I have a second NAS offsite at my bro's place that I back up specific datasets to. It's connected over Tailscale with a ZFS replication task.
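The offsite replication idea boils down to snapshot-and-send. This is a generic sketch, not the commenter's actual task config: the hostname `backup-nas` (reachable over Tailscale), the dataset names, and the snapshot labels are all placeholders.

```shell
# Take a new snapshot of the dataset to replicate.
zfs snapshot tank/photos@2024-06-01

# Send only the changes since the previous snapshot to the offsite box,
# piping the stream over SSH (here, over the Tailscale network).
# -F on the receive side rolls the target back to match if needed.
zfs send -i tank/photos@2024-05-01 tank/photos@2024-06-01 | \
  ssh backup-nas zfs recv -F backup/photos
```

NAS distros like TrueNAS wrap exactly this pattern in a scheduled "replication task," so the incremental sends happen automatically.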
I dunno; like I said, ZFS is pretty damn good at recovery. If the drives simply drop out but there's no hardware fault, you should be able to clear the errors and bring the pool back up again. And the chances of two drives failing at the same time are pretty low. One of these days I do need to buy a spare to have on hand, though. Maybe I'll even swap out one drive just to see how long a rebuild takes.
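Both scenarios above map to standard zpool commands. A sketch, with `tank` and the disk names as placeholders:

```shell
# Transient fault (e.g. a power blip knocked drives offline, no real damage):
zpool status tank    # inspect pool health and per-device error counters
zpool clear tank     # reset the error counts so the pool resumes normally

# Actual drive failure, or a practice rebuild with a spare on hand:
zpool replace tank old-disk new-disk   # resilvering starts automatically
zpool status tank                      # shows resilver progress and an ETA
```

Running `zpool replace` on a healthy pool is a reasonable way to rehearse a rebuild, since the old disk stays in place until the resilver completes.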
Do it. The last thing you need during a rebuild is the stress of not knowing how long it will take, or of hitting other issues with your specific setup.
It's only "disaster recovery" if you've never practiced... otherwise it's just "recovery".