this post was submitted on 10 Nov 2024
21 points (100.0% liked)

Selfhosted


I am planning on creating a home server with either 2 (RAID1) or 3 (RAID5) HDDs as bulk storage and 1 SSD as bcache.

The question is, what file system should I use for the HDDs? I am thinking of ext4 or xfs, as I heard btrfs is not recommended for my use case for some reason.

Do you all have some advice to give on what file system to use, as well as some other tips?

[–] [email protected] 3 points 4 days ago* (last edited 4 days ago) (2 children)

I would just skip RAID, add all disks to a single BTRFS filesystem, and use the built-in profiles for (meta)data redundancy.

I don't know much about caching, though.

https://btrfs.readthedocs.io/en/latest/btrfs-device.html
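
For example, a rough sketch of that approach (assuming three empty disks at /dev/sdb, /dev/sdc and /dev/sdd; adjust the device names to yours):

mkfs.btrfs -m raid1c3 -d raid1 /dev/sdb /dev/sdc /dev/sdd

-m sets the metadata profile and -d the data profile, so any single disk can fail without data loss.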

[–] [email protected] 5 points 4 days ago (2 children)

The man page at https://btrfs.readthedocs.io/en/latest/mkfs.btrfs.html says:

RAID5/6 has known problems and should not be used in production.

So those profiles have known, but unspecified, problems.

But btrfs is safe on top of md-based raid1/5/6. It also has the advantage that you only need to encrypt one volume.
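
As a rough sketch of the encryption part (assuming LUKS via cryptsetup on the md device; names are illustrative):

cryptsetup luksFormat /dev/md0
cryptsetup open /dev/md0 bigraid_crypt
mkfs.btrfs /dev/mapper/bigraid_crypt

That way the whole array sits behind a single LUKS container instead of one container per disk.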

[–] [email protected] 1 points 3 days ago (1 children)

Could you elaborate on btrfs on top of md raid?

This one seems the most likely solution for me.

[–] [email protected] 6 points 3 days ago (2 children)

Sure. First you set up a RAID5/6 array in mdadm. This is a purely software thing, which is built into the Linux kernel. It doesn't require any hardware RAID system. If you have 3-4 drives, RAID5 is probably best, and if you have 5+ drives RAID6 is probably best.

If your three blank drives are partitioned as sdb1, sdc1, and sdd1, run this:

mdadm --create --verbose /dev/md0 --level=5 -n 3 /dev/sdb1 /dev/sdc1 /dev/sdd1

This will create a block device called /dev/md0 that you can use as if it were a single large hard drive.

mkfs.btrfs /dev/md0

That will make the filesystem on the block device.

mkdir /mnt/bigraid
mount /dev/md0 /mnt/bigraid

This creates a mount point and mounts the filesystem.

To get it to mount every time you boot, add an entry for this filesystem in /etc/fstab
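
A sketch of what that entry could look like (get the actual UUID from blkid /dev/md0; the mount options are just an example):

UUID=<uuid-of-md0>  /mnt/bigraid  btrfs  defaults,noatime  0  0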

[–] [email protected] 1 points 3 days ago (2 children)

Do you need to do some maintenance to keep the data in the array intact?

I've read about some btrfs scrub commands and md checks, but I'm unsure how often to run them and what they actually do.

[–] [email protected] 2 points 23 hours ago

In my system, the raid arrays seem to do periodic data scrubbing automatically. Maybe it's something that's part of Debian, or maybe it's just a default kernel setting. I don't think it helps much with data integrity -- I think it helps more just by ensuring the continued functionality of the drives.

When it's running, you can type cat /proc/mdstat to see the progress.

That command will also show you if there is a failing drive, so that you can replace it.
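
If you want to kick off a check manually instead of waiting for the periodic one, something like this should work (md0 being the array from above; this uses the kernel's generic md sysfs interface):

echo check > /sys/block/md0/md/sync_action
cat /proc/mdstat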

[–] [email protected] 1 points 1 day ago

You should scrub your data regularly with btrfs. That's just a means to verify the data is intact, though; it detects corruption.
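
For example, a rough sketch assuming the array is mounted at /mnt/bigraid as above:

btrfs scrub start /mnt/bigraid
btrfs scrub status /mnt/bigraid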

You cannot really do anything actively to keep the data intact. Failure can and will happen. To keep your data safe, you must plan for failure to happen:

  • Expect a power surge to fry all your disks at the same time.
  • Expect your house to burn down or flood.
  • Expect to run the wrong command and instantly hose your entire array.
  • Expect your backup server to get ransomware'd.
  • ...

Only if you effectively mitigate these dangers will your data stay safe.

[–] [email protected] 1 points 3 days ago

Thanks for the info!

[–] [email protected] 1 points 3 days ago

Oops, missed that part.

[–] [email protected] 2 points 4 days ago (2 children)

Are there advantages to btrfs over RAID? I understand how RAID works, but btrfs for redundancy is foreign to me.

[–] [email protected] 5 points 4 days ago

BTRFS has RAID built into the file system - instead of using MD you use BTRFS profiles which tell the system how to handle data.

For instance

  • file system metadata (critical for the file system to function): raid1c3, which means 3 copies of the core file system metadata on 3 different devices
  • user data: raid1 (so duplicating all your data on two different devices)

With this setup you could lose one device (out of n; the total doesn't matter) without losing any data, and still be able to boot and recover without too much hassle.

BTRFS does block checksums, can scan for bit rot and recover from it, and generally tries to keep your data safe. It technically supports raid5/6 for user data; the issue is around unclean shutdowns and a potential write hole where you could lose data. But if your system has a UPS backup and is on a relatively recent kernel, it's not any more dangerous than MD raid5/6, as I understand it.
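
If you ever want to change profiles on an existing filesystem, btrfs can convert in place with a balance, roughly like this (a sketch; /mnt stands in for wherever the filesystem is mounted, and raid1c3 needs at least three devices):

btrfs balance start -mconvert=raid1c3 -dconvert=raid1 /mnt
btrfs filesystem df /mnt

The second command shows which profiles are currently in use.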

[–] [email protected] 2 points 4 days ago* (last edited 4 days ago)

I use BTRFS for snapshots and auto compression. Maybe that can be done on top of RAID with LVM? AFAIK BTRFS redundancy is basically the same as traditional RAID, similar to using mdadm. Still, you would want a backup strategy rather than relying on disk redundancy alone. I learned that the hard way.
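
For reference, the snapshot and compression bits look roughly like this (the mount point, device and date are just illustrative examples):

mount -o compress=zstd /dev/md0 /mnt/bigraid
btrfs subvolume snapshot -r /mnt/bigraid /mnt/bigraid/snap-2024-11-10

The -r flag makes the snapshot read-only, which is what you want if you treat it as a restore point.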