this post was submitted on 05 Jul 2023
12 points (80.0% liked)

Selfhosted


I'm currently planning to build a new server after I discovered what my system uses at idle. Since I have to set up a new system anyway, I would like to add a NAS to it to manage my storage.

Currently I just have a ZFS pool in Proxmox for my data drives, and all VMs/containers that need access have escalated rights and can directly access the pool (and all other storage on Proxmox), which is a bit janky and definitely not best practice security-wise. Another negative side effect is that the drives are barely ever spun down. That's why I now want a NAS as the only system controlling the drive pool.

Here's where my question comes up: Should I run TrueNAS (Scale?) in a VM and pass the drives through somehow? Is that possible without mounting them in Proxmox, since I would like them fully controlled by the NAS, including running the ZFS pool? Or do I install TrueNAS Scale and then run Proxmox as a VM inside it? Would the performance penalty be huge there, and would I still be able to pass through USB/PCI devices (maybe even the CPU's iGPU to forward to Jellyfin, if that's even possible in Proxmox)?

top 14 comments
[–] [email protected] 12 points 1 year ago (3 children)

The best way to do this is to run a TrueNAS VM within Proxmox and pass your HBA through into the TrueNAS VM. That gives TrueNAS full control over any drive connected to that HBA. The performance overhead isn't that much, so don't worry about it.
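
For reference, a minimal sketch of what that looks like on the Proxmox side. The PCI address `0000:01:00.0` and VM ID `100` are placeholders; check yours with `lspci` and your VM list:

```shell
# Enable IOMMU first (Intel example; AMD uses amd_iommu=on instead)
# In /etc/default/grub: GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
update-grub && reboot

# Find the HBA's PCI address
lspci | grep -i 'sas\|sata\|raid'

# Pass the whole controller to the TrueNAS VM (VM ID 100 here)
qm set 100 --hostpci0 0000:01:00.0
```

With the whole controller passed through, the guest talks to the disks directly, so SMART and sector-level control stay intact.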

[–] [email protected] 3 points 1 year ago (1 children)

In a pinch, passing through individual drives also works with ZFS, but obviously you lose SMART, etc. I ran that for a few weeks before I managed to get ACS override working, and my ZFS pool got picked up when TrueNAS started.
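
For anyone curious, that per-disk "in a pinch" option is done by attaching the raw disk by its stable ID (the serial below is a placeholder, and the VM ID is assumed to be 100):

```shell
# List stable disk identifiers
ls -l /dev/disk/by-id/

# Attach a whole physical disk to VM 100 as a virtual SCSI device.
# The guest sees a QEMU disk, which is why SMART data is hidden from it.
qm set 100 --scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX-PLACEHOLDER
```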

[–] [email protected] 1 points 1 year ago (1 children)

ZFS not having access to SMART only works up until a drive starts acting up. Without SMART, ZFS can't accurately determine whether a drive is failing and lock the pool to prevent further data loss.

[–] [email protected] 1 points 1 year ago

I'm aware, but I'm saying in a pinch it will work, and when you pass your HBA through fully you won't need to reconfigure anything. At least I didn't.

[–] [email protected] 2 points 1 year ago

Thanks, your suggestion made me find this thread, which I'll try when my new mobo ships: https://forum.proxmox.com/threads/sata-disk-passthrough-with-smart-functionality.65779/post-296310

(I want to avoid an HBA card for idle-power consumption reasons)

[–] [email protected] 1 points 1 year ago

Yep, this is basically my plan once I finish my server.

[–] [email protected] 11 points 1 year ago (1 children)

There have been many posts about people running TrueNAS as a VM in Proxmox. There are a few things to consider that I'm not well versed in, so I suggest doing some more in-depth research, but it's definitely possible (I did it myself up until the end of last year).

One of the easiest ways to get the hard drives into TrueNAS is to connect them to a RAID card running in IT mode, which lets the OS control the drives directly (do not RAID them; TrueNAS wants the raw disks), and then pass the card through to the TrueNAS VM.
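
If you go that route, you can check whether the card is actually running IT (plain HBA) rather than IR (RAID) firmware before passing it through. A sketch, assuming an LSI SAS2-generation card where the vendor's `sas2flash` utility applies:

```shell
# Show which kernel driver is bound to the card
lspci -k | grep -A 3 -i 'sas'

# List LSI SAS2 controllers and their firmware type
# "IT" firmware = plain HBA, "IR" = RAID firmware
sas2flash -listall
```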

[–] [email protected] 3 points 1 year ago (1 children)

Thanks, I tried googling around a bit, but didn't find anything until I looked for HBA passthrough as suggested by another comment.

I would like to avoid adding a PCIe card for now, as I have enough SATA ports on my mobo and idle power consumption is one of my main reasons for switching, so I'll see if that works: https://forum.proxmox.com/threads/sata-disk-passthrough-with-smart-functionality.65779/post-296310

[–] einsteinx2 2 points 1 year ago

On my home NAS I started this way and it worked great. I only bought an HBA card because I needed more ports. Your mobo probably exposes your SATA controller as a PCIe device that can be passed through to a VM. In my case I booted Proxmox off an NVMe drive and passed my SATA controller to a Debian VM, where I just use simple NFS and Samba for sharing and SnapRAID for drive parity (but TrueNAS should work just as well).

I had zero issues with it, and when I upgraded to an HBA card I just switched the drives to those ports, switched the PCIe device I was passing through, and everything just worked (it helps that I always mount using partition UUIDs).
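
Mounting by partition UUID is what makes drives survive a controller swap like that. A sketch of the relevant `/etc/fstab` entry inside the VM (the UUID and mount point are placeholders; get the real UUID with `blkid`):

```shell
# /etc/fstab - mount a data disk by partition UUID instead of /dev/sdX,
# so the entry keeps working when the disk moves to different ports.
# UUID=0a1b2c3d-0000-0000-0000-000000000000 /mnt/data1 ext4 defaults,nofail 0 2
```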

[–] [email protected] 4 points 1 year ago

You might want to look into all-in-one zfs, which exports your pool as an NFS share internally and externally. In case you use spinning rust, L2ARC and ZIL are your friends.
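
A rough sketch of that setup (the pool name `tank`, dataset name, and device paths are placeholders):

```shell
# Share a dataset over NFS directly from ZFS
zfs set sharenfs=on tank/media

# Add an SSD partition as SLOG (ZIL) and another as L2ARC read cache,
# which helps most when the pool itself is spinning rust
zpool add tank log /dev/disk/by-id/nvme-SLOG-PLACEHOLDER-part1
zpool add tank cache /dev/disk/by-id/nvme-CACHE-PLACEHOLDER-part2
```

Note the ZIL (SLOG) only helps synchronous writes such as NFS, and L2ARC only pays off once RAM for ARC is exhausted.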

[–] [email protected] 1 points 1 year ago (1 children)

My current setup is TrueNAS Scale in Proxmox, drives connected to HBA. Proxmox has ACS override enabled, HBA is passed to the VM.

Please, do yourself a favor and get an HBA. Do not get a PCIe SATA card, they are unreliable.

[–] [email protected] 2 points 1 year ago (1 children)

Wait, isn't a PCIe SATA card an HBA by design?

[–] [email protected] 1 points 1 year ago

Okay, technically yes, but in this case we mean "proper" HBAs, i.e. PCIe SAS RAID cards flashed into passthrough/IT mode.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

You may take this video from apalrd's adventures as a good reference. It uses a container instead of a VM, so you can leverage Proxmox mount points to mount filesystem entities (e.g. ZFS datasets) into the container.

I am using a similar setup so I don't have to bother passing through all my HBAs or storage devices. My ZFS pools can live in the root OS, i.e. Proxmox, without much hassle.
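
The container approach boils down to a bind mount from the host. Roughly (container ID `101`, dataset path, and target path are placeholders):

```shell
# Bind-mount a ZFS dataset that lives on the Proxmox host
# into LXC container 101; the pool stays managed by Proxmox.
pct set 101 -mp0 /tank/media,mp=/mnt/media
```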
