this post was submitted on 17 Jul 2023

Selfhosted

I’m working on setting up my first homelab. I have an older Dell OptiPlex with a dual-port PCIe NIC in it. I was wondering if I could set up OPNsense as a Docker container or virtual machine, so that I could also use the extra resources of the box for other things besides just being a router. Is this a good idea?

top 16 comments
[–] [email protected] 9 points 1 year ago

Hey, as others have said, you can definitely set up OPNSense in a VM and it works great. I wanted to take a second and answer the first part of your question: it cannot run in Docker. Containers in Docker share their kernel with the Linux host machine. Since OPNSense isn’t a Linux distribution (it’s based on FreeBSD), it can’t make use of the shared Linux kernel.
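
As a quick illustration of that kernel sharing (just a sketch; it assumes Docker is installed and can pull the alpine image):

```python
#!/usr/bin/env python3
"""Show that a Docker container reports the *host's* kernel, not its own."""
import platform
import subprocess

host_kernel = platform.release()
container_kernel = subprocess.run(
    ["docker", "run", "--rm", "alpine", "uname", "-r"],
    capture_output=True, text=True, check=True,
).stdout.strip()

print(f"host kernel:      {host_kernel}")
print(f"container kernel: {container_kernel}")
# Both lines print the same Linux kernel release: the container brings no
# kernel of its own, which is why a FreeBSD-based system like OPNsense
# can't run as a Docker container on a Linux host.
```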

[–] [email protected] 6 points 1 year ago (2 children)

Yeah, this is perfectly doable. I ran a very similar setup for a while. I'd recommend passing one of the NICs directly through to the VM and using one for the host to keep it simple, but you can also virtualize the networking if you need something more complex. If you do pass through a single NIC, you'll need a switch capable of handling VLANs and a bit of knowledge on how to set up what's called a "router on a stick" with everything trunked over one connection and only separated by VLANs.
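
If you go the virtualized-networking route, the host-side plumbing for "router on a stick" is basically one trunked NIC split into VLAN sub-interfaces, each attached to a bridge the VM joins. A rough sketch using pyroute2 (the interface names and VLAN IDs are made-up examples; Proxmox or libvirt can do the same thing declaratively):

```python
#!/usr/bin/env python3
"""Sketch: split one trunked NIC into VLAN sub-interfaces bridged for VMs. Run as root."""
from pyroute2 import IPRoute

TRUNK = "enp1s0"   # hypothetical physical NIC carrying the VLAN trunk
VLANS = [10, 20]   # hypothetical LAN and DMZ VLAN IDs

ip = IPRoute()
trunk_idx = ip.link_lookup(ifname=TRUNK)[0]
ip.link("set", index=trunk_idx, state="up")

for vid in VLANS:
    vlan_if = f"{TRUNK}.{vid}"   # tagged sub-interface on the trunk
    bridge_if = f"vmbr{vid}"     # bridge the VM's virtual NIC will join
    ip.link("add", ifname=vlan_if, kind="vlan", link=trunk_idx, vlan_id=vid)
    ip.link("add", ifname=bridge_if, kind="bridge")
    vlan_idx = ip.link_lookup(ifname=vlan_if)[0]
    bridge_idx = ip.link_lookup(ifname=bridge_if)[0]
    ip.link("set", index=vlan_idx, master=bridge_idx)
    ip.link("set", index=vlan_idx, state="up")
    ip.link("set", index=bridge_idx, state="up")

ip.close()
```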

Keep in mind, while this is a great way to save resources, it also means these systems are sharing resources. If you need to reboot, you're taking everything down. If you have other users, that might be annoying for everyone involved.

[–] wiggles 1 points 1 year ago (1 children)

I have a managed switch. I’m a little confused about how everything would be hooked up if I’m using a VM for pfSense and another VM for some Linux distro. I want the router and that distro to be isolated from my other VLANs. Could I use the onboard NIC, hooked up to the switch, to put the distro on its own VLAN?

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

You can absolutely attach each VM, and even the host, to separate NICs that each connect back to the switch on their own VLAN. You can also attach everything to one NIC and just use a virtual bridge (or several) on the host to connect everything. Or any combination thereof. You have complete freedom to set it up however suits your needs. How this is done depends on what hypervisor you're using on the host, though, so I can't give you exact directions.

One thing I should have thought of before: if the two NICs are on a single PCI card, you probably can't pass them through to the VM independently of one another. So that would limit you to virtual networking if you want to split them.
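
Whether they can be split comes down to IOMMU groups, which you can check ahead of time on a Linux host; a quick sketch (standard sysfs path, run on the hypervisor itself):

```python
#!/usr/bin/env python3
"""List IOMMU groups so you can see whether both NIC ports share a group."""
import os

base = "/sys/kernel/iommu_groups"
if not os.path.isdir(base):
    raise SystemExit("No IOMMU groups found: is VT-d/AMD-Vi enabled in firmware and the kernel?")

for group in sorted(os.listdir(base), key=int):
    devices = os.listdir(os.path.join(base, group, "devices"))
    print(f"group {group}: {' '.join(sorted(devices))}")
# If both ports of the dual NIC land in the same group, they generally
# have to be passed through to the same VM together.
```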

[–] [email protected] -2 points 1 year ago (2 children)

Passing through a NIC adds complexity rather than lessening it, and it's a bad idea for a plethora of reasons.

[–] [email protected] 2 points 1 year ago (1 children)

I would strongly disagree. In terms of setting up OPNSense (I use pfSense, but same concept), it's easier to just do a PCI passthrough. The alternative is to create a virtual network adapter on your hypervisor, bridge it to a physical NIC, and bind the virtual adapter to the VM. The only advantage to be gained from that is being able to switch between physical NICs without reconfiguring the OPNSense installation. For someone with a homelab, when would you ever need to do that?

My Proxmox server uses a 10Gb PCIe adapter for its primary network interface. The onboard NICs are all passed through to pfSense; I've never had any need to change that, and it's been that way for years.

I don't mean this to sound overly critical, and I'm happy to be proven wrong. I just don't see a "plethora of reasons" why doing PCI passthrough on a NIC is a bad idea.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

I'm happy to discuss it, as I've written articles about it.

I do high-level routing and firewalling in VMs (60 Gbps+), and there are a couple of realities you need to accept, especially when you involve a *BSD in the mix.

  1. *BSD's networking drivers and, to a lesser degree, the whole stack SUUUCK. This becomes extra poignant when you involve *pf, which is incredible for hand editing, but also horrible for performance because it's a straight top-to-bottom list.
  2. We could argue about the whole networking stack sucking all day, but in reality, it's the driver situation that really brings it down. That's why "You must buy Intel" is such a mantra on *BSD: those are about the only drivers that don't make for a completely horrible experience. You can meme about how terrible Realtek is, but really it's only terrible on *BSD. It's a first-class Linux citizen, and often supports better hardware features than the ancient X520, pre-ConnectX-4, etc. cards people circle-jerk about. And you often end up losing out on cool new features/offloads/abilities.
  3. The virtio drivers are usually more efficient and performant than most physical hardware drivers (on *BSD)
  4. You asked "why would anyone ever need to do that?". It's simple. High availability. You can run two router/firewall VMs on two different hosts and have zero downtime. Or, if you only want one, you can migrate the VM either manually or automagically, and only suffer the downtime for a reboot as the VM moves to a different host. You can share the same physical NIC between multiple VMs with SR-IOV for maximum low-latency networking, aka storage. It's a waste throwing 10Gb at just pfSense when it'll be idle most of the time, and with older hardware pfSense isn't going to even be able to hit half of that.
  5. A fully virtualized VM just works if you ever have to move it to another host. With passthrough, your main routing and firewall VM is tied to one specific host. In a disaster recovery situation, this is going to make you hate yourself, because you basically end up either physically pulling a card and re-doing passthrough, or setting up passthrough on a new card and making sure the VM is bound to those MACs. When it's fully virtualized, it's hardware agnostic. Your VM may think it has a single 10Gb link, but underneath, the links can be highly available (think vSphere vDS), on different VLANs, etc. My example here is from a few years ago, when my main router (the one with the 40Gb uplinks) died and I swapped in a Z8350 Wyse 3040. Sure, I was limping for a few days, but as far as my router was concerned, there was no difference.
  6. NUMA becomes an issue. Even single processors have NUMA nodes now, and it wouldn't be difficult for someone who doesn't know what a NUMA node is to create a NUMA problem, where you incur huge penalties going from CPU/chipset to RAM to NIC and back again, depending on where the parts are physically arranged in the system. This is doubly poignant in the *BSD world (see the sketch after this list).
  7. If a 1Gb interface is your bottleneck, your network design is broken. There is no reason for most people in a homelab to try to route >1Gbps at the edge. Don't packet-inspect it, and internally you can run 10Gbps and beyond. Sure, a >1Gbps uplink might be a reason in 2023, but what's your 95th percentile? Like 25Mbps if you're lucky. It's only "hawt" for your speedtest numbers and the occasional download. And you can do 10Gbps pretty easily with virtio on basically any semi-modern system (especially for the large file transfers most people want 10Gb for), without dedicating a PCIe slot to it, and the VM stays portable.
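
Re: point 6, a quick way to see which NUMA node a NIC hangs off of on a Linux host (the sysfs path is standard; the interface name is just an example):

```python
#!/usr/bin/env python3
"""Print the NUMA node of a NIC so vCPUs/RAM can be kept on the same node."""
import pathlib

IFACE = "enp1s0"  # hypothetical interface name; substitute your own
node_file = pathlib.Path(f"/sys/class/net/{IFACE}/device/numa_node")

if node_file.exists():
    node = node_file.read_text().strip()
    # "-1" means the platform reports no NUMA affinity for this device.
    print(f"{IFACE}: NUMA node {node}")
else:
    print(f"{IFACE}: no PCI/NUMA info (virtual interface?)")

# Compare against where the VM's vCPUs and memory are pinned (e.g. the
# output of `numactl --hardware`, or your hypervisor's NUMA settings) to
# avoid bouncing packets across nodes.
```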

I mean, you do you. But I'd much rather just be able to change the uplink on a vSwitch or bridge to get my router going again, instead of having to reboot, redo passthrough, add GRUB CLI options, swap cards, etc.

[–] [email protected] 1 points 1 year ago

Having tried both, I found it far easier and less troublesome to just add a PCI passthrough than it is to worry about managing the network both on the host and in the VM. As long as FreeBSD supports the driver, I strongly recommend passthrough vs virtualized NICs.

[–] [email protected] 4 points 1 year ago (1 children)

You can do it as a VM.

The only downside is that you lose internet while rebooting your server, which may not be a big deal.

[–] [email protected] 1 points 1 year ago

I have OPNsense virtualized on a Proxmox server alongside a couple of things that should hardly ever need restarts. It actually works pretty well, because the host almost never needs a reboot, and rebooting a VM is way faster than rebooting bare metal.

[–] [email protected] 3 points 1 year ago (1 children)

I have pfSense virtualized with no issues.

[–] [email protected] 2 points 1 year ago

A bit more about mine, now that I have a little more time: it's a VM on VMware with two virtual interfaces, one on my DMZ VLAN and the other a trunk with the rest of my VLANs. Within the *sense, I have two "physical" interfaces, and then virtual interfaces that correspond to the VLANs. My router is plugged into my switch on an access port for the DMZ, and the ESXi hosts are connected to the switch with VLAN trunks. This allows me to migrate the router VM to another host for reboots.

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago)

I use OPNsense virtualized on top of Proxmox. Each physical interface of the host system (ethX and friends) is in its own bridge (vmbrX), and for each bridge, the OPNsense VM has a virtual interface that is part of that bridge. It has worked flawlessly for months now.

[–] [email protected] 1 points 1 year ago

I'm doing it as a VM running on TrueNAS, and it works perfectly. The LAN NIC is shared between the host and OPNsense, and the WAN NIC is passed through to the VM as hardware.

It's much better than my USG4 Pro, so that now sits next to the server, turned off.

[–] [email protected] 1 points 1 year ago

The only issue I had with a similar setup: it turns out the old HP desktop I bought didn't support VT-d on the chipset, only on the CPU. Had to do some crazy hacks to get it to forward a 10GbE NIC plugged into the x16 slot.

Then I discovered the NIC I had was just old enough (ConnectX-3) that getting it to forward properly was finicky, so I had to buy a much more expensive ConnectX-4. My next task is to see if I can give it a virtual NIC, have OPNsense only listen for web requests on that interface, and use the host's Nginx reverse-proxy container for SSL.

[–] [email protected] 1 points 1 year ago

Yes, you can. You need a hypervisor capable of IOMMU passthrough. I know for a fact that you can do it with libvirtd and KVM/QEMU, and I think you can do it with Proxmox. That said, I've no experience doing this myself.
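
For the libvirt/KVM route specifically, passing a NIC through by PCI address is done with a hostdev device. A minimal sketch with the libvirt Python bindings (the VM name and PCI address are placeholders; get the real address from `lspci`):

```python
#!/usr/bin/env python3
"""Sketch: attach a physical NIC to an existing libvirt VM via PCI passthrough."""
import libvirt  # provided by the libvirt-python package

VM_NAME = "opnsense"  # placeholder domain name
HOSTDEV_XML = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
"""  # placeholder PCI address (0000:03:00.0)

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName(VM_NAME)
# Persist the passthrough in the VM definition; it takes effect on the next boot.
dom.attachDeviceFlags(HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
conn.close()
```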
