this post was submitted on 12 Feb 2024

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.


With the demise of ESXi, I am looking for alternatives. Currently I have pfSense virtualized with four physical NICs and a bunch of virtual ones, and it works great. Does Proxmox do this with anything like the ease of ESXi? Any other ideas?

you are viewing a single comment's thread
[–] [email protected] 1 points 10 months ago (1 children)

Incus looks cool. Have you virtualised a firewall on it? Is it as flexible as Proxmox in terms of hardware passthrough options?

I find zero mentions online of opnsense on incus. πŸ€”

[–] [email protected] 2 points 10 months ago* (last edited 10 months ago) (4 children)

Yes, it does run, but BSD-based VMs running on Linux come with the usual quirks. This might be what you're looking for: https://discuss.linuxcontainers.org/t/run-freebsd-13-1-opnsense-22-7-pfsense-2-7-0-and-newer-under-lxd-vm/15799

Since you want to run a firewall/router, you can ignore LXD's networking configuration and let your opnsense assign addresses and whatnot to your other containers. You can create whatever bridges / VLAN-based interfaces you need on the base system and then assign them to profiles/containers/VMs. For example, create a cbr0 network bridge using systemd-networkd and then run lxc profile device add default eth0 nic nictype=bridged parent=cbr0 name=eth0. This will use cbr0 as the default bridge for all machines, and LXD won't provide any addressing or touch the network; it will just create an eth0 interface on those machines, attached to the bridge. Your opnsense can then sit on the same bridge and do DHCP, routing, etc. Obviously you can also pass entire PCI devices through to VMs and containers if required.
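A sketch of that cbr0 setup with systemd-networkd (file names here are illustrative; pick your own):

```shell
# Define the bridge itself
cat > /etc/systemd/network/cbr0.netdev <<'EOF'
[NetDev]
Name=cbr0
Kind=bridge
EOF

# Bring the bridge up without assigning it any addressing
cat > /etc/systemd/network/cbr0.network <<'EOF'
[Match]
Name=cbr0

[Network]
LinkLocalAddressing=no
EOF

systemctl restart systemd-networkd

# Attach every machine using the default profile to cbr0
lxc profile device add default eth0 nic nictype=bridged parent=cbr0 name=eth0
```

With this in place, LXD/Incus does no DHCP or routing of its own; the opnsense VM on the same bridge handles all of that.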

When you're searching around for help, instead of "Incus" you can search for "LXD", as it tends to give better results. Not sure if you're aware, but LXD was the original project run by Canonical; it was recently forked into Incus (maintained by the same people who created LXD at Canonical) to keep the project open under the Linux Containers initiative.

[–] [email protected] 2 points 10 months ago (1 children)

With Incus only officially supported in Debian 13, and LXD on the way out, should I get going with LXD and migrate to Incus later? Or use the Zabbly repo and switch over to official Debian repos when they become available? What's the recommended trajectory, would you say?

[–] [email protected] 2 points 10 months ago (1 children)

It depends on how fast you want updates. I'm sure you know how Debian works, so if you install LXD from the Debian 12 repositories you'll most likely be on 5.0.2 LTS forever. If you install from Zabbly you'll get the latest and greatest right now.
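For reference, the two install paths look roughly like this; the Zabbly repository details below are from memory, so verify the key URL and suite name against Zabbly's own published instructions before using them:

```shell
# Option 1: Debian 12 repositories -- stays on the 5.0.x LTS branch
apt update && apt install lxd

# Option 2: Zabbly package repository -- tracks the latest releases.
# Treat the URL and suite name as a sketch; check Zabbly's docs:
curl -fsSL https://pkgs.zabbly.com/key.asc -o /etc/apt/keyrings/zabbly.asc
echo "deb [signed-by=/etc/apt/keyrings/zabbly.asc] https://pkgs.zabbly.com/lxd/stable bookworm main" \
  > /etc/apt/sources.list.d/zabbly-lxd.list
apt update && apt install lxd
```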

My company's machines all run LXD from the Debian repositories, except for two that run from Zabbly for testing and whatnot. At home I'm running from the Debian repo. Migration from LXD 5.0.2 to a future version of Incus with Debian 13 won't be a problem, as Incus is just a fork, and stgraber and other members of the Incus/LXC projects work closely with Debian or also work on it directly.

Debian users will be fine one way or the other. I specifically asked stgraber about what's going to happen in the future and this was his answer:

We’ve been working pretty closely to Debian on this. I expect we’ll keep allowing Debian users of LXD 5.0.2 to interact with the image server either until trixie is released with Incus available OR a backport of Incus is made available in bookworm-backports, whichever happens first.

I hope this helps you decide.

[–] [email protected] 2 points 10 months ago

Absolutely. Great intel; thank you!

[–] [email protected] 2 points 10 months ago (1 children)
[–] [email protected] 1 points 10 months ago

Enjoy your 30 min of Incus :P

[–] [email protected] 1 points 10 months ago (1 children)

Very informative, thank you.

I am generally very comfortable with Linux, but somehow this seems intimidating.

Although I guess I'm not using proxmox for anything other than managing VMs, network bridges and backups. Well, and for the feeling of using something that was set up by people who know what they're doing and not hacked together by me until it worked...

[–] [email protected] 2 points 10 months ago (1 children)

I guess I’m not using proxmox for anything other than managing VMs, network bridges and backups.

And LXD/Incus can do that for you as well. Install it, and running incus init will ask you a few questions and give you an automated setup with networking, storage, etc., all running and ready for you to create VMs/containers.
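As a rough sketch of that flow (on LXD the subcommand is lxd init; recent Incus releases call it incus admin init — the image alias below is an assumption, adjust to taste):

```shell
# One-time interactive setup: storage pool, network bridge, etc.
lxd init

# After that, creating machines is a one-liner each:
lxc launch images:debian/12 ct1         # system container
lxc launch images:debian/12 vm1 --vm    # full virtual machine
lxc list                                # overview of everything running
```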

What I was saying is that you can also ignore the default / automated setup and install things manually if you've other requirements.

[–] [email protected] 1 points 10 months ago* (last edited 10 months ago) (1 children)

Okay, I think I found a bit of a catch with Incus/LXD. I want a solution with a web UI, and while Incus has one, its access control seems to be either browser-certificate based or tied to a central auth server. Neither is a good solution for me: I would much prefer regular user auth, with the option to use an auth server at some point (but I don't want to take all of this on at once).

I hope it's okay that I keep coming back to you with these questions. You seem to be a strong Incus-evangelist. :)

I guess I could expose the web UI only on localhost and create an SSH tunnel in order to use it...? Not so good on mobile, though, which is the strongest reason for a web UI.
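For reference, that tunnel would be a single command (8443 is the port the LXD/Incus API is commonly exposed on; adjust to your core.https_address setting):

```shell
# Forward the UI to the local machine, then browse to https://localhost:8443
ssh -N -L 8443:127.0.0.1:8443 user@incus-host
```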

[–] [email protected] 2 points 10 months ago* (last edited 10 months ago) (1 children)

You aren't wrong: the WebUI is stateless; it doesn't know of any users, nor does it store any other context information.

The certificates are required for the UI client to authenticate with the underlying LXD server itself. Much like SSH authentication, it boils down to creating a public/private key pair; the private key is added to your browser(s) and the public key to the server. I believe this is a good walkthrough of the process for anyone starting out.
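The gist of such a walkthrough, as a sketch (file names are illustrative): create a key pair, import the private half into the browser in PKCS#12 form, and trust the public half on the server:

```shell
# Self-signed client certificate for the browser
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
  -keyout lxd-ui.key -out lxd-ui.crt -subj "/CN=lxd-ui"

# Browsers import client certificates as PKCS#12 bundles
openssl pkcs12 -export -out lxd-ui.pfx -inkey lxd-ui.key -in lxd-ui.crt

# On the server, add the public certificate to the trust store
lxc config trust add lxd-ui.crt
```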

At work we use Authelia and HAProxy to get around the need to distribute a certificate to each client and to manage our logins with SSO and 2FA. At home I simply use Nginx as a reverse proxy to the WebUI, with proxy_ssl_certificate passing a certificate down to it. Here is another configuration example of how to use Nginx to pass the certificate; you can then use HTTP Basic Auth to add a simple username/password on top.
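A minimal sketch of that Nginx setup, with placeholder paths, hostname, and the common 8443 upstream port (the htpasswd file would be created separately, e.g. with the htpasswd tool):

```nginx
server {
    listen 443 ssl;
    server_name incus.example.internal;

    ssl_certificate     /etc/nginx/certs/server.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;

    # Simple username/password in front of the UI
    auth_basic           "Incus UI";
    auth_basic_user_file /etc/nginx/htpasswd;

    location / {
        proxy_pass https://127.0.0.1:8443;
        # Present the trusted client certificate to LXD/Incus on behalf
        # of the browser, so no per-browser certificate is needed
        proxy_ssl_certificate     /etc/nginx/certs/lxd-ui.crt;
        proxy_ssl_certificate_key /etc/nginx/certs/lxd-ui.key;
    }
}
```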

[–] [email protected] 2 points 10 months ago* (last edited 10 months ago)

Thanks for your patience. I appreciate it and I'm learning a lot. πŸ™

There's a chance yet!

edit: That actually seems simple enough and should integrate nicely with the rest of my network. Cool!

[–] [email protected] 1 points 10 months ago* (last edited 10 months ago) (1 children)

I have another question, if you don't mind: I have a debian/incus+opnsense setup now, created bridges for my NICs with systemd-networkd and attached the bridges to the VM like you described. I have the host configured with DHCP on the LAN bridge and ideally (correct me if I'm wrong, please), I'd like the host to not touch the WAN bridge at all (other than creating it and hooking it up to the NIC).

Here's the problem: if I don't configure the bridge on the host with either dhcp or a static IP, the opnsense VM also doesn't receive an IP on that interface. I have a br0.netdev to set up the bridge, a br0.network to connect the bridge to the NIC, and a wan.network to assign a static IP on br0, otherwise nothing works. (While I'm working on this, I have the WAN port connected to my old LAN, if it makes a difference.)

My question is: Is my expectation wrong or my setup? Am I mistaken that the host shouldn't be configured on the WAN interface? Can I solve this by passing the pci device to the VM, and what's the best practice here?

Thank you for taking a look! 😊

[–] [email protected] 2 points 10 months ago* (last edited 10 months ago) (1 children)

Am I mistaken that the host shouldn’t be configured on the WAN interface? Can I solve this by passing the pci device to the VM, and what’s the best practice here?

Passing the PCI network card / device to the VM would make things more secure, as the host won't be configured on / touching the network card exposed to the WAN. Nevertheless, passing the card to the VM makes things less flexible, and it isn't required.
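If you did go the passthrough route, it's a one-liner per device (the VM name, device name, and PCI address below are placeholders; VMs must be stopped before adding PCI devices):

```shell
lspci | grep -i ethernet          # find the card's address, e.g. 0000:03:00.0
lxc stop opnsense
lxc config device add opnsense wan0 pci address=0000:03:00.0
lxc start opnsense
```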

I think there's something wrong with your setup. One of my machines has a br0 and a setup like yours. 10-enp5s0.network is the physical "WAN" interface:

root@host10:/etc/systemd/network# cat 10-enp5s0.network
[Match]
Name=enp5s0

[Network]
# Note: we're just saying that enp5s0 belongs to the bridge; no IPs are
# assigned here. (systemd-networkd comments must be on their own lines.)
Bridge=br0
root@host10:/etc/systemd/network# cat 11-br0.netdev
[NetDev]
Name=br0
Kind=bridge
root@host10:/etc/systemd/network# cat 11-br0.network
[Match]
Name=br0

[Network]
# In my case I'm also requesting an IP for my host, but this isn't
# required; setting it to "no" will also work.
DHCP=ipv4

Now, I have a profile for "bridged" containers:

root@host10:/etc/systemd/network# lxc profile show bridged
config:
 (...)
description: Bridged Networking Profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
(...)

And one of my VMs with this profile:

root@host10:/etc/systemd/network# lxc config show havm
architecture: x86_64
config:
  image.description: HAVM
  image.os: Debian
(...)
profiles:
- bridged
(...)

Inside the VM the network is configured like this:

root@havm:~# cat /etc/systemd/network/10-eth0.network
[Match]
Name=eth0

[Link]
RequiredForOnline=yes

[Network]
DHCP=ipv4

Can you check if your config is done like this? If so it should work.

[–] [email protected] 1 points 10 months ago* (last edited 10 months ago) (1 children)

My config was more or less identical to yours, which removed some doubt and let me focus on the right part: without a network config for br0, the host wasn't bringing it up on boot. I thought it had something to do with the interface having an IP, but it turns out the following works as well:

user@edge:/etc/systemd/network$ cat wan0.network
[Match]
Name=br0

[Network]
DHCP=no
LinkLocalAddressing=ipv4

[Link]
RequiredForOnline=no

Thank you once again!

[–] [email protected] 2 points 10 months ago* (last edited 10 months ago) (1 children)

Oh, now I remember that there's ActivationPolicy= in [Link], which can be used to control what happens to the interface. At some point I even reported a bug on that feature and VLANs.

I thought it had something to do with the interface having an IP (...) LinkLocalAddressing=ipv4

I'm not so sure it is about the interface having an IP... I believe your current LinkLocalAddressing=ipv4 is forcing the interface to come up, since it has to assign a link-local IP. Maybe you can set LinkLocalAddressing=no and ActivationPolicy=always-up and see how it goes.
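Put together, that suggestion would look something like this (a sketch, untested per the above):

```ini
[Match]
Name=br0

[Link]
RequiredForOnline=no
ActivationPolicy=always-up

[Network]
DHCP=no
LinkLocalAddressing=no
```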

[–] [email protected] 2 points 10 months ago (1 children)

You know your stuff, man! It's exactly as you say. πŸ™

[–] [email protected] 1 points 10 months ago

You're welcome.