
It's been a while since I visited this topic, but a few years back, Xen (and by extension XCP-NG) was better known for security, whilst KVM (and thus Proxmox) was considered to have better performance (yes, I've heard the rumours of AWS moving from Xen to KVM for some instance types).

I would like to ask the community about the security measures you've taken to harden the default Proxmox and XCP-NG installations. Have you run the CIS benchmarks and performed hardening that way? Did you enable 2FA?

I'm also interested in hearing from people who run either of these in production: what steps did you take? Did you patch the Debian base (for PVE) or the CentOS base (I think, for XCP-NG)?

Thank you for responding!

[–] moonpiedumplings 4 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Firstly, Xen is considered secure by Qubes — but that's mainly about the security of the hypervisor and virtualization system itself. They make a very compelling argument that escaping a Xen-based virtual machine is going to be more difficult than escaping a KVM-based one.

But threat model matters a lot. Qubes aims to be the most secure OS available, for use cases like high-profile journalists or other people who absolutely need security because they could literally be killed without it.

Amazon moved to KVM because, despite the security trade-offs, it's "good enough" for their use case, and KVM is easier to manage because it's in the Linux kernel itself, meaning you get it whenever you install Linux on a machine.

In addition to that, security is about more than just the hypervisor. You noted that Proxmox is Debian-based, and XCP-NG is CentOS or a RHEL rebuild similar to Rocky/Alma, I think. I'll get to this later.

Xen (and by extension XCP-NG) was better known for security, whilst KVM (and thus Proxmox)

I did some research on this and was planning to write a blog post, but never got around to finishing it. I still have the draft saved, though.

| Name | Summary | Full Article | Notes |
| --- | --- | --- | --- |
| Performance Evaluation and Comparison of Hypervisors in a Multi-Cloud Environment | Compares WSL (kind of Hyper-V), VirtualBox, and VMware Workstation. | springer.com, html | Not an honest comparison, since WSL is likely using inferior drivers for filesystem access to promote integration with the host. |
| Performance Overhead Among Three Hypervisors: An Experimental Study using Hadoop Benchmarks | Compares Xen, KVM, and an unnamed commercial hypervisor, simply referred to as CVM. | pdf | |
| Hypervisors Comparison and Their Performance Testing (2018) | Compares Hyper-V, XenServer, and vSphere. | springer.com, html | |
| Performance comparison between hypervisor- and container-based virtualizations for cloud users (2017) | Compares Xen, native, and Docker. Docker and native have negligible performance differences. | ieee, html | |
| Hypervisors vs. Lightweight Virtualization: A Performance Comparison (2015) | Docker vs LXC vs native vs KVM. Containers have near-identical performance; KVM is only slightly slower. | ieee, html | |
| A component-based performance comparison of four hypervisors (2015) | Hyper-V vs KVM vs vSphere vs Xen. | ieee, html | |
| Virtualization Costs: Benchmarking Containers and Virtual Machines Against Bare-Metal (2021) | VMware Workstation vs KVM vs Xen. | springer, html | Most rigorous and in-depth on the list. Workstation, not ESXi, is tested. |

The short version is: it depends, and they can fluctuate slightly on certain tasks, but they are mostly the same in performance.

default Proxmox and XCP-NG installations.

What do you mean by hardening? Are you talking about hardening the management operating system (Proxmox's Debian or XCP-NG's RHEL-like dom0), or the hypervisor itself?

I agree with the other poster about CIS hardening and generally hardening the base operating system used. But I will note that XCP-NG is designed more as an "appliance" and you're not really supposed to touch it. I wouldn't be surprised if it's immutable nowadays.

For the hypervisor itself, it depends on how secure you want things, but I've heard that at Microsoft Azure datacenters they disable hyperthreading because it becomes a security risk. In fact, several Spectre/Meltdown-class side-channel vulnerabilities can be mitigated by disabling hyperthreading. Of course, there are other ways to mitigate those vulnerabilities, but by disabling hyperthreading you can eliminate that entire class of cross-thread attacks — at the cost of performance.
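If you want to experiment with that yourself, SMT can be toggled through the standard sysfs knob or pinned off at boot with a kernel parameter. A rough sketch (the GRUB file path is the usual Debian location, adjust for your setup):

```bash
# Turn SMT off at runtime (immediate, but not persistent across reboots)
echo off > /sys/devices/system/cpu/smt/control

# Verify the current state ("on", "off", "forceoff", ...)
cat /sys/devices/system/cpu/smt/control

# To make it persistent, add a kernel parameter in /etc/default/grub, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=auto,nosmt"
# then regenerate the config and reboot:
update-grub
```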

[–] [email protected] 1 points 2 weeks ago (1 children)

Thank you for the wonderful comment. I am talking about the operating system (Debian vs CentOS, if I remember correctly) when I mention hardening.

I haven't seen a concrete example of anyone applying CIS policies to the XCP-NG base, nor have I seen any mention of companies securing the XCP-NG base when using it in production. I understand that having a walled-off dom0 is great, and I like that about Xen, but the lack of dialogue on base-OS-level security makes me a bit uncomfortable about XCP-NG. I'm not sure if it is immutable; if it is, then that would relieve some of my worries.
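If nothing official exists, I suppose the fallback is to run a generic audit tool against dom0 just to get a baseline. Something like this sketch, using Lynis from a git checkout so nothing extra gets installed into the appliance:

```bash
# Run Lynis straight from a git checkout so nothing is installed into dom0
git clone https://github.com/CISOfy/lynis
cd lynis
./lynis audit system
```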

Personally, I think Proxmox is somewhat insecure too. I believe something like following the relevant STIG recommendations, kernel self-protection, hardened malloc and other things (there's a huge list, but I'll be brief) should be essential. Ideally I would have preferred the Proxmox project to take some of the measures the Kicksecure project does in hardening Debian, but I don't see any mention of something like that. If I end up wanting to run Proxmox, I'll install Debian, distro-morph it to Kicksecure and then follow the instructions for Proxmox (not sure how I'll avoid using the Proxmox custom kernel, but we'll see).
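For the hardened malloc piece specifically, my understanding is that you can preload GrapheneOS's hardened_malloc on a plain Debian host. A sketch of the idea (the install path is hypothetical, and preloading it system-wide can break some software, so I'd test per-service first):

```bash
# Build hardened_malloc from source (output/install paths are illustrative)
git clone https://github.com/GrapheneOS/hardened_malloc
cd hardened_malloc && make
install -m 0644 out/libhardened_malloc.so /usr/local/lib/libhardened_malloc.so

# Try it on a single process first...
LD_PRELOAD=/usr/local/lib/libhardened_malloc.so ls

# ...and only then consider enabling it globally
echo /usr/local/lib/libhardened_malloc.so >> /etc/ld.so.preload
```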

[–] moonpiedumplings 2 points 2 weeks ago (1 children)

Personally, I think Proxmox is somewhat insecure too.

Proxmox differs from other projects in that it's much more hacky, and much of the stack is custom rather than standard. For example, for networking they maintain a fork of ifupdown2, an older network configuration tool, whereas similar projects, like OpenStack or Incus, use either standard Linux kernel networking tooling or a project called Open vSwitch.
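To make the "standard Linux kernel networking" point concrete: the bridge those projects rely on is just the in-kernel bridge you can create with plain iproute2, nothing extra in the stack. A minimal sketch (interface names are examples):

```bash
# Create an in-kernel Linux bridge with plain iproute2
ip link add name br0 type bridge
ip link set br0 up

# Attach a physical NIC to it (adjust the interface name to your hardware)
ip link set eth0 master br0
```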

I think Proxmox is definitely secure enough, but I don't know if I would really trust it for higher-value use cases, since some of their stack is custom rather than standard and maintained by the wider community.

If I end up wanting to run Proxmox, I’ll install Debian, distro-morph it to Kicksecure

If you're interested in deploying a hypervisor on top of an existing operating system, I recommend looking into Incus or OpenStack. They have packages/deployments that can be done on Debian or Red Hat distros, and I would argue that they are designed in a more secure manner than Proxmox (since they include multi-tenancy). On top of that, they also use standard tooling for networking: both can use a Linux bridge (in-kernel networking) for networking operations.
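As a rough idea of how small the Incus footprint is on an existing Debian install (package availability depends on your release/backports, so check the Incus docs for the current recommendation):

```bash
# Install Incus from the distro repositories (or backports, depending on release)
apt install incus

# Interactive first-run setup: storage pool, network bridge, etc.
incus admin init

# Launch a test container to confirm everything works
incus launch images:debian/12 test
incus list
```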

I would trust OpenStack the most when it comes to security, because it is designed to be used as a public cloud (like having your own AWS), and it is deployed in the real world with components exposed to the public.

[–] [email protected] 1 points 2 weeks ago (1 children)

I had looked into OpenStack a while back but set it aside, thinking it was too complex. I was looking at Apache CloudStack at the time.

I see now that a contributor has got Debian onto the official list of supported distributions. Which means my distro-morphing idea should work in theory with OpenStack. This is a great idea, thanks. I will look at OpenStack more seriously now. It does look like it will need some effort, though.

[–] moonpiedumplings 1 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Which means my distro-morphing idea should work in theory with OpenStack

I don't recommend doing a manual install, though, as it's extremely complex compared to automated deployment solutions like kolla-ansible (OpenStack in Docker containers), openstack-ansible (host OS/LXC containers), or openstack-helm/genestack/atmosphere (OpenStack on Kubernetes). They make the install much simpler and less time-consuming, while still being intensely configurable.
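For a feel of what kolla-ansible looks like in practice, here's a compressed all-in-one sketch based on its quick-start; exact paths and steps shift between releases, so follow the docs for the version you actually deploy:

```bash
# On the deployment host, inside a virtualenv
python3 -m venv kolla && source kolla/bin/activate
pip install ansible-core kolla-ansible
# recent releases also need the Ansible Galaxy dependencies:
kolla-ansible install-deps

# Copy the example config and the all-in-one inventory shipped with the package
mkdir -p /etc/kolla
cp kolla/share/kolla-ansible/etc_examples/kolla/* /etc/kolla/
cp kolla/share/kolla-ansible/ansible/inventory/all-in-one .

# Generate service passwords, then bootstrap, precheck, and deploy
kolla-genpwd
kolla-ansible -i all-in-one bootstrap-servers
kolla-ansible -i all-in-one prechecks
kolla-ansible -i all-in-one deploy
```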

[–] [email protected] 1 points 2 weeks ago (1 children)

I see. But does the installation cover hardening steps like hardened_malloc, permission hardener, kernel self-protection, etc.?

[–] moonpiedumplings 1 points 2 weeks ago (1 children)

I don't think so, no. You'll have to do those yourself.
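The kernel self-protection part at least is mostly sysctl and boot-parameter work that you can layer onto any of these hosts. For example, something along these lines (an illustrative subset, not a complete or officially recommended profile):

```bash
# Drop a few commonly recommended kernel-hardening sysctls into place
cat > /etc/sysctl.d/99-hardening.conf <<'EOF'
kernel.kptr_restrict = 2
kernel.dmesg_restrict = 1
kernel.unprivileged_bpf_disabled = 1
net.core.bpf_jit_harden = 2
EOF

# Apply all sysctl configuration files
sysctl --system
```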
