this post was submitted on 06 Mar 2024
41 points (97.7% liked)

Selfhosted


I have a used Lenovo ThinkCentre 910 with an i5-7500T running Proxmox, with Linux Mint in one VM and AdGuard in another. I'm just getting started, so I'm still reading and searching for tons of answers.

I was hoping to host Jellyfin within Linux Mint. It works pretty well, but I did notice that while watching a movie the CPU was pretty well pegged. I wanted to enable hardware-based acceleration, but when I started reading setup guides to try to understand what I was doing, I realized I may have already painted myself into a corner.

I think I need to tell Proxmox to pass the hardware acceleration on to Linux, and then get Linux to use it, but some of the things I have read make it sound like I needed to set up the VM this way from the beginning.

Am I trying to do this the hard way somehow? Does anyone have any suggestions on the best guide to follow for this?
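Before changing anything on the Proxmox side, it's worth confirming what the guest can currently see. A quick sanity check, assuming a Debian/Ubuntu-based guest like Mint (package names may differ on other distros):

```shell
# Inside the Linux Mint VM: Jellyfin needs a DRM render node before
# it can use Quick Sync at all.
ls -l /dev/dri            # expect card0 and renderD128 once passthrough works
# Install the Intel VA-API driver and query it (older CPUs may need
# i965-va-driver instead of intel-media-va-driver).
sudo apt install -y vainfo intel-media-va-driver
vainfo                    # should list H.264/HEVC decode profiles
```

If `/dev/dri` doesn't exist in the VM, no amount of Jellyfin configuration will enable hardware transcoding; the GPU has to reach the guest first.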

top 18 comments
[–] [email protected] 11 points 8 months ago* (last edited 8 months ago) (4 children)

Check out the following link; I'm pretty sure it's what I used to get it all working.

https://3os.org/infrastructure/proxmox/gpu-passthrough/igpu-passthrough-to-vm/
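For reference, the host-side core of that guide boils down to a few steps. A rough sketch (the VM ID 100 and the PCI address are examples; check yours with `lspci`):

```shell
# On the Proxmox host: make sure the VFIO modules load at boot.
cat >> /etc/modules <<'EOF'
vfio
vfio_iommu_type1
vfio_pci
EOF
# Find the iGPU's PCI address (Intel iGPUs usually sit at 00:02.0).
lspci -nn | grep -i vga
# Hand the device to the VM (ID 100 is just an example).
qm set 100 --hostpci0 0000:00:02.0
```

The kernel command line changes discussed below still have to be in place before the `qm set` step does anything useful.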

[–] [email protected] 3 points 8 months ago* (last edited 8 months ago)

There’s no need to add all of those flags to your kernel command line; just the ones below will do the job:

intel_iommu=on iommu=pt video=efifb:off modprobe.blacklist=snd_hda_intel,snd_hda_codec_hdmi,i915

OP just needs to be aware that turning off the EFI framebuffer as above will result in no video output for the Proxmox host.

If you need further IOMMU group separation and your motherboard doesn’t support ACS, then you can add:

pcie_acs_override=downstream

If you run into problems booting Proxmox, you can simply remove the flags above at boot time and troubleshoot afterwards.
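Concretely, those flags go into `GRUB_CMDLINE_LINUX_DEFAULT` in `/etc/default/grub`. A minimal sketch against a scratch copy of the file (on the real host, edit the actual file, then run `update-grub` and reboot; ZFS-root installs boot via systemd-boot and use `/etc/kernel/cmdline` with `proxmox-boot-tool refresh` instead):

```shell
# The flags from the comment above; a stock Proxmox install ships
# with just "quiet" on this line.
FLAGS='intel_iommu=on iommu=pt video=efifb:off modprobe.blacklist=snd_hda_intel,snd_hda_codec_hdmi,i915'
cd "$(mktemp -d)"
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet"\n' > grub   # scratch copy
sed -i "s|quiet|quiet ${FLAGS}|" grub
cat grub   # the line now carries all of the passthrough flags
# On the real host, follow up with: update-grub && reboot
# and verify afterwards with:       dmesg | grep -i -e DMAR -e IOMMU
```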

[–] [email protected] 1 points 8 months ago

So this definitely seems like the guide that helped the most. I spent more hours than I would like to admit working on this over this weekend.

I am trying to figure out if I have a way to do what I want here. When it was all said and done, I could no longer log into the remote computer I was using to run my Jellyfin server. I had been remoting into it from a Linux PC and enjoyed that aspect of things, but it seems like once you get hardware acceleration going, you can no longer see the desktop. There were several warnings about this, so I wasn't entirely surprised when it happened.

I think I am going to end up getting rid of the Proxmox part and just running Linux directly on the computer if I want to do this, and remoting into it. It is currently not hooked up to a monitor and actually sits on top of my kitchen cabinets, out of the way. It was a fun challenge and I think I learned a lot.

[–] [email protected] 1 points 8 months ago

Seconding this. To add on: while it's gotten easier (and yes, even with this guide, it's much easier than it used to be), back up your system. Hopefully nothing goes wrong, but you are messing with the kernel, so make sure you're mentally prepared to rebuild it all from the ground up.

[–] [email protected] 1 points 8 months ago

Thanks for this! Looking forward to trying it out!

[–] [email protected] 10 points 8 months ago* (last edited 8 months ago)

Proxmox has an official guide on how to do this, which includes examples.

There's also a video from Jim's Garage where he sets up GPU passthrough in an unprivileged LXC container within proxmox.

See if any of that helps.
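For the LXC route, the usual trick is binding the host's `/dev/dri` render node into the container rather than detaching the GPU from the host. A minimal sketch for a privileged container (the container ID 101 is an example, and 226 is the DRM device major number; unprivileged containers additionally need uid/gid mapping for the device nodes, which is what the Jim's Garage video covers):

```
# /etc/pve/lxc/101.conf  (101 is an example container ID)
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
```

With this approach the host keeps its console, and several containers can share the same iGPU.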

[–] [email protected] 3 points 8 months ago

I did this recently and I wish I could answer you, but I'm on mobile and don't remember exactly what got it working. I also referenced the guide linked below, along with the proxmox documentation.

If you start blacklisting drivers, you've gone too far for passing through Intel Quick Sync. I think in the end it was a pretty basic config: checking motherboard settings and adding text to the GRUB config.

Also, most guides say you have to use q35 as the machine type for the VM, but that didn't work for me. Only i440fx works for me.
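For anyone hitting the same wall, the machine type is easy to flip back and forth from the host shell (VM ID 100 is an example):

```shell
# Show the VM's current machine type; no "machine" line in the output
# means it is using the i440fx default.
qm config 100 | grep -i machine
# Try q35 (what most guides assume)...
qm set 100 --machine q35
# ...or drop the setting to fall back to the i440fx default.
qm set 100 --delete machine
```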

[–] [email protected] 1 points 8 months ago (1 children)

Don't do that. Run Jellyfin in its own VM with a GPU passed through via PCIe passthrough.

[–] [email protected] 2 points 8 months ago (1 children)

That's going to be almost impossible to do with an iGPU. Makes way more sense to pass through to LXC.

[–] [email protected] 1 points 8 months ago* (last edited 8 months ago) (1 children)

It takes about 2-3 clicks. What do you mean impossible? LXC is likely faster but it takes more setup.

[–] [email protected] 1 points 8 months ago (1 children)

2-3 clicks? That's hilarious!

These are the steps it actually takes: https://3os.org/infrastructure/proxmox/gpu-passthrough/igpu-passthrough-to-vm/

That's the best case scenario where it actually works without significant issues, which I am told is rarely the case with iGPUs.

In my case it was considerably more complicated, as I have two GPUs from Nvidia (one used for the host's display output), so I needed to block specific device IDs rather than whole kernel modules.

Plus you lose display access to the Proxmox server, which matters if anything goes wrong. You can also only pass through to one VM at a time, whereas with LXC you can share the GPU with almost unlimited containers and still have display output for the host system. It almost never makes sense to use PCIe passthrough on an iGPU.

The main reason to do passthrough is gaming on Windows VMs. Another is that Nvidia support on Proxmox is poor.

This is a guide to do passthrough with LXC: https://blog.kye.dev/proxmox-gpu-passthrough

It's actually a bit less complicated for privileged LXC, since that guide has to work around the restrictions of unprivileged LXC containers.

[–] [email protected] 1 points 8 months ago (1 children)

It's always worked well for me. I pass through my dedicated graphics card and a USB controller to a Pop!_OS VM, and the integrated graphics to the Jellyfin VM. I initially had to enable virtualization extensions, and for the dedicated graphics there was a bit more setup, but for the most part it is reasonable.

[–] [email protected] 1 points 8 months ago (1 children)

My point is that it's not actually much (or potentially any) simpler to use PCIe passthrough than an LXC, yet it comes with more resource usage and more restrictions. Some hardware is more difficult to pass through, especially iGPUs. I don't think all iGPUs even use PCIe.

[–] [email protected] 1 points 8 months ago (1 children)

iGPUs are incredibly easy to pass through, and they are PCIe devices.

[–] [email protected] 1 points 8 months ago (1 children)

Not all of them. Have a look at a Raspberry Pi or Apple Silicon devices; in fact, I'm fairly sure most ARM SoCs don't use PCIe for their iGPUs. That makes sense when you consider the unified memory architecture some of these devices use. In case you aren't aware, Proxmox does indeed run on a Raspberry Pi, and I'm sure more ARM devices will be supported in the future. I believe an x86 device with unified memory could have the same problem.

[–] [email protected] 1 points 8 months ago (1 children)

If it wasn't connected via PCIe, how would the CPU talk to the GPU? Anyway, Proxmox does not officially support ARM, so that is a pretty minuscule use case. I'm not even sure why you would want Proxmox on a low-powered device.

For me, PCIe passthrough is the easiest. Virtualization adds little overhead in terms of raw performance, so it isn't a big deal. If you prefer LXC, that's fine, but my initial statement was based on my own experience.

[–] [email protected] 1 points 8 months ago* (last edited 8 months ago)

AMBA/AXI bus, in the case of the Pi. GPUs existed long before PCIe did, lol.

On some x86 systems the CPU and GPU aren't connected with PCIe either; AMD has Infinity Fabric, which it uses for things like the Instinct MI300 and some of its other APUs.

Edit: Oh yeah, ARM isn't just low power anymore; it's used in data centers and supercomputers these days. And even for a low-power node, there is lots you can do with it: file servers, DNS or Pi-hole, web servers, torrent/Usenet downloaders, image and music servers, etc. I have also seen them used to maintain cluster quorum after the loss of a more powerful node: a two-node cluster loses quorum if one node fails, so adding a Pi as a third node makes sense.

[–] [email protected] 1 points 8 months ago

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters  More Letters
DNS            Domain Name Service/System
LXC            Linux Containers
PCIe           Peripheral Component Interconnect Express

[Thread #584 for this sub, first seen 9th Mar 2024, 13:25] [FAQ] [Full list] [Contact] [Source code]