GPU passthrough with one GPU

Hi, I wanna ask: when you say “GPU passthrough” here, what do you mean by that?

Can I pass through one GPU and have it hooked up to both the host OS and the guest? Or at least just to the guest, with the host OS having no GPU but still running in the background (and eventually getting the GPU back)?

Or is the only option to actually have two GPUs, and attach one per instance?

And I guess that potentially differs between VM and container?

Thanks for the help :slight_smile:

You can. If the guest is a container, the container won’t take the GPU as its own; the host can still use the GPU, and so can other containers. There isn’t really a term for that yet, so it’s still called GPU passthrough. If the guest is a VM, the VM will take the GPU as its own; that’s real GPU passthrough. You can read the PVE docs on how to do it.
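To make the two cases concrete, here is a minimal sketch using Incus’s gpu device type (the instance and device names are placeholders I made up, not from the thread):

```
# Container: the host's GPU device nodes are exposed into the
# container's namespace; the host and other containers keep using the GPU.
incus config device add mycontainer gpu0 gpu

# VM: the same device type instead detaches the GPU from its host
# driver and hands the whole PCI device to the guest.
incus config device add myvm gpu0 gpu
```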


ok, thanks a lot.

That’s a lot better than I expected, and I hope other people learn that without it having to be explicitly mentioned.

Is there some info on how this is implemented, so I could link it from the article I referenced, to make it easier for others to understand?

I’ve never found any. I’ve tried a lot to find out how it works.


For virtual machines, it relies on the VFIO kernel modules (specifically: vfio, vfio_iommu_type1, vfio_pci). Which modules you need to load is distribution-dependent, IIRC; some distributions have all the modules built in and some don’t. You can check with cat /boot/config-... | grep CONFIG_VFIO; entries marked m require you to load the module explicitly via /etc/modules-load.d, /etc/modules, or similar.
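For example, a minimal sketch (the config file name follows your running kernel, and vfio.conf is just a placeholder filename):

```
# See whether the VFIO options are built in (=y) or loadable modules (=m)
grep CONFIG_VFIO "/boot/config-$(uname -r)"

# If they are =m, load the modules now...
sudo modprobe -a vfio vfio_iommu_type1 vfio_pci

# ...and have them loaded automatically at every boot
printf 'vfio\nvfio_iommu_type1\nvfio_pci\n' | sudo tee /etc/modules-load.d/vfio.conf
```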

You can take a gander at some GPU passthrough articles or the Arch Wiki page to get a rough idea of the required steps; they should be the same in concept across distributions. Incus manages rebinding the devices to the VFIO modules dynamically, so you don’t have to do anything yourself. You can do a before-and-after comparison of your device with lspci -k (and, if you know your device’s PCI address, lspci -k -s ...).
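Something like this, where 01:00.0 stands in for whatever PCI address your GPU actually has:

```
# Before the VM starts: the GPU is bound to its regular driver,
# e.g. "Kernel driver in use: amdgpu" (or nvidia, nouveau, ...)
lspci -k -s 01:00.0

# After the VM starts with the GPU attached, the same line
# should read "Kernel driver in use: vfio-pci"
lspci -k -s 01:00.0
```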

There isn’t a definitive source on VFIO available on the web, but you could dive into the KVM and QEMU mailing lists for the history of the subject. If it helps, VFIO replaced the old pci-stub driver.

For container instances, it’s simpler to think of them as just processes instead of their own isolated system. You aren’t passing anything through to a separate system; you’re just exposing the device path to a set of isolated processes. It’s still the same kernel, after all.
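You can see both halves of that from the host (the instance name is again a placeholder): the container reports the host’s kernel, and the host’s GPU device nodes simply show up inside it.

```
# Same kernel inside and outside the container
uname -r
incus exec mycontainer -- uname -r

# The host's GPU device nodes, exposed into the container
incus exec mycontainer -- ls -l /dev/dri
```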
