Security issue, VM visible on server screen

Hi
I’ve a Framework Desktop system with a single GPU. I’ve created a VM that has that GPU attached to it.
That VM runs Ubuntu, including a GUI.
The UI of that VM is visible on a monitor attached to the server. In this state the “console” tab in the Incus UI is black. This seems like a security issue to me.

I always expect a monitor attached to the server to stay black as long as a VM is running that has control over the GPU.
(FYI: it’s a USB-C connected screen.)

I’m not too familiar with how the Framework Desktops work, but I can say that the video out often comes from the GPU itself (even if the connector is USB-C). That means passing a GPU through to a VM in its entirety gives that VM control over the monitor output. There isn’t really a way around this; it’s a hardware limitation.

This can happen with USB-C connected equipment as well. My old NVIDIA hybrid laptop could only drive any kind of external monitor, including USB-C monitors, through the NVIDIA GPU. It was hardwired.

Of course, it does not work that way on every device. Sometimes the video out is attached to the integrated graphics/motherboard. So the first thing to verify is how your hardware is wired.
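One way to check this from the host (a sketch; connector names and card numbers vary by machine) is to list the DRM connectors the kernel exposes under `/sys/class/drm` — the `cardN` prefix on each connector tells you which GPU that physical output is wired to:

```shell
#!/usr/bin/env bash
# List each DRM connector and its status. The "cardN" prefix shows which
# GPU the physical output belongs to. Names vary by machine.
shopt -s nullglob
for con in /sys/class/drm/card*-*; do
    printf '%s -> %s\n' "${con##*/}" "$(cat "$con/status")"
done
```

You can then match a `cardN` back to a PCI device with `ls -l /sys/class/drm/cardN/device` or `lspci` to see whether your USB-C output hangs off the GPU you intend to pass through.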

I agree with moonpiedumplings: If you’ve passed a GPU through to a VM, then the output of that VM on the GPU should be visible on a connected monitor. In fact, I’d be quite upset if it didn’t work that way. The VM now controls the GPU, not the host. The output on the GPU is up to the VM.

Now, I could see a complication with an internal GPU and a USB video-passthrough port, where the GPU is passed to the VM but the USB port is not and is still available to the host… but even that’s not very surprising.

I also suspect that vGPU sharing/splitting would behave differently. But that’s not what you’re doing, is it?

Yes, this is a bit of a weird setup.
I normally use a cloud VM (without a UI) with a shared GPU (for Ollama), and that works fine.
But because of a bug I could not run that config some time ago, so I ran an Ubuntu Noble VM (with a UI) instead.
It’s about sharing a GPU without sharing displays.
So if this config is needed in the future, I’ll either have to accept that it works this way, or try to disable the USB/HDMI/DP displays from inside the VM.
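If you do go the route of disabling the outputs from inside the VM, one sketch (connector names like `HDMI-A-1` and `DP-1` are examples; check what the guest actually sees under `/sys/class/drm` first) is to force connectors off on the guest’s kernel command line, or at runtime with `xrandr` under X11:

```shell
# Inside the guest: force specific connectors off at boot via the kernel
# command line, using the video= parameter (connector names are examples):
#   video=HDMI-A-1:d video=DP-1:d
# e.g. in /etc/default/grub:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet video=HDMI-A-1:d video=DP-1:d"
# then: sudo update-grub && reboot

# Or, at runtime under X11, turn an output off with xrandr:
xrandr --output HDMI-A-1 --off
```

Note this only blanks the output; the VM still owns the GPU, so a root user inside the guest could turn the display back on.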

If you use a container instead, it won’t hijack the display out. As another benefit, you can share a vm between LXC containers.
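For reference, giving a container access to the host GPU in Incus looks roughly like this (a sketch; the container and device names are made up, and the `gpu` device here is shared with the host rather than passed through exclusively):

```shell
# Launch a container and add the host GPU to it (names are examples).
incus launch images:ubuntu/24.04 ollama-ct
incus config device add ollama-ct mygpu gpu
# The same can be done for further containers; the host keeps the display.
```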

Another alternative is GPU virtualization. If you are using a datacenter-grade GPU, or hacking a consumer/gaming GPU to make it pretend to be a datacenter one (this doesn’t work on every GPU), you can virtualize slices of a GPU and share them between VMs. Then they won’t hijack the screen.
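In Incus, such a slice is attached as a mediated device; a sketch, assuming your GPU and driver actually expose mdev profiles (the VM name and profile name below are placeholders):

```shell
# Show host GPU resources, including any available mdev (vGPU) profiles:
incus info --resources
# Attach one vGPU slice to a VM (vm1 and the profile name are examples):
incus config device add vm1 vgpu gpu gputype=mdev mdev=nvidia-108
```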

What do you mean with ‘you can share a vm between lxc containers’?

I haven’t studied GPU virtualization yet (my Framework Desktop is running properly with a cloud VM).
I’ll take a look at it when I have more time.

Sorry, that should be: you can share a GPU between LXC containers. Typo.
