Enabling GPU passthrough post-launch?

I have a host (Ubuntu 24.04 LTS) with a pair of NVIDIA GPUs. I installed the driver and the NVIDIA Container Toolkit.
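For reference, the host-side setup was roughly the following; the exact driver install route can vary, and this assumes NVIDIA's apt repository for the container toolkit is already configured:

    # install the NVIDIA driver (one common route on Ubuntu)
    sudo ubuntu-drivers autoinstall
    # install the container toolkit (assumes NVIDIA's apt repo is set up)
    sudo apt install -y nvidia-container-toolkit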

I can create new containers (also Ubuntu 24.04 LTS) using -c nvidia.runtime=true, then add the GPU device to the container (incus config device add …). Inside the container I can run nvidia-smi, install CUDA 12.2, and do all the GPU computation I need in Python.
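The working workflow looks roughly like this; gpu-box and gpu0 are just placeholder names for the instance and the device:

    # launch with the NVIDIA runtime enabled
    incus launch images:ubuntu/24.04 gpu-box -c nvidia.runtime=true
    # pass the GPUs through to the container
    incus config device add gpu-box gpu0 gpu
    # verify from inside the container
    incus exec gpu-box -- nvidia-smi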

I also have a live container that I created earlier, without the nvidia.runtime=true option. I tried to edit the config and add the device, but I can't access the GPU inside that container. Is there a way to fix this, or do I have to create a new container and restore my backup into it? Hoping I don't have to start from scratch…

You need to stop the container, set the config option, and then start it back up.
nvidia.runtime is an option that cannot be set on a running instance.
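Something along these lines, with mycontainer standing in for your instance name:

    incus stop mycontainer
    incus config set mycontainer nvidia.runtime=true
    # add the GPU device too, if it isn't already there
    incus config device add mycontainer gpu0 gpu
    incus start mycontainer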


So simple… it worked! Thanks!