Enabling GPU passthrough post-launch?

I have a host (Ubuntu 24.04 LTS) with a pair of NVIDIA GPUs. I installed the driver and the NVIDIA Container Toolkit.

I can create new containers (also Ubuntu 24.04 LTS) using -c nvidia.runtime=true, add the GPU device to the container (incus config device add …), and then run nvidia-smi inside the container, install CUDA 12.2, and do all the GPU computation I need in Python.

I also have a live container that I created earlier without the nvidia.runtime=true option. I tried to edit the config and add the device, but I can’t access the GPU inside that container. Is there a way to fix this, or do I have to create a new container and restore my backup? Hoping I don’t have to start from scratch…

You need to stop the container, set the config option, and then start it back up.
nvidia.runtime is an option that cannot be set on a running instance.
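
For example, assuming your existing container is called mycontainer (use your container’s actual name), something along these lines should do it:

$ incus stop mycontainer
$ incus config set mycontainer nvidia.runtime=true
$ incus start mycontainer

With the gpu device you already added, nvidia-smi should then work inside the container.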


So simple… it worked! Thanks!

Can you kindly show the complete commands you used to create a GPU instance?

Hi!

When you create an Incus container that has direct access to the GPU, you need to:

  1. add the NVIDIA container runtime to the Incus container
  2. add the GPU device to the Incus container.

For the first step, we create a container named mygpucontainer and set the flag nvidia.runtime to true. This instructs Incus to add the NVIDIA container runtime to this container. By doing so, our new container has access to the NVIDIA libraries.

$ incus launch images:ubuntu/24.04 mygpucontainer -c nvidia.runtime=true
Launching mygpucontainer
$ 

Is just the runtime enough? No, it isn’t.

$ incus shell mygpucontainer
root@mygpucontainer:~# nvidia-smi 
No devices were found
root@mygpucontainer:~# logout
$

The second step is to give the Incus container access to the GPU. If there is only one GPU, it’s easy. If you have more, you need to specify exactly which GPU should be made available in the container (see the example after the output below). We use incus config device to add to the container mygpucontainer a new device called myfirstgpu, which is of type gpu.

$ incus config device add mygpucontainer myfirstgpu gpu
Device myfirstgpu added to mygpucontainer
$ incus shell mygpucontainer
root@mygpucontainer:~# nvidia-smi 
Thu Jul 18 13:38:16 2024       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.90.07              Driver Version: 550.90.07      CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
...
$ 
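
If your host has more than one GPU, you can pass options to the gpu device to pick a specific card, for example by PCI address (pci=) or by card index (id=). The values below are just placeholders; look up the real ones on your host with nvidia-smi or lspci. Something like:

$ incus config device add mygpucontainer myfirstgpu gpu pci=0000:01:00.0

or

$ incus config device add mygpucontainer myfirstgpu gpu id=0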

Many thanks, that works perfectly.