Enabling GPU passthrough post launch?

Hi!

When you create an Incus container that has direct access to the GPU, you need to

  1. add the NVIDIA container runtime to the Incus container
  2. add the GPU device to the Incus container.

For the first step, we create a container named mygpucontainer and set the flag nvidia.runtime to true. This instructs Incus to add the NVIDIA container runtime to the container. By doing so, our new container has access to the NVIDIA libraries.

$ incus launch images:ubuntu/24.04 mygpucontainer -c nvidia.runtime=true
Launching mygpucontainer
$ 
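Since the thread asks about enabling this post launch: if the container already exists, the same key can be set afterwards with incus config set. A minimal sketch, assuming the container is named mygpucontainer; as far as I know nvidia.runtime is only picked up when the container starts, so restart it for the change to take effect:

$ incus config set mygpucontainer nvidia.runtime=true
$ incus restart mygpucontainer
$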

Is just the runtime enough? No, it isn’t.

$ incus shell mygpucontainer
root@mygpucontainer:~# nvidia-smi 
No devices were found
root@mygpucontainer:~# logout
$
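The fact that nvidia-smi is present but reports no devices confirms the runtime was injected; only the GPU device itself is still missing. You can also double-check the key with incus config show (exact output formatting may differ slightly):

$ incus config show mygpucontainer | grep nvidia
  nvidia.runtime: "true"
$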

The second step is to give the Incus container access to the GPU. If there is only one GPU, it's easy. If you have more, you need to specify exactly which GPU should be made available in the container (see the note after the transcript below). We use incus config device to add to the container mygpucontainer a new device called myfirstgpu, which is of type gpu.

$ incus config device add mygpucontainer myfirstgpu gpu
Device myfirstgpu added to mygpucontainer
$ incus shell mygpucontainer
root@mygpucontainer:~# nvidia-smi 
Thu Jul 18 13:38:16 2024       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.90.07              Driver Version: 550.90.07      CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
...
$ 
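If the host has several GPUs, the gpu device type takes extra properties to select a specific card; if I remember correctly, you can filter by pci, id, vendorid or productid. A hypothetical example with a made-up PCI address (find yours with lspci on the host):

$ incus config device add mygpucontainer myfirstgpu gpu pci=0000:01:00.0
$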