I have a host (Ubuntu 24.04 LTS) with a pair of NVIDIA GPUs. I installed the driver and the NVIDIA Container Toolkit.
I can create new containers (also Ubuntu 24.04 LTS) using -c nvidia.runtime=true, then add the GPU device to the container (incus config device add …). After that I can run nvidia-smi inside the container, install CUDA 12.2, and do all the computation I need in Python on the GPU.
I also have a live container that I created earlier without the nvidia.runtime=true option. I tried to edit the config and add the device, but I can't access the GPU inside that container. Is there a way to fix this, or do I have to create a new container and restore my backup? Hoping I don't have to start from scratch…
When you create an Incus container that has direct access to the GPU, you need to
1. add the NVIDIA container runtime to the Incus container, and
2. add the GPU device to the Incus container.
For the first step, we create a container named mygpucontainer and set the nvidia.runtime flag to true. This instructs Incus to add the NVIDIA container runtime to this container. By doing so, our new container has access to the NVIDIA libraries.
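A creation command along these lines should work (assuming the Ubuntu 24.04 image from the default images: remote; adjust the image name to your setup):

$ incus launch images:ubuntu/24.04 mygpucontainer -c nvidia.runtime=true

At this point the container has the NVIDIA userspace libraries but no GPU device yet, which is why nvidia-smi does not find any devices: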
$ incus shell mygpucontainer
root@mygpucontainer:~# nvidia-smi
No devices were found
root@mygpucontainer:~# logout
$
The second step is to give the Incus container access to the GPU. If there is only one GPU, it's easy. If you have more, you need to specify exactly which GPU should be made available in the container. We use incus config device add to add to the container mygpucontainer a new device, called myfirstgpu, of type gpu.
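On a single-GPU host, adding the whole GPU could look like this (using the container and device names from above):

$ incus config device add mygpucontainer myfirstgpu gpu

With several GPUs, a selector such as pci= can pin the device to one card; the PCI address below is only a placeholder, use the Bus-Id that nvidia-smi reports on the host:

$ incus config device add mygpucontainer myfirstgpu gpu pci=0000:01:00.0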