From within a container, does running ubuntu-drivers autoinstall usually take this long? It has taken nearly 30 minutes to update a Bionic Beaver container.
Also, is it possible to run nvidia-smi from within the container?
Hi,
You marked this as LXC but I have done this with LXD instead.
From within LXC/LXD containers, you do not and cannot load Linux kernel modules.
The package ubuntu-drivers-common is likely looking for kernel modules but does not find any and gets stuck.
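One way to see why, assuming a container named c1 (the name is illustrative): the container shares the host's kernel, so kernel modules are managed on the host side, not inside the container.

  lxc exec c1 -- uname -r   # reports the host's kernel version
  lxc exec c1 -- lsmod      # lists the host's loaded modules (read-only view)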
Having said that, if you want to run nvidia-smi in the container, you can do so (even CUDA).
Here are instructions from @stgraber: https://stgraber.org/2017/03/21/cuda-in-lxd/
For the case of LXD, where you want to run games, CUDA, etc. in the container with an NVIDIA GPU and the closed-source drivers:
That's essentially it. By installing the NVIDIA driver in the container, you get both the kernel driver and the user-space libraries that match it. The kernel driver/kernel module in the container is not used at all; what matters is that the user-space libraries match the kernel driver loaded on the host.
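A minimal sketch of that matching step (the driver version 390 below is illustrative; install whichever version the host actually runs):

  # on the host: check the driver version the kernel module reports
  cat /proc/driver/nvidia/version
  # inside the container: install the matching user-space driver package
  sudo apt install nvidia-driver-390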
Then you can run nvidia-smi, CUDA, and games from within the container.
The end goal is to run TensorFlow from a container image. I've used nvidia-docker instead, and at first blush, nvidia-smi seems to work.
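For reference, the usual nvidia-docker smoke test looks something like this (the CUDA image tag is illustrative):

  docker run --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi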
LXD actually supports libnvidia-container, which is the same library used by nvidia-docker. To get the equivalent, you need to have:
- the NVIDIA driver installed on the host
- the nvidia-container-cli tool (from libnvidia-container) available on the host
- the nvidia.runtime config option set to true on the container
That last one is what causes libnvidia-container to be called, which makes nvidia-smi and the related libraries available to the container without needing anything installed directly inside it.
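A minimal sketch of enabling it, assuming a container named c1 (the name is illustrative):

  lxc config set c1 nvidia.runtime true
  lxc restart c1
  lxc exec c1 -- nvidia-smi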