The hook change got me past the error. Can that be made the default?
The following failed, though:
$ incus launch images:ubuntu/24.04 c1
Launching c1
$ incus config device add c1 gpu gpu id=0
Device gpu added to c1
$ incus config set c1 nvidia.driver.capabilities=all nvidia.runtime="true"
$ incus exec c1 -- nvidia-smi
Error: Command not found
I installed nvidia-utils-550 inside the container, and then nvidia-smi started to work.
My plan is to run Docker inside the Incus container. For Docker to pick up the GPU, I had to install nvidia-container-toolkit following this.
In addition, the following things were also required:
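For anyone hitting the same "Command not found", the userspace utilities can be installed from the host with something like the following. The 550 version is just what matched my host's kernel driver; pick the package matching yours, or nvidia-smi will complain about a driver/library version mismatch:

```shell
# Install the NVIDIA userspace tools inside the container.
# The package version must match the host's kernel driver version.
incus exec c1 -- apt-get update
incus exec c1 -- apt-get install -y nvidia-utils-550

# nvidia-smi should now list the passed-through GPU.
incus exec c1 -- nvidia-smi
```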
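Roughly, the toolkit install inside the container followed NVIDIA's standard apt instructions; a sketch of those steps (run inside the container, assuming an Ubuntu/Debian image):

```shell
# Add NVIDIA's package repository key and source list
# (standard steps from NVIDIA's Container Toolkit install guide).
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -sL https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

# Install the toolkit and register the nvidia runtime with Docker.
apt-get update
apt-get install -y nvidia-container-toolkit
nvidia-ctk runtime configure --runtime=docker
systemctl restart docker
```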
- fix-gpu-passthrough.service
# cat /etc/systemd/system/fix-gpu-passthrough.service
[Unit]
Description=Creates Symlink required for LXC/Nvidia to Docker passthrough
Before=docker.service
[Service]
User=root
Group=root
ExecStart=/bin/bash -c 'mkdir -p /proc/driver/nvidia/gpus && ln -s /dev/nvidia0 /proc/driver/nvidia/gpus/0000:02:00.0'
Type=oneshot
[Install]
WantedBy=multi-user.target
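After dropping the unit file in place, it needs to be enabled so the symlink is created before Docker starts on every boot; something like:

```shell
# Reload systemd so it picks up the new unit, then enable and start it.
systemctl daemon-reload
systemctl enable --now fix-gpu-passthrough.service

# Confirm it ran successfully (oneshot units show "exited" when done).
systemctl status fix-gpu-passthrough.service
```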
- Fix /etc/nvidia-container-runtime/config.toml
# cat /etc/nvidia-container-runtime/config.toml
disable-require = false
[nvidia-container-cli]
environment = []
ldconfig = "@/sbin/ldconfig.real"
load-kmods = true
no-cgroups = true
[nvidia-container-runtime]
log-level = "info"
mode = "auto"
runtimes = ["docker-runc", "runc"]
[nvidia-container-runtime.modes]
[nvidia-container-runtime.modes.csv]
mount-spec-path = "/etc/nvidia-container-runtime/host-files-for-container.d"
With these changes, I am able to use the GPU in a Docker container inside an Incus container on a NixOS host.
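For reference, a quick end-to-end check that Docker inside the container can see the GPU is something like:

```shell
# Run nvidia-smi in a throwaway Docker container via the nvidia runtime;
# it should print the same GPU table as nvidia-smi on the host.
docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
```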