LXD 5.1 fails to start container when nvidia.runtime = "true"

It all started with a nice blog post by Simos, but it doesn’t fit my environment.

Months of Googling haven't helped, so my only two options are to ask here or to open an issue on GitHub. However, I'm not yet confident enough to file a GitHub issue…

I'll try to show as many details as possible. I hope this helps the developers reproduce and locate the problem, which would be really helpful to me. Thanks in advance!

1. Basic info:

System: OpenSUSE Leap 15.3

GPU: NVIDIA GeForce RTX 2070

GPU driver: Here. I installed the "x11-video-nvidiaG06" and "nvidia-glG06" packages.

NVIDIA Container Runtime: Here. I don't need Docker on this machine, so I merely ran "sudo zypper in nvidia-container-runtime", which pulled in four more packages: "libnvidia-container1", "libnvidia-container-tools", "nvidia-container-runtime" and "nvidia-container-toolkit".
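For reference, the install boils down to one command, and a quick sanity check afterwards could look like this (treat the --version flag as an assumption on my part; it should exist on recent libnvidia-container releases):

user@host:~> sudo zypper in nvidia-container-runtime
user@host:~> nvidia-container-cli --version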

LXD and Snap version info:
user@host:~> lxd version
5.1
user@host:~> snap version
snap 2.55.5-lp153.1.1
snapd 2.55.5-lp153.1.1
series 16
opensuse-leap 15.3
kernel 5.3.18-150300.59.68-default
user@host:~> snap list
Name        Version      Rev    Tracking       Publisher      Notes
core        16-2.54.4    12834  latest/stable  canonical✓     core
core18      20220428     2409   latest/stable  canonical✓     base
core20      20220329     1434   latest/stable  canonical✓     base
lxd         5.1-1f6f485  23037  latest/stable  canonical✓     -
persepolis  3.2.0        43     latest/stable  spacedriver88  -
snapd       2.55.3       15534  latest/stable  canonical✓     snapd
user@host:~>

2. Proof that everything is normal on the host system:

user@host:~> groups
users kvm libvirt lxd

user@host:~> nvidia-smi
Sun May 22 00:15:13 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.43.04    Driver Version: 515.43.04    CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce …     Off | 00000000:07:00.0  On |                  N/A |
| N/A   61C    P8    10W /  N/A |    450MiB /  8192MiB |      6%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      2563      G   /usr/bin/X                        150MiB |
|    0   N/A  N/A      3007      G   /usr/bin/kwin_x11                  57MiB |
|    0   N/A  N/A      3011      G   /usr/bin/plasmashell               36MiB |
|    0   N/A  N/A      3209      G   …AAAAAAAAA= --shared-files          2MiB |
|    0   N/A  N/A      3968      G   /usr/local/bin/firefox            197MiB |
+-----------------------------------------------------------------------------+

user@host:~> nvidia-container-cli info
NVRM version: 515.43.04
CUDA version: 11.7

Device Index: 0
Device Minor: 0
Model: NVIDIA GeForce RTX 2070
Brand: GeForce
GPU UUID: GPU-#uuid-hidden#
Bus Location: 00000000:07:00.0
Architecture: 7.5
user@host:~>
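If it helps with debugging, the same tool can also list which device nodes and driver libraries it would map into a container. I'm only showing the command here since the output is long and machine-specific:

user@host:~> nvidia-container-cli list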

Please note that on OpenSUSE the default group of a normal user is "users" (gid 100), while the user's uid is 1000.
On the host system, I don't have to be in the "video" group to run "nvidia-smi" or similar commands, and no "sudo" is needed.
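In other words, the raw.idmap lines in the profile below simply map my host uid/gid onto the container's default "ubuntu" user; on the host this is what id reports:

user@host:~> id -u; id -g
1000
100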

3. Info about the malfunctioning LXD container:

Its profile (to keep it minimal and convincing, I left out the PulseAudio and eth0 parts; otherwise the error log complains about renaming the NIC, which is confusing):

user@host:~> lxc profile show test
config:
  environment.DISPLAY: :0
  nvidia.driver.capabilities: all
  nvidia.runtime: "true"
  raw.idmap: |
    uid 1000 1000
    gid 100 1000
  security.idmap.isolated: "true"
  security.nesting: "true"
description: ncr test
devices:
  nvidia_gpu:
    gid: "44"
    type: gpu
  root:
    path: /
    pool: default
    type: disk
  x11_video:
    bind: container
    connect: unix:/tmp/.X11-unix/X0
    listen: unix:/tmp/.X11-unix/X0
    mode: "0777"
    security.gid: "100"
    security.uid: "1000"
    type: proxy
  x11_xauth_root:
    path: /root/.Xauthority
    readonly: "true"
    source: /home/user/.Xauthority
    type: disk
  x11_xauth_ubuntu:
    path: /home/ubuntu/.Xauthority
    readonly: "true"
    source: /home/user/.Xauthority
    type: disk
name: test
used_by:
user@host:~>
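For anyone trying to reproduce just the GPU-relevant part of this profile, I believe the equivalent commands are roughly the following (untested in exactly this form):

user@host:~> lxc profile create test
user@host:~> lxc profile set test nvidia.runtime=true nvidia.driver.capabilities=all
user@host:~> lxc profile device add test nvidia_gpu gpu gid=44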

And here is what happens when I try to launch it:

user@host:~> lxc launch images:ubuntu/jammy test --profile=test
Creating test

The instance you are starting doesn’t have any network attached to it.
To create a new network, use: lxc network create
To attach a network to an instance, use: lxc network attach

Starting test
Error: Failed to run: /snap/lxd/current/bin/lxd forkstart test /var/snap/lxd/common/lxd/containers /var/snap/lxd/common/lxd/logs/test/lxc.conf:
Try lxc info --show-log local:test for more info

user@host:~> lxc info --show-log local:test
Name: test
Status: STOPPED
Type: container
Architecture: x86_64
Created: 2022/05/22 00:39 +08
Last Used: 2022/05/22 00:40 +08

Log:

lxc test 20220521164001.968 ERROR utils - utils.c:lxc_can_use_pidfd:1813 - Invalid argument - Kernel does not support waiting on processes through pidfds
lxc test 20220521164001.970 WARN conf - conf.c:lxc_map_ids:3592 - newuidmap binary is missing
lxc test 20220521164001.970 WARN conf - conf.c:lxc_map_ids:3598 - newgidmap binary is missing
lxc test 20220521164001.974 WARN conf - conf.c:lxc_map_ids:3592 - newuidmap binary is missing
lxc test 20220521164001.975 WARN conf - conf.c:lxc_map_ids:3598 - newgidmap binary is missing
lxc test 20220521164001.978 WARN cgfsng - cgroups/cgfsng.c:fchowmodat:1252 - No such file or directory - Failed to fchownat(40, memory.oom.group, 65536, 0, AT_EMPTY_PATH | AT_SYMLINK_NOFOLLOW )
lxc test 20220521164001.160 ERROR conf - conf.c:run_buffer:321 - Script exited with status 1
lxc test 20220521164001.160 ERROR conf - conf.c:lxc_setup:4400 - Failed to run mount hooks
lxc test 20220521164001.160 ERROR start - start.c:do_start:1275 - Failed to setup container "test"
lxc test 20220521164001.160 ERROR sync - sync.c:sync_wait:34 - An error occurred in another process (expected sequence number 4)
lxc test 20220521164001.161 ERROR lxccontainer - lxccontainer.c:wait_on_daemonized_start:877 - Received container state "ABORTING" instead of "RUNNING"
lxc test 20220521164001.161 ERROR start - start.c:__lxc_start:2074 - Failed to spawn container "test"
lxc test 20220521164001.161 WARN start - start.c:lxc_abort:1045 - No such process - Failed to send SIGKILL to 22797
lxc test 20220521164006.164 WARN conf - conf.c:lxc_map_ids:3592 - newuidmap binary is missing
lxc test 20220521164006.165 WARN conf - conf.c:lxc_map_ids:3598 - newgidmap binary is missing
lxc 20220521164006.184 ERROR af_unix - af_unix.c:lxc_abstract_unix_recv_fds_iov:218 - Connection reset by peer - Failed to receive response
lxc 20220521164006.184 ERROR commands - commands.c:lxc_cmd_rsp_recv_fds:127 - Failed to receive file descriptors for command "get_state"

user@host:~>
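Since the log only says that a mount hook failed, my guess is that the failing piece is the nvidia hook that LXD writes into the generated LXC config. Inspecting that file (the path is taken from the error message above) might narrow it down; I'm assuming the hook is greppable this way:

user@host:~> grep -i -E 'hook|nvidia' /var/snap/lxd/common/lxd/logs/test/lxc.conf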

If I didn't do anything wrong, this container should launch normally and have a functional GPU hooked up inside it.

If I delete the nvidia.runtime = "true" line from the profile, the container launches.
After that, if I run "lxc config set test nvidia.runtime true", no error message is shown and I can confirm that "/dev/dri/card0" and "/dev/dri/renderD128" are present inside the container.

But that setting doesn't take effect until "lxc restart test", and the restart always fails with the same error log as above.
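Turning on LXD's debug logging before the restart might surface the actual error instead of just "Failed to run mount hooks"; as far as I know the snap supports something like this (commands may differ on other setups):

user@host:~> sudo snap set lxd daemon.debug=true
user@host:~> sudo systemctl reload snap.lxd.daemon
user@host:~> lxc monitor --type=logging --pretty &
user@host:~> lxc restart test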

4. Proof that the problem lies in nvidia.runtime and nothing else:

Actually, if I use the old method shown below, I can get GPU-accelerated applications running inside the container without any problem.

But it's really tiring to keep the GPU driver version inside the container the same as the host's, because it usually takes weeks for Ubuntu (the container OS, using the Launchpad driver repo) to update its GPU driver, while OpenSUSE (the host OS, using NVIDIA's official driver repo) usually gets the latest update within days of NVIDIA's official release…

The working profile:

I have to manually install the same GPU driver version as the host inside the container. And because it's inside a container, NVIDIA's official "xx.run" driver installer doesn't work, so I have to install the driver via the package manager (apt here).
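Concretely, for the 515 branch shown above, only the userspace pieces are needed inside the container, since the host already loads the kernel module. The exact package names depend on the driver branch, but it's roughly:

ubuntu@container:~$ sudo apt install --no-install-recommends nvidia-utils-515 libnvidia-gl-515 libnvidia-compute-515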

By the way, if xauth fails, remember to change the hostname of your container to match your host's.

I may call them "ubuntu@container" and "user@host" here for clarity, but their hostnames are actually the same.
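If the container image is systemd-based (like the Ubuntu images), something along these lines should do it; note that $(hostname) is expanded by the host shell, so the container ends up with the host's name:

user@host:~> lxc exec gpuinstance -- hostnamectl set-hostname "$(hostname)"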

user@host:~> lxc profile show gpuinstance
config:
  environment.DISPLAY: :0
  environment.PULSE_SERVER: unix:/home/ubuntu/.pulse-native
  nvidia.driver.capabilities: all
  raw.idmap: |
    uid 1000 1000
    gid 100 1000
  security.idmap.isolated: "true"
  security.nesting: "true"
description: Ubuntu 22.04 LTS
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  nvidia_gpu:
    gid: "44"
    type: gpu
  pulse_audio:
    bind: container
    connect: unix:/run/user/1000/pulse/native
    gid: "1000"
    listen: unix:/home/ubuntu/.pulse-native
    mode: "0777"
    security.gid: "100"
    security.uid: "1000"
    type: proxy
    uid: "1000"
  root:
    path: /
    pool: default
    type: disk
  x11_video:
    bind: container
    connect: unix:/tmp/.X11-unix/X0
    listen: unix:/tmp/.X11-unix/X0
    mode: "0777"
    security.gid: "100"
    security.uid: "1000"
    type: proxy
  x11_xauth_root:
    path: /root/.Xauthority
    readonly: "true"
    source: /home/user/.Xauthority
    type: disk
  x11_xauth_ubuntu:
    path: /home/ubuntu/.Xauthority
    readonly: "true"
    source: /home/user/.Xauthority
    type: disk
name: gameboy
used_by:
- /1.0/instances/gpuinstance
user@host:~>
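For completeness, this is a quick way to check that the GPU is actually usable inside the container once the driver versions match (just example checks, not part of the profile):

user@host:~> lxc exec gpuinstance -- nvidia-smi
user@host:~> lxc exec gpuinstance -- ls -l /dev/dri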

5. The end:
Everything I think is related to this problem is posted above.
Feel free to ask for more info; I'm glad to share and hope it helps.
English isn't my native language, so sorry for any typos I didn't notice…

Just wondering if you ever figured this out? I have the same issue with LXD 5.9.

Thanks!

Unfortunately, no. I still have to install the same driver version inside the container.

If the host and container used the same distribution, this would be easier. But different OSs release drivers at different speeds and versions, so it's very hard to keep them matched.

Currently I lock the driver package versions and have stopped chasing the latest driver.
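In case anyone wants to do the same, the locking itself is simple; on the OpenSUSE host and inside the Ubuntu container it's roughly this (adjust the package names to your driver branch):

user@host:~> sudo zypper addlock x11-video-nvidiaG06 nvidia-glG06
ubuntu@container:~$ sudo apt-mark hold nvidia-utils-515 libnvidia-gl-515 libnvidia-compute-515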