Creating an LXD VM fails on ZFS storage

Hi,
When I try to create a VM with the following command, it fails to start with this error message:

# lxc launch ubuntu:focal molecule-virtualbox --vm
Creating molecule-virtualbox
Starting molecule-virtualbox
Error: Failed to run: forklimits limit=memlock:unlimited:unlimited fd=3 -- /snap/lxd/22162/bin/qemu-system-x86_64 -S -name molecule-virtualbox -uuid 19a0e265-0cea-4537-b591-18a55aa12dd0 -daemonize -cpu host -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=deny,resourcecontrol=deny -readconfig /var/snap/lxd/common/lxd/logs/molecule-virtualbox/qemu.conf -spice unix=on,disable-ticketing=on,addr=/var/snap/lxd/common/lxd/logs/molecule-virtualbox/qemu.spice -pidfile /var/snap/lxd/common/lxd/logs/molecule-virtualbox/qemu.pid -D /var/snap/lxd/common/lxd/logs/molecule-virtualbox/qemu.log -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas lxd: char device redirected to /dev/pts/0 (label console)
: Process exited with non-zero value 1
Try `lxc info --show-log local:molecule-virtualbox` for more info

# lxc info --show-log local:molecule-virtualbox
Name: molecule-virtualbox
Status: STOPPED
Type: virtual-machine
Architecture: x86_64
Location: lxd-gitlab-runners
Created: 2022/01/06 01:20 CET

Log:

qemu-system-x86_64:/var/snap/lxd/common/lxd/logs/molecule-virtualbox/qemu.conf:307: Could not open '/proc/self/fd/3': filesystem does not support O_DIRECT

# ll /proc/self/fd/3
ls: cannot access '/proc/self/fd/3': No such file or directory

# ll /proc/self/fd/
total 0
dr-x------ 2 root root  0 Jan  6 08:59 ./
dr-xr-xr-x 9 root root  0 Jan  6 08:59 ../
lrwx------ 1 root root 64 Jan  6 08:59 0 -> /dev/pts/0
lrwx------ 1 root root 64 Jan  6 08:59 1 -> /dev/pts/0
lrwx------ 1 root root 64 Jan  6 08:59 2 -> /dev/pts/0
lr-x------ 1 root root 64 Jan  6 08:59 3 -> /proc/451868/fd/

# ll /proc/451868/fd/ /proc/451868/
ls: cannot access '/proc/451868/fd/': No such file or directory
ls: cannot access '/proc/451868/': No such file or directory

# df -hT /proc /var/snap/lxd/common/lxd/disks/zfs_lxd_local.img 
Filesystem                        Type  Size  Used Avail Use% Mounted on
proc                              proc     0     0     0    - /proc
rpool/ROOT/ubuntu_i76z7l/var/snap zfs   1.6T  4.4G  1.6T   1% /var/snap

# mount | grep -E '/proc|/var/snap'
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=29,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=43053)
rpool/ROOT/ubuntu_i76z7l/var/snap on /var/snap type zfs (rw,relatime,xattr,posixacl)
tmpfs on /var/snap/lxd/common/ns type tmpfs (rw,relatime,size=1024k,mode=700)
nsfs on /var/snap/lxd/common/ns/shmounts type nsfs (rw)
nsfs on /var/snap/lxd/common/ns/mntns type nsfs (rw)
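To double-check whether the backing filesystem accepts O_DIRECT at all, a small probe can help: `dd` with `oflag=direct` fails (typically with `EINVAL`) on filesystems that do not support Direct I/O. This is a hypothetical helper, not something LXD provides; `probe_odirect` is my own name for it.

```shell
# Probe whether the filesystem under a directory supports O_DIRECT.
# The directory must exist and be writable.
probe_odirect() {
    dir="${1:-.}"
    f="$dir/.odirect_probe.$$"
    # dd with oflag=direct opens the file O_DIRECT; failure usually
    # means the filesystem rejects the flag.
    if dd if=/dev/zero of="$f" bs=512 count=1 oflag=direct 2>/dev/null; then
        echo "O_DIRECT supported on $dir"
    else
        echo "O_DIRECT NOT supported on $dir"
    fi
    rm -f "$f"
}

probe_odirect .   # on the affected host, e.g. probe_odirect /var/snap
```

On ZFS 0.8 the probe would be expected to fail, since that release does not implement Direct I/O for regular files.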

# cat /etc/fstab 
/dev/disk/by-uuid/B8A6-D21D /boot/efi vfat defaults 0 0
/boot/efi/grub /boot/grub none defaults,bind 0 0
/dev/disk/by-uuid/9f860987-11bc-463a-aa52-052218b85ce2 none swap discard 0 0
tmpfs /dev/shm tmpfs defaults,nodev,nosuid,noexec 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime,hidepid=0 0 0

My physical server runs Ubuntu 20.04 with root on ZFS. Here are my pools:

# zpool list
NAME            SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
bpool          1.88G   333M  1.55G        -         -     0%    17%  1.00x    ONLINE  -
rpool          1.61T  13.5G  1.60T        -         -     0%     0%  1.00x    ONLINE  -
zfs_lxd_local   464G  2.98G   461G        -         -     0%     0%  2.19x    ONLINE  -

# zpool status zfs_lxd_local
  pool: zfs_lxd_local
 state: ONLINE
  scan: none requested
config:

	NAME                                                STATE     READ WRITE CKSUM
	zfs_lxd_local                                       ONLINE       0     0     0
	  /var/snap/lxd/common/lxd/disks/zfs_lxd_local.img  ONLINE       0     0     0

errors: No known data errors

# qemu-img info /var/snap/lxd/common/lxd/disks/zfs_lxd_local.img
image: /var/snap/lxd/common/lxd/disks/zfs_lxd_local.img
file format: raw
virtual size: 466 GiB (500000000000 bytes)
disk size: 3.48 GiB

# lxc storage list
+---------------+--------+-------------+---------+---------+
|     NAME      | DRIVER | DESCRIPTION | USED BY |  STATE  |
+---------------+--------+-------------+---------+---------+
| zfs_lxd_local | zfs    |             | 5       | CREATED |
+---------------+--------+-------------+---------+---------+

# lxc profile show default
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    network: lxdfan0
    type: nic
  root:
    path: /
    pool: zfs_lxd_local
    type: disk
name: default

Can you help me solve my problem?

Version info:

# lsb_release -ds
Ubuntu 20.04.3 LTS

# snap list
Name    Version   Rev    Tracking       Publisher   Notes
core20  20211129  1270   latest/stable  canonical✓  base
lxd     4.21      22147  latest/stable  canonical✓  -
snapd   2.53.4    14295  latest/stable  canonical✓  snapd

# uname -a
Linux lxd-gitlab-runners 5.4.0-92-generic #103-Ubuntu SMP Fri Nov 26 16:13:00 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

Best regards

Can you show the output of `zfs version`?

ZFS supports Direct I/O on 0.8 or higher, and LXD apparently detected that your ZFS version should support it.
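The version check LXD performs can be sketched roughly like this; `zfs_supports_directio` is a hypothetical helper, and the "0.8 or later" threshold is taken from this thread, not from LXD's source.

```shell
# Compare a ZFS version string (e.g. from `zfs version`, "zfs-0.8.3-...")
# against the 0.8 Direct I/O threshold mentioned above.
zfs_supports_directio() {
    ver="$1"            # e.g. "0.8.3"
    major="${ver%%.*}"  # text before the first dot
    rest="${ver#*.}"
    minor="${rest%%.*}" # text between the first and second dots
    [ "$major" -gt 0 ] || [ "$minor" -ge 8 ]
}

if zfs_supports_directio "0.8.3"; then
    echo "ZFS 0.8.3: Direct I/O path expected"
fi
```

As this thread shows, though, passing the version check is not enough: the kernel/module combination also has to actually honour O_DIRECT.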

Here is the version:

# zfs version
zfs-0.8.3-1ubuntu12.13
zfs-kmod-0.8.3-1ubuntu12.13

# dpkg -l | grep '^ii.*zfs'
ii  libzfs2linux                          0.8.3-1ubuntu12.13                amd64        OpenZFS filesystem library for Linux
ii  zfs-initramfs                         0.8.3-1ubuntu12.13                amd64        OpenZFS root filesystem capabilities for Linux - initramfs
ii  zfs-zed                               0.8.3-1ubuntu12.13                amd64        OpenZFS Event Daemon
ii  zfsutils-linux                        0.8.3-1ubuntu12.13                amd64        command-line tools to manage OpenZFS filesystems

OK, those versions do support Direct I/O, so I'm pretty confused as to why this isn't working here.

Hi,
I installed a new node (the 3rd) from scratch following this ZoL guide, then installed LXD and joined it to the cluster.

Same version, same installation, same problem:


root@lxd-gitlab-runners-03:~# lxc launch ubuntu:focal molecule-virtualbox-03 --vm --target lxd-gitlab-runners-03
Creating molecule-virtualbox-03
Starting molecule-virtualbox-03           
Error: Failed to run: forklimits limit=memlock:unlimited:unlimited fd=3 -- /snap/lxd/22162/bin/qemu-system-x86_64 -S -name molecule-virtualbox-03 -uuid 090bf502-75a4-4990-9e6e-6736c5d54940 -daemonize -cpu host -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=deny,resourcecontrol=deny -readconfig /var/snap/lxd/common/lxd/logs/molecule-virtualbox-03/qemu.conf -spice unix=on,disable-ticketing=on,addr=/var/snap/lxd/common/lxd/logs/molecule-virtualbox-03/qemu.spice -pidfile /var/snap/lxd/common/lxd/logs/molecule-virtualbox-03/qemu.pid -D /var/snap/lxd/common/lxd/logs/molecule-virtualbox-03/qemu.log -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas lxd: char device redirected to /dev/pts/0 (label console)
: Process exited with non-zero value 1
Try `lxc info --show-log local:molecule-virtualbox-03` for more info

root@lxd-gitlab-runners-03:~# lxc info --show-log local:molecule-virtualbox-03
Name: molecule-virtualbox-03
Status: STOPPED
Type: virtual-machine
Architecture: x86_64
Location: lxd-gitlab-runners-03
Created: 2022/01/06 18:06 CET

Log:

qemu-system-x86_64:/var/snap/lxd/common/lxd/logs/molecule-virtualbox-03/qemu.conf:307: Could not open '/proc/self/fd/3': filesystem does not support O_DIRECT

root@lxd-gitlab-runners-03:~# lxc cluster list
+-----------------------+---------------------------+-----------------+--------------+----------------+-------------+--------+-------------------+
|         NAME          |            URL            |      ROLES      | ARCHITECTURE | FAILURE DOMAIN | DESCRIPTION | STATE  |      MESSAGE      |
+-----------------------+---------------------------+-----------------+--------------+----------------+-------------+--------+-------------------+
| lxd-gitlab-runners-01 | https://192.168.1.40:8443 | database-leader | x86_64       | default        |             | ONLINE | Fully operational |
|                       |                           | database        |              |                |             |        |                   |
+-----------------------+---------------------------+-----------------+--------------+----------------+-------------+--------+-------------------+
| lxd-gitlab-runners-02 | https://192.168.1.41:8443 | database        | x86_64       | default        |             | ONLINE | Fully operational |
+-----------------------+---------------------------+-----------------+--------------+----------------+-------------+--------+-------------------+
| lxd-gitlab-runners-03 | https://192.168.1.42:8443 | database        | x86_64       | default        |             | ONLINE | Fully operational |
+-----------------------+---------------------------+-----------------+--------------+----------------+-------------+--------+-------------------+

Can you easily try linux-generic-hwe-20.04 on one of those systems?

It works (almost)!


root@lxd-gitlab-runners-03:~# apt install linux-image-generic-hwe-20.04  &&  reboot

root@lxd-gitlab-runners-03:~# dpkg -l | grep ^ii.*linux-image
ii  linux-image-5.11.0-44-generic         5.11.0-44.48~20.04.2              amd64        Signed kernel image generic
ii  linux-image-5.4.0-92-generic          5.4.0-92.103                      amd64        Signed kernel image generic
ii  linux-image-generic                   5.4.0.92.96                       amd64        Generic Linux kernel image
ii  linux-image-generic-hwe-20.04         5.11.0.44.48~20.04.22             amd64        Generic Linux kernel image

root@lxd-gitlab-runners-03:~# uname -a
Linux lxd-gitlab-runners-03 5.11.0-44-generic #48~20.04.2-Ubuntu SMP Tue Dec 14 15:36:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

root@lxd-gitlab-runners-03:~# lxc list
+-----------------------------------------+---------+------------------------------+------+-----------------+-----------+------------------------------------------+
|                  NAME                   |  STATE  |             IPV4             | IPV6 |      TYPE       | SNAPSHOTS |                 LOCATION                 |
+-----------------------------------------+---------+------------------------------+------+-----------------+-----------+------------------------------------------+
| molecule-virtualbox-03                  | RUNNING |                              |      | VIRTUAL-MACHINE | 0         | lxd-gitlab-runners-03                    |
+-----------------------------------------+---------+------------------------------+------+-----------------+-----------+------------------------------------------+
| molecule-virtualbox-04                  | RUNNING |                              |      | VIRTUAL-MACHINE | 0         | lxd-gitlab-runners-03                    |
+-----------------------------------------+---------+------------------------------+------+-----------------+-----------+------------------------------------------+
| molecule-virtualbox-05                  | RUNNING | 240.42.0.143 (eth0)          |      | CONTAINER       | 0         | lxd-gitlab-runners-03                    |
+-----------------------------------------+---------+------------------------------+------+-----------------+-----------+------------------------------------------+

The VM starts fine now, but it does not automatically get an IP address the way a container does. Any idea?

Changing the kernel fixed my problem. What does the HWE kernel bring? The physical server is a Dell PowerEdge.

The VM does get its IP address automatically after all, just much more slowly than a container does.

Thank you very much for all the help, @stgraber!