Testing out LXD 3.19 candidate

LXD 3.19 (candidate) has been announced, and it is time for testing before it gets promoted to the stable snap channel on Wednesday, 22nd January 2020.

LXD 3.19 introduces two main features: routed networking and initial virtual machine (VM) support.
Here is the full list of issues in the 3.19 milestone: https://github.com/lxc/lxd/milestone/86?closed=1

If you run snap info lxd now, you get the output below. 3.19 is in the candidate channel, and there is also a set of 3.19/* channels. On Wednesday, 22nd January, LXD 3.19 will be promoted from candidate to stable. In the meantime, it is possible to test 3.19 from candidate now (with some risk), so that if you find any big issues, you can report them before the general rollout.

  stable:         3.18        2019-12-02 (12631) 57MB -
  candidate:      3.19        2020-01-15 (12928) 67MB -
  beta:           ↑                                   
  edge:           git-14c0e2c 2020-01-16 (12957) 55MB -
  3.19/stable:    –                                   
  3.19/candidate: 3.19        2020-01-15 (12928) 67MB -
  3.19/beta:      ↑                                   
  3.19/edge:      ↑                                   
  3.18/stable:    3.18        2019-12-02 (12631) 57MB -
  3.18/candidate: 3.18        2019-12-02 (12631) 57MB -
  3.18/beta:      ↑                                   
  3.18/edge:      ↑                                   

Note that if you upgrade a LXD installation from 3.18 to 3.19, you might not be able to go back if there is a database schema change. Treat the upgrade as forward-only, even though snaps otherwise support switching back and forth between channels.
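One way to hedge against a failed upgrade is to snapshot the snap's data first with snapd's built-in snapshot commands (a sketch; a restored snapshot preserves the 3.18 state, but will not help once a newer daemon has migrated the database schema):

```shell
# Save a snapshot of LXD's snap data before switching channels.
sudo snap save lxd

# List saved snapshots and note the ID for a later restore.
sudo snap saved

# If the upgrade goes wrong, restore the snapshot by its ID
# (replace 1 with the actual ID shown by `snap saved`).
sudo snap restore 1
```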

We are going to switch from stable to candidate, and make a mental note to switch back to stable on Wednesday, as soon as 3.19 is released there.

$ snap switch lxd --channel candidate
"lxd" switched to the "candidate" channel

$ snap refresh
lxd (candidate) 3.19 from Canonical✓ refreshed

Looks good. Let’s verify!

$ lxc --version
3.19
$ lxd --version
3.19

We are on 3.19 now!


Is it possible not to go to 3.19 at all?

I pushed my test server to 3.19 to see if it resolved an issue with containers getting an IPv4 address over a Linux bridge interface (host is Tumbleweed, 5.4.10-1-default, systemd 244). Obviously not an issue with the release itself; I am just documenting that the new LXD version did not make a difference.


Yeah, you can do snap refresh lxd --channel=3.18, which will keep you on 3.18 until you manually refresh to a newer release, or to stable to go back to the latest stable.
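Concretely, the channel-pinning commands described above look like this (a sketch of both directions):

```shell
# Pin LXD to the 3.18 track; it stays on 3.18 until you refresh away.
sudo snap refresh lxd --channel=3.18

# Later, return to following the latest stable release.
sudo snap refresh lxd --channel=stable
```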


We’ve just published a new build to the candidate channel, including all the bugfixes since we first released 3.19.

This primarily covers VM bugs and limitations. As of right now, we have not yet received any bug reports of storage-related upgrade issues when going from 3.18 to 3.19.

This frankly seems a bit surprising given the switch to the new storage layer, so we’re hoping some more people can do some 3.19 tests ahead of the general rollout.

I could not create a LXD VM on Ubuntu 18.04 with a ZFS storage pool. The error was about the lack of O_DIRECT support.
It looks like O_DIRECT support was added to ZFS in https://github.com/zfsonlinux/zfs/commit/a584ef26053065f486d46a7335bea222cb03eeea. Does anyone know from which version of Ubuntu onwards O_DIRECT is supported, so that we can create VMs on ZFS?
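One quick way to check whether a given filesystem accepts O_DIRECT writes is to attempt a small direct-I/O write with dd (a sketch; /tank is a hypothetical ZFS dataset mountpoint, substitute your own pool path):

```shell
# Attempt a direct-I/O write on the filesystem under test.
# On ZFS without O_DIRECT support this fails with "Invalid argument".
if dd if=/dev/zero of=/tank/odirect-test bs=4k count=1 oflag=direct 2>/dev/null; then
    echo "O_DIRECT supported"
else
    echo "O_DIRECT not supported"
fi
rm -f /tank/odirect-test
```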

Please can you provide more info about the error you are hitting?

It’s this one,

qemu-system-x86_64:/var/snap/lxd/common/lxd/logs/vm3/qemu.conf:150: file system may not support O_DIRECT
qemu-system-x86_64:/var/snap/lxd/common/lxd/logs/vm3/qemu.conf:150: Could not open '/var/snap/lxd/common/lxd/virtual-machines/vm3/config.iso': Invalid argument

On the same system it works as soon as I specify a dir storage pool instead. That is, the second error is gone.

I have tried with Ubuntu 19.10 and it works on a ZFS storage pool.

Oh, that’s odd, I’ve not been getting that one here, though maybe zfs 0.8 fixes it somehow.
As it’s just for the config drive, we should be able to just set a different option for it.


@stgraber will take a look

Does this happen on LVM btw?

Also please could you remove the config drive temporarily and start the VM, then on the host run mount and paste the output, I’d like to see if the ZFS volume containing the iso is mounted.

Also if you see it mounted, please can you look in the directory and check the config.iso file exists.
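The checks described above can be run on the host like this (a sketch; vm3 is the instance name taken from the error messages, and the path is the standard snap location):

```shell
# Check whether the ZFS volume backing the VM is mounted on the host.
mount | grep virtual-machines/vm3

# If it is mounted, confirm the config drive image actually exists.
ls -l /var/snap/lxd/common/lxd/virtual-machines/vm3/config.iso
```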


On 18.04.3, there is ZFS 0.7.5:

$ modinfo zfs
filename:       /lib/modules/4.15.0-74-generic/kernel/zfs/zfs.ko
version:        0.7.5-1ubuntu16.6
license:        CDDL
author:         OpenZFS on Linux
description:    ZFS
srcversion:     0F20836398248A1E604306C
depends:        spl,znvpair,zcommon,zunicode,zavl,icp
retpoline:      Y
name:           zfs
vermagic:       4.15.0-74-generic SMP mod_unload 

I’ve tested this on 18.04, albeit not in the snap, and it works OK.

But your 18.04 box isn’t running the stock 4.15 kernel, is it? At least I thought your laptop was running a 5.0 or 5.3 kernel.

True, it’s on 5.3.0-26-generic (HWE).

We don’t differentiate between drive types now, so they are all using the aio = "native" option (which is what I think is causing it to use O_DIRECT).

Yeah, I think so and this may be a problem for non-block devices.
So we could add logic such that if the source is a block device, we run in aio=native but if it’s a file, we use something slower but more compatible.

I’ve managed to boot off an ISO file today (Alpine) using the existing setup, so I don’t think it is always an issue.

I remember reading something about zfs 0.8 and O_DIRECT, since you’re running our 5.3 kernel, you have ZFS 0.8 rather than the ZFS 0.7 that @simos is running.

It’s not going to hit all filesystems, but I’m not sure that we can easily test for it.
We could just detect that it’s a file on zfs and play it safe in that case.
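Detecting that case could be as simple as checking the filesystem type at the file's location (a sketch using stat; on ZFS the reported type string is "zfs"):

```shell
# Report the filesystem type holding the VM's config drive.
fstype=$(stat -f -c %T /var/snap/lxd/common/lxd/virtual-machines/vm3)
if [ "$fstype" = "zfs" ]; then
    echo "file on ZFS: use the compatible aio mode"
else
    echo "filesystem type: $fstype"
fi
```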

Clearly btrfs/ext4 are fine.

I’ve got a 4.15 VM; let me try it on there to re-create the issue.