3.19 update - LXD daemon won't start up - DIR backend storage init problem

The LXD daemon is not starting up; dir backend storage initialization fails:
lxd.daemon[571101]: t=2020-01-23T19:25:21+0100 lvl=eror msg="Failed to start the daemon: Failed initializing storage pool \"tmp\": Failed to mount '/lxdtmp' on '/var/snap/lxd/common/lxd/storage-pools/tmp': no such file or directory"

But both paths do exist.
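For context, a quick way to double-check from the host that both the source directory and the mount target exist (using the paths from the log line above):

ls -ld /lxdtmp
ls -ld /var/snap/lxd/common/lxd/storage-pools/tmp

Both report existing directories here, so the error is not about the paths actually being missing.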

I can confirm similar behavior as of this morning.

It appears snap updated LXD to 3.19. After that, LXD was unable to start, and we have the following in our logs:

2020-01-23T18:34:57.652885+00:00 vmhost3 lxd.daemon[8680]: t=2020-01-23T18:34:57+0000 lvl=eror msg="Failed to start the daemon: Failed initializing storage pool \"default\": Failed to mount '/vmhost_zfs/lxc_zfs' on '/var/snap/lxd/common/lxd/storage-pools/default': no such file or directory"
2020-01-23T18:34:57.820748+00:00 vmhost3 lxd.daemon[8680]: Error: Failed initializing storage pool "default": Failed to mount '/vmhost_zfs/lxc_zfs' on '/var/snap/lxd/common/lxd/storage-pools/default': no such file or directory

Note: even though “zfs” is in the pathname, we are using the directory driver and not the ZFS driver. The directory is just coincidentally part of a ZFS pool.
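In case it helps to verify, the driver a pool uses can be checked with lxc storage show (assuming the pool name "default" from the log above):

lxc storage show default

The driver: line in the YAML output reads dir for us, with source pointing at the directory on the ZFS pool.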

Please let me know if I can provide any other information.

I believe we understand the problem and are preparing a fix. It will be pushed through as soon as tested.


This patch should fix it:


Awesome! Thank you all for the quick response to this.

I’m assuming this will make its way to the stable channel soon?

Yes, it should be done shortly. Thanks.


The fix has been pushed to candidate now; it will take about an hour for this to build.
Once that's done, automatic QA is run, and as soon as I see it has passed I'll push to stable.

So I'd expect this to be all done within the next 1.5 to 2 hours, certainly before I go to sleep.
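If anyone wants to test before the stable promotion, switching the snap to the candidate channel should pick up the fixed build (a sketch, assuming the snap-packaged LXD):

sudo snap refresh lxd --channel=candidate

and then, once the fix has been promoted:

sudo snap refresh lxd --channel=stable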


Thanks so much, it is really appreciated.

Can confirm that the candidate (13085) does fix the issue.

Now in stable.

Nice, thanks!
Things are back to normal again.

Can confirm this resolves our issue. Thank you!