Curiosity using lxd init with existing ZFS pool

I just installed LXD 2.18 on my Ubuntu 17.10 and noticed the following:

$ zfs create ssdpool/lxd

ssdpool         54.3M  57.6G   192K  /ssdpool
ssdpool/ccache  39.6M   985M  39.6M  /ssdpool/ccache
ssdpool/lxd      192K  57.6G   192K  /ssdpool/lxd

$ lxd init

Create a new ZFS pool (yes/no) [default=yes]? no
Name of the existing ZFS pool or dataset: ssdpool/lxd

$ zfs list

ssdpool         54.1M  57.6G   192K  /ssdpool
ssdpool/ccache  39.6M   985M  39.6M  /ssdpool/ccache

If lxd init runs into any kind of error (network related in my case), it will silently destroy ssdpool/lxd, even though it wasn’t tasked with creating it. I don’t know if this has been fixed in more recent versions, but I would consider that an oversight.

That should be filed as a bug report.

It would help if you can replicate either for 2.0.11 (supported version on Ubuntu 16.04) or 3.0.0 (supported version on Ubuntu 18.04).

Ah, yeah, that makes sense.

I’m not sure that it’s something we can fix particularly easily though. lxd init doesn’t do everything in one shot, instead it does the same as running all the lxc storage, lxc network, lxc profile and lxc config commands one after the other.
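Roughly speaking, an interactive lxd init with an existing dataset boils down to a sequence of individual commands like the following (a sketch only — the pool, network, and device names here are illustrative, and the exact set of commands depends on the answers given to lxd init):

```shell
# Create a storage pool backed by the pre-existing ZFS dataset
lxc storage create default zfs source=ssdpool/lxd

# Create the managed bridge and attach it to the default profile
lxc network create lxdbr0
lxc network attach-profile lxdbr0 default eth0

# Add a root disk device on the new pool to the default profile
lxc profile device add default root disk path=/ pool=default
```

If any step in such a sequence fails, lxd init reverts by deleting the objects it already created, which is where the surprising dataset destruction comes from.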

That means that the way it reverts things on failure is by issuing the appropriate delete or remove command for each object it created and that no longer needs to be kept around. As deleting a ZFS storage pool also destroys the underlying dataset, that explains the behavior you’re seeing.
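The same effect can be reproduced outside of lxd init; this hedged sketch (pool name "default" is illustrative, and it assumes a ZFS pool named ssdpool) shows that deleting the LXD storage pool takes the backing dataset with it:

```shell
# Dataset exists beforehand
zfs create ssdpool/lxd

# Hand it to LXD as an existing dataset
lxc storage create default zfs source=ssdpool/lxd

# Deleting the LXD pool also destroys ssdpool/lxd
lxc storage delete default
zfs list ssdpool/lxd    # expected to fail: dataset no longer exists
```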

Note that when specifying an existing dataset, all that actually needs to exist is the pool it sits on; if the dataset itself doesn’t exist, LXD will create it for you.
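In other words, you can point LXD at a dataset name that doesn’t exist yet, as long as the pool does (again a sketch with illustrative names):

```shell
# Only the pool exists; ssdpool/lxd has not been created
zfs list ssdpool

# LXD creates the missing dataset itself
lxc storage create default zfs source=ssdpool/lxd
zfs list ssdpool/lxd    # now present, created by LXD
```

Datasets that LXD created itself are fair game for its failure cleanup, which is consistent with the revert behavior described above.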