So I finally realized that I needed to remove all custom mountpoints on my ZFS filesystems and set them to “legacy” to get the lxd recover command to work. Until then, I kept getting umount errors from lxd recover.
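For anyone hitting the same thing, the change amounts to something like this (my pool is pogo1; setting the property on the pool root lets the child datasets inherit it, unless a child has its own explicit mountpoint):

```shell
# Switch the whole pool to legacy mounting; children inherit the property.
zfs set mountpoint=legacy pogo1

# Datasets with an explicit (non-inherited) mountpoint must be set individually, e.g.:
# zfs set mountpoint=legacy pogo1/containers/erp1

# Verify: every dataset should now report "legacy".
zfs get -r -o name,value mountpoint pogo1
```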
However, I’m still getting errors and lxd recover is not finding my containers.
root@pogo:~# lxd recover
This LXD server currently has the following storage pools:
Would you like to recover another storage pool? (yes/no) [default=no]: y
Name of the storage pool: default
Name of the storage backend (btrfs, ceph, cephfs, cephobject, dir, lvm, zfs): zfs
Source of the storage pool (block device, volume group, dataset, path, ... as applicable): pogo1
Additional storage pool configuration property (KEY=VALUE, empty when done): zfs.pool_name=pogo1
Additional storage pool configuration property (KEY=VALUE, empty when done):
Would you like to recover another storage pool? (yes/no) [default=no]: n
The recovery process will be scanning the following storage pools:
- NEW: "default" (backend="zfs", source="pogo1")
Would you like to continue with scanning for lost volumes? (yes/no) [default=yes]: y
Scanning for unknown volumes...
Error: Failed validation request: Failed checking volumes on pool "default": Failed parsing backup file "/var/snap/lxd/common/lxd/storage-pools/default/containers/erp1/backup.yaml": open /var/snap/lxd/common/lxd/storage-pools/default/containers/erp1/backup.yaml: no such file or directory
root@pogo:~#
So what are:
- Failed validation request? I presume this is internal, because nothing was presented to the console.
- Failed checking volumes on pool “default”? Is this the lxd pool or the zfs pool?
- Failed parsing backup file “/var/snap/lxd/common/lxd/storage-pools/default/containers/erp1/backup.yaml”? Of course this would fail as it does not exist.
Am I not entering the correct information in the lxd recover session?
All of the filesystems on pogo1 have their mountpoint set to legacy and are unmounted. I’m assuming that lxd recover handles mounting and unmounting the zfs filesystems to find the info that it’s looking for. However, all the filesystems remain unmounted after the command executes, so either the command unmounts the filesystems or they were never successfully mounted.
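(For reference, the mount state can be checked with the commands below; nothing here is specific to my pool.)

```shell
# List all ZFS filesystems currently mounted (empty output = nothing mounted).
zfs mount

# Cross-check against the kernel's view of zfs-type mounts.
mount -t zfs
```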
To test for the existence of the backup.yaml file in a container, I tried to mount it using the mount command, but it failed with “canonicalization error 2”. After trying to research this and finding no working examples, I gave up, set a ZFS mountpoint, and mounted the filesystem with zfs mount. I can confirm that the backup.yaml file exists in the container filesystem, so lxd recover is just not finding it.
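The workaround I used looked roughly like this (pogo1/containers/erp1 is a guess at the dataset name based on the error path above; substitute your own):

```shell
# Direct legacy mount attempt (this is what failed for me with
# "canonicalization error 2"):
mkdir -p /mnt/erp1
mount -t zfs pogo1/containers/erp1 /mnt/erp1

# Workaround: give the dataset a temporary mountpoint and use zfs mount.
zfs set mountpoint=/mnt/erp1 pogo1/containers/erp1
zfs mount pogo1/containers/erp1
ls -l /mnt/erp1/backup.yaml

# Restore legacy mounting before retrying lxd recover.
zfs umount pogo1/containers/erp1
zfs set mountpoint=legacy pogo1/containers/erp1
```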
What am I doing wrong here?
In case it helps, here’s my system info:
OS = Ubuntu 18.04.4 LTS
LXD version = 5.6
ZFS version info below:
root@pogo:~# dmesg | grep ZFS
[ 246.835181] ZFS: Loaded module v0.7.5-1ubuntu16.12, ZFS pool version 5000, ZFS filesystem version 5