lxd recover after 20.04 -> 22.04

I have just upgraded one machine from Focal to Jammy and installed some faster storage. The old ZFS storage is still installed, but I can’t get lxd recover to find anything on it. Can anyone advise on how it should work, as I’m just guessing my way through the questions. I would like to recover the container and move it to the new storage.

I upgraded by clean install rather than in place upgrade.

Please can you show the options you’re putting in so far?
What was the ZFS pool name?

This LXD server currently has the following storage pools:
 - black (backend="lvm", source="black")
Would you like to recover another storage pool? (yes/no) [default=no]: yes
Name of the storage pool: bigdisk
Name of the storage backend (ceph, btrfs, cephfs, dir, lvm, zfs): zfs
Source of the storage pool (block device, volume group, dataset, path, ... as applicable): bigdisk/lxd
Additional storage pool configuration property (KEY=VALUE, empty when done): zfs.pool_name=bigdisk
Additional storage pool configuration property (KEY=VALUE, empty when done): 
Would you like to recover another storage pool? (yes/no) [default=no]: 
The recovery process will be scanning the following storage pools:
 - EXISTING: "black" (backend="lvm", source="black")
 - NEW: "bigdisk" (backend="zfs", source="bigdisk/lxd")
Would you like to continue with scanning for lost volumes? (yes/no) [default=yes]: 
Scanning for unknown volumes...
No unknown volumes found. Nothing to do.

zfs list

NAME                                   USED  AVAIL     REFER  MOUNTPOINT
bigdisk                               1.08T  16.2T     1.04T  /bigdisk
bigdisk/lxd                           40.8G  16.2T      153K  none
bigdisk/lxd/containers                40.8G  16.2T      153K  none

From that zfs list you don’t have any containers there for LXD to find and recover.

I’d trimmed the list as I was just trying to show the path, the full version is:

NAME                                   USED  AVAIL     REFER  MOUNTPOINT
bigdisk                               1.08T  16.2T     1.04T  /bigdisk
bigdisk/lxd                           40.8G  16.2T      153K  none
bigdisk/lxd/containers                40.8G  16.2T      153K  none
bigdisk/lxd/containers/container      40.8G  16.2T     40.8G  none
bigdisk/lxd/custom                     153K  16.2T      153K  none
bigdisk/lxd/deleted                    767K  16.2T      153K  none
bigdisk/lxd/deleted/containers         153K  16.2T      153K  none
bigdisk/lxd/deleted/custom             153K  16.2T      153K  none
bigdisk/lxd/deleted/images             153K  16.2T      153K  none
bigdisk/lxd/deleted/virtual-machines   153K  16.2T      153K  none
bigdisk/lxd/images                     153K  16.2T      153K  none
bigdisk/lxd/virtual-machines           153K  16.2T      153K  none

Yep, there's nothing there to recover, I'm afraid.

You’d expect to see things like this:

sudo zfs list
NAME                                                                                        USED  AVAIL     REFER  MOUNTPOINT
zfs                                                                                        7.11G  37.9G       24K  legacy
zfs/containers                                                                              104K  37.9G       24K  legacy
zfs/containers/ctest                                                                       80.5K  37.9G      231M  legacy

Oh wait, is your container called “container”?

What does your lxc ls show currently?
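You could also check whether the container dataset actually contains LXD's recovery metadata, since lxd recover identifies instance volumes by the backup.yaml file stored inside each one. A rough way to look, assuming mount.zfs will accept the dataset directly despite its mountpoint being none (the /mnt/lxd-check path is just an example):

sudo mkdir -p /mnt/lxd-check
sudo mount -t zfs -o ro bigdisk/lxd/containers/container /mnt/lxd-check
ls /mnt/lxd-check
sudo umount /mnt/lxd-check

For an LXD container you'd expect to see a backup.yaml alongside the rootfs directory; if it's missing, recover may have nothing to identify the volume by.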

Yes, I changed the name of the container in the zfs list to just "container" for privacy.

lxc ls shows:

+---------------------+---------+------+------+-----------+-----------+
|        NAME         |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+---------------------+---------+------+------+-----------+-----------+
| test-jammy-on-plopp | STOPPED |      |      | CONTAINER | 0         |
+---------------------+---------+------+------+-----------+-----------+

That's a test container I've been playing with on the new storage (the LVM pool named black shown above).

And it's not the same name?

Only in the list I posted, which I edited by hand. I haven't changed the ZFS volume itself.

lxc is only aware of the new LVM storage (black); it doesn't seem to know anything about the ZFS storage, bigdisk. lxc storage list:

+-------+--------+--------+-------------+---------+---------+
| NAME  | DRIVER | SOURCE | DESCRIPTION | USED BY |  STATE  |
+-------+--------+--------+-------------+---------+---------+
| black | lvm    | black  |             | 2       | CREATED |
+-------+--------+--------+-------------+---------+---------+

Yes, that's right, because it hasn't recovered anything from the ZFS pool yet.

It's rather odd that we can see a container volume in zfs list on the host, and yet lxd recover isn't finding it, but also isn't failing with a mount error (which suggests it has successfully accessed the ZFS pool).

Please can you try rebooting the machine and re-running lxd recover, to rule out any issues with the snap mount namespace becoming separated from the host's mount table? Thanks
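If a reboot doesn't change anything, it might also be worth comparing what the snap's own mount namespace can see against the host. This is only a sketch: the /run/snapd/ns/lxd.mnt path is snapd's usual per-snap namespace location, and findmnt is assumed to be available inside that namespace:

sudo nsenter --mount=/run/snapd/ns/lxd.mnt -- findmnt -t zfs

If the bigdisk datasets show up on the host's findmnt but not inside the namespace, the two mount tables have diverged.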

Same result after a reboot. So I don’t know what’s going on. I’ll keep the storage for a while and come back to this in a few months when I can downgrade the machines back to 5.0 LTS. I wonder if it’ll find anything then.
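For reference, moving the snap back to the 5.0 LTS track when the time comes should just be a channel switch (assuming the standard snap channel naming):

sudo snap refresh lxd --channel=5.0/stable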