Recover after failed /snap/bin/lxd.migrate with zfs


#1

I have an Ubuntu 18.04 host with 18.04 containers. They’ve all been running for a while and were upgraded from 16.04. I had an old LXD 2.x from a PPA.

Tried to do a long-overdue migration to the snap:

root@nas:~# /snap/bin/lxd.migrate
=> Connecting to source server
=> Connecting to destination server
=> Running sanity checks

=== Source server
LXD version: 3.0.3
LXD PID: 14775
Resources:
Containers: 12
Images: 1
Networks: 0
Storage pools: 1

=== Destination server
LXD version: 3.9
LXD PID: 4912
Resources:
Containers: 0
Images: 0
Networks: 0
Storage pools: 0

The migration process will shut down all your containers then move your data to the destination LXD.
Once the data is moved, the destination LXD will start and apply any needed updates.
And finally your containers will be brought back to their previous state, completing the migration.

Are you ready to proceed (yes/no) [default=no]? yes
=> Shutting down the source LXD
=> Stopping the source LXD units
=> Stopping the destination LXD unit
=> Unmounting source LXD paths
=> Unmounting destination LXD paths
=> Wiping destination LXD clean
=> Backing up the database
=> Moving the data
=> Updating the storage backends
error: Failed to update the storage pools: Failed to run: zfs set mountpoint=18.04 nas/vms/containers/dns@snapshot-pre: cannot open 'nas/vms/containers/dns@snapshot-pre': dataset does not exist

It seems to me it tried to set a mountpoint on a snapshot (with a space in the name) rather than on a filesystem.
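One hypothesis consistent with that reading: the "mountpoint=18.04" fragment looks like the tail of a path containing a space that got split into separate arguments. A hypothetical illustration of the shell word-splitting involved (the path below is invented purely for the demo; the real mountpoint is unknown):

```shell
# Invented path for illustration only.
mp="/some/path with 18.04"

# Unquoted expansion splits on the space, so the command receives
# "18.04" as a separate word, much like the error above:
printf '[%s]\n' mountpoint=$mp     # three lines: [mountpoint=/some/path] [with] [18.04]

# Quoting keeps it a single argument:
printf '[%s]\n' "mountpoint=$mp"   # one line: [mountpoint=/some/path with 18.04]
```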


#2

Inspired by “LXD ZFS use existing zfs volumes”, I’ve attempted to import a container:

root@nas:~# zfs mount nas/vms/containers/dns
root@nas:~# zfs list|grep dns
nas/vms/containers/dns 2.33G 1.76T 1.39G /var/snap/lxd/common/lxd/storage-pools/default/containers/dns

root@nas:~# lxd import dns
Error: The container "dns" does not seem to exist on any storage pool
root@nas:~# lxc list
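For what it's worth, `lxd import` reads the container's backup.yaml from under the pool's mount path, so a quick sanity check is whether that file is visible where the snap expects it. A minimal sketch of that check, simulated with a temporary directory (the path layout is assumed from the snap's defaults shown elsewhere in this thread):

```shell
# Simulate the expected snap layout in a scratch directory.
root=$(mktemp -d)
dir="$root/storage-pools/default/containers/dns"
mkdir -p "$dir"
touch "$dir/backup.yaml"   # stands in for the real backup.yaml

# The check itself: import has nothing to read without backup.yaml.
if [ -f "$dir/backup.yaml" ]; then
  echo "layout looks importable"
else
  echo "backup.yaml missing: import will fail"
fi
```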


#3

Part of the problem may be that I don’t have a new storage pool set up?

root@nas:~# lxc storage list
+------+-------------+--------+--------+---------+
| NAME | DESCRIPTION | DRIVER | SOURCE | USED BY |
+------+-------------+--------+--------+---------+

How do I set this up without destroying my data?

nas/vms/containers/dns on /var/snap/lxd/common/lxd/storage-pools/default/containers/dns type zfs (rw,noatime,xattr,noacl)
root@nas:~# ls -lh /var/snap/lxd/common/lxd/storage-pools/default/containers/dns/
total 34K
-r-------- 1 root root 6.9K Jan 29 16:24 backup.yaml
-rw-r--r-- 1 root root 1.6K Nov 30 2016 metadata.yaml
drwxr-xr-x 22 100000 100000 22 Dec 1 2016 rootfs
drwxr-xr-x 2 root root 8 Nov 30 2016 templates


#4

I think the problem was that I was still running the old lxd/lxc binaries. I may have solved it. Will report when I’m all done.


(Kees Bos) #5

When the storage is available but not shown by 'lxc storage list',
I’ve done something like this for the zfs backend:

zfs rename rpool/lxd-default rpool/lxd-default-old
lxc storage create default zfs source=rpool/lxd-default
zfs destroy -r rpool/lxd-default
zfs rename rpool/lxd-default-old rpool/lxd-default
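One caution worth adding: the 'zfs destroy -r' step is destructive, and the only thing protecting the real data is the earlier rename. A defensive sketch of a guard before destroying (the guard logic is my assumption, not part of the original recipe):

```shell
# Refuse to destroy anything that looks like the renamed copy holding the
# real data; only the freshly created, empty dataset should be removed.
target="rpool/lxd-default"
case "$target" in
  *-old) echo "refusing to destroy $target: looks like the backup copy" ;;
  *)     echo "would run: zfs destroy -r $target" ;;
esac
```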

Oh, and then you might want to import the containers:

  # Preview the container names (4th path component of the dataset name):
  zfs list|grep rpool/lxd-default/containers/|cut -d/ -f4|awk '{print $1}'
  # Mount each container dataset:
  for i in $(zfs list|grep rpool/lxd-default/containers/|cut -d/ -f1-4|awk '{print $1}') ; do zfs mount $i ; done
  # Import each container into the LXD database:
  for i in $(zfs list|grep rpool/lxd-default/containers/|cut -d/ -f4|awk '{print $1}') ; do lxd import $i --force ; done
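The field extraction in those pipelines can be sanity-checked against a captured listing before touching the pool. A sketch using sample 'zfs list' output (the sizes and mountpoints are invented; only the first column matters):

```shell
# Two fake rows in `zfs list` format: dataset, used, avail, refer, mountpoint.
sample='rpool/lxd-default/containers/dns  2.33G  1.76T  1.39G  /var/snap/lxd/common/lxd/storage-pools/default/containers/dns
rpool/lxd-default/containers/web  1.10G  1.76T  0.80G  /var/snap/lxd/common/lxd/storage-pools/default/containers/web'

# Full dataset names (what the mount loop iterates over):
printf '%s\n' "$sample" | grep 'rpool/lxd-default/containers/' \
  | awk '{print $1}' | cut -d/ -f1-4

# Container names only (what gets passed to `lxd import`):
printf '%s\n' "$sample" | grep 'rpool/lxd-default/containers/' \
  | awk '{print $1}' | cut -d/ -f4
```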