Moving a block device storage pool to a new server

I have a desktop with two disks. I’ve recently reinstalled Pop!_OS 21.04 on nvme0n1. In my previous
installation, the unmounted disk you see here - nvme1n1 - was dedicated entirely to my LXD server
as a btrfs storage pool. The data should still be there.

nvme1n1       259:0    0 232.9G  0 disk  
nvme0n1       259:1    0 465.8G  0 disk  
├─nvme0n1p1   259:2    0   498M  0 part  /boot/efi
├─nvme0n1p2   259:3    0     4G  0 part  /recovery
├─nvme0n1p3   259:4    0 457.3G  0 part  /
└─nvme0n1p4   259:5    0     4G  0 part  
  └─cryptswap 253:0    0     4G  0 crypt [SWAP]
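
A quick way to sanity-check that the old filesystem is still intact (assuming, as in my case, the pool was btrfs directly on the whole device) is something like:

sudo blkid /dev/nvme1n1    # should report TYPE="btrfs" if the old pool survived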

I’ve installed LXD from the “latest” snap track and run lxd init, configuring everything except the storage pool.

0 % lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: no    <---NO!
Would you like to connect to a MAAS server? (yes/no) [default=no]: no
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=lxdbr0]: 
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
Would you like the LXD server to be available over the network? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes] 
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: 

I’d like to give the whole block device to LXD again, but not wipe the drive.

How do I do it?

You should be able to use the new lxd recover command, specifying the partition as the source of the pool you want to recover.

It worked! Thank you! There are a couple of paper cuts in the UX.

  • unclear what to type in response to “Source of the storage pool”
  • I had no storage pools configured on purpose, so I needed to create the directory that LXD wanted to mount my drive on

But it worked.

As an aside, this seems like a pretty normal use case: taking a block device and moving it to a new server. The help text for lxd recover makes it sound scarier than it is. :slight_smile:


My attempts, for your reference.

This LXD server currently has the following storage pools:
Would you like to recover another storage pool? (yes/no) [default=no]: yes
Name of the storage pool: default
Name of the storage backend (lvm, zfs, ceph, btrfs, cephfs, dir): btrfs
Source of the storage pool (block device, volume group, dataset, path, ... as applicable): block device
Additional storage pool configuration property (KEY=VALUE, empty when done): 
Would you like to recover another storage pool? (yes/no) [default=no]: 
The recovery process will be scanning the following storage pools:
 - NEW: "default" (backend="btrfs", source="block device")
Would you like to continue with scanning for lost volumes? (yes/no) [default=yes]: 
Scanning for unknown volumes...
Error: Failed validation request: Failed mounting pool "default": Failed to mount "/dev/disk/by-uuid/block device" on "/var/snap/lxd/common/lxd/storage-pools/default" using "btrfs": no such file or directory

Second try, with the actual device path as the source:

1 % lxd recover
This LXD server currently has the following storage pools:
Would you like to recover another storage pool? (yes/no) [default=no]: yes
Name of the storage pool: default
Name of the storage backend (lvm, zfs, ceph, btrfs, cephfs, dir): btrfs
Source of the storage pool (block device, volume group, dataset, path, ... as applicable): /dev/nvme1n1
Additional storage pool configuration property (KEY=VALUE, empty when done): 
Would you like to recover another storage pool? (yes/no) [default=no]: 
The recovery process will be scanning the following storage pools:
 - NEW: "default" (backend="btrfs", source="/dev/nvme1n1")
Would you like to continue with scanning for lost volumes? (yes/no) [default=yes]: 
Scanning for unknown volumes...
Error: Failed validation request: Failed mounting pool "default": Failed to mount "/dev/nvme1n1" on "/var/snap/lxd/common/lxd/storage-pools/default" using "btrfs": no such file or directory

Third try, after running sudo mkdir /var/snap/lxd/common/lxd/storage-pools/default:

0 % lxd recover
This LXD server currently has the following storage pools:
Would you like to recover another storage pool? (yes/no) [default=no]: yes
Name of the storage pool: default
Name of the storage backend (lvm, zfs, ceph, btrfs, cephfs, dir): btrfs
Source of the storage pool (block device, volume group, dataset, path, ... as applicable): /dev/nvme1n1
Additional storage pool configuration property (KEY=VALUE, empty when done): 
Would you like to recover another storage pool? (yes/no) [default=no]: 
The recovery process will be scanning the following storage pools:
 - NEW: "default" (backend="btrfs", source="/dev/nvme1n1")
Would you like to continue with scanning for lost volumes? (yes/no) [default=yes]: 
Scanning for unknown volumes...
The following unknown volumes have been found:
 - Container "crons" on pool "default" in project "default" (includes 0 snapshots)
 - Container "ubuntu-2004-ci" on pool "default" in project "default" (includes 0 snapshots)
Would you like those to be recovered? (yes/no) [default=no]: yes
Starting recovery...
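
Once the recovery finished, confirming the pool and containers are back is roughly (illustrative commands, output omitted):

lxc storage show default
lxc list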

Thanks for the feedback, I’m glad it worked.

To address your points:

  • unclear what to type in response to “Source of the storage pool” - yes, I can understand that; it differs for each pool type. I have mentioned to @ru-fu that we should probably break our Storage Pool docs out into tables for each pool driver type, so that we can expand each driver’s section to explain what the source property represents. In many cases it can be one of several things depending on the pool driver type. For BTRFS, for example, it can be an existing directory or a block device/partition containing the BTRFS filesystem (see the example after this list).
  • I had no storage pools configured on purpose, so I needed to create the directory that LXD wanted to mount my drive on - hmm, that’s interesting; you certainly shouldn’t have had to create the /var/snap/lxd/common/lxd/storage-pools/default directory. I’ll look into that, as it may well be an issue in the btrfs storage pool driver not creating the mount point when we try to mount the pool.
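
For example (the pool name is made up, and the device path is just the one from this thread), creating a new BTRFS pool can take either form of source:

lxc storage create mypool btrfs source=/dev/nvme1n1              # block device/partition - note this formats the device
lxc storage create mypool btrfs source=/path/on/existing/btrfs   # existing directory on a BTRFS filesystem

That’s for creating a pool, which initialises the target; lxd recover instead reuses what is already on it.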

Out of interest, which parts sound scary? The aim of this tool was disaster recovery for cases where the LXD database has been accidentally removed. If you’ve reinstalled your system, or moved your LXD pool device to a new system, without taking a backup of the LXD database and restoring it, then that is effectively the same situation (no database), albeit not an accidental one.
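
For completeness, “taking a backup of the LXD database” on a snap install can be roughly as simple as the sketch below (paths assume the snap layout; a full server backup would normally grab all of /var/snap/lxd/common/lxd while LXD is stopped):

sudo snap stop lxd
sudo tar -cpzf lxd-db-backup.tar.gz /var/snap/lxd/common/lxd/database
sudo snap start lxd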

I’ve fixed the issue with LXD not creating the storage pool mount path.


I guess it’s the term “disaster recovery” that’s scary. Moving a disk is not necessarily a disaster!

:smiley:

The reason we consider it disaster recovery is that the database hasn’t been moved with it (presumed accidentally deleted). Not all info is stored in the backup.yaml, so if it’s not an unexpected transfer, it is certainly preferable to move the database as well.

If you’re just moving a single instance between machines, then lxc export followed by lxc import, or lxc copy using a remote pointing at the new system, is certainly the preferred route.
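
Something along these lines, for example (the remote name is made up, and the new server has to be exposed on the network and trusted before lxc remote add will work):

lxc export crons crons-backup.tar.gz     # on the old machine
lxc import crons-backup.tar.gz           # on the new machine

lxc remote add newserver 192.0.2.10      # or, with both servers online
lxc copy crons newserver:crons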
