How do I recover a storage pool on another server?

I have the following problem:

  1. I had a working LXD server with a storage pool on ZFS (homez/lxd)
  2. The server died, but the ZFS array is OK.
  3. I reinstalled a slightly different distro (Mint → Debian) on another system disk (same hardware)
  4. I reinstalled LXD and tried initializing:
root@cinderella:/media/mcon# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (btrfs, dir, lvm, zfs) [default=zfs]: 
Create a new ZFS pool? (yes/no) [default=yes]: no
Name of the existing ZFS pool or dataset: homez/lxd
Would you like to connect to a MAAS server? (yes/no) [default=no]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=lxdbr0]: 
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 192.168.7.0/24                      
Invalid input: Not a usable IPv4 address "192.168.7.0/24"

What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: auto
Would you like the LXD server to be available over the network? (yes/no) [default=no]: yes
Address to bind LXD to (not including port) [default=all]: 
Port to bind LXD to [default=8443]: 
Trust password for new clients: 
Again: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: 
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: 
Error: Failed to create storage pool "default": Provided ZFS pool (or dataset) isn't empty
root@cinderella:/media/mcon# 

Is there any chance to recover my old containers?
If so, how?

Side question: why was CIDR 192.168.7.0/24 rejected?

Many thanks in advance.

Take a look at the ‘lxd recover’ command.

Because you need to specify an IP address for the LXD host to use on the bridge (in CIDR format) rather than just the subnet, i.e. change .0 to .1.
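
For example, answering with a host address on that subnet should be accepted:

What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 192.168.7.1/24

If the bridge has already been created, something like this should also work to change it afterwards:

lxc network set lxdbr0 ipv4.address 192.168.7.1/24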

Thanks tomp,
but I’m a bit dense today :frowning:

I tried the command, but I must be missing something:

mcon@cinderella:~$ sudo zfs list
NAME                                                                                        USED  AVAIL     REFER  MOUNTPOINT
homez                                                                                      2.11T  3.05T      398G  /homez
homez/etherpad                                                                              248M  3.05T      248M  /homez/etherpad
homez/lxd                                                                                  24.4G  3.05T      140K  none
homez/lxd/containers                                                                       20.5G  3.05T      140K  none
homez/lxd/containers/lxdMosaic                                                             1.36G  3.05T     2.42G  none
homez/lxd/containers/nrf-builder                                                           4.30G  3.05T     4.71G  none
homez/lxd/containers/nrf-devel                                                             8.86G  3.05T     9.41G  none
homez/lxd/containers/seeme-builder                                                         4.39G  3.05T     4.91G  none
homez/lxd/containers/yocto-builder                                                         1.57G  3.05T     2.11G  none
homez/lxd/custom                                                                            140K  3.05T      140K  none
homez/lxd/deleted                                                                          3.30G  3.05T      140K  none
homez/lxd/deleted/containers                                                                140K  3.05T      140K  none
homez/lxd/deleted/custom                                                                    140K  3.05T      140K  none
homez/lxd/deleted/images                                                                   3.30G  3.05T      140K  none
homez/lxd/deleted/images/0db04e4645814398aca82cf3d99098dc3d317c60c9d4258c9808bd91fd67454a   570M  3.05T      570M  none
homez/lxd/deleted/images/44650bc10c092105a5695a240307f518b9f5a6f3a6c8094f340d663331191e48  1.10G  3.05T     1.10G  none
homez/lxd/deleted/images/4de6a2df17edbd2cd8a9b1dc619b4a5ac71f97755a747ca401b1c19c1870b04e   556M  3.05T      556M  none
homez/lxd/deleted/images/59b17fef4047dad12df4a71ed00e348816eec8260701fd78b90b57bede348267   562M  3.05T      562M  none
homez/lxd/deleted/images/f71fea6a1e44034abae9c63a0e625de321e144c54bf478d8aefa3035465205e5   566M  3.05T      566M  none
homez/lxd/deleted/virtual-machines                                                          140K  3.05T      140K  none
homez/lxd/images                                                                            574M  3.05T      140K  none
homez/lxd/images/d75e77c8452487d390c8ee86629f147fee110d83fb79b053bc9ad107fce7aa1c           574M  3.05T      574M  none
homez/lxd/virtual-machines                                                                  140K  3.05T      140K  none
homez/mauro                                                                                 140K  3.05T      140K  /home/mauro
homez/mcon                                                                                 1.69T  3.05T     1.69T  /home/mcon
mcon@cinderella:~$ sudo lxd recover
This LXD server currently has the following storage pools:
Would you like to recover another storage pool? (yes/no) [default=no]: yes
Name of the storage pool: default
Name of the storage backend (lvm, zfs, btrfs, dir): zfs
Source of the storage pool (block device, volume group, dataset, path, ... as applicable): homez/lxd
Additional storage pool configuration property (KEY=VALUE, empty when done): 
Would you like to recover another storage pool? (yes/no) [default=no]: 
The recovery process will be scanning the following storage pools:
 - NEW: "default" (backend="zfs", source="homez/lxd")
Would you like to continue with scanning for lost volumes? (yes/no) [default=yes]: 
Scanning for unknown volumes...
Error: Failed validation request: Failed mounting pool "default": Cannot mount pool as "zfs.pool_name" is not specified

What am I missing? (I didn’t find documentation for the command beyond the very terse lxd recover --help output).

OK, found it (a night's sleep helps).

I successfully recovered the storage, apparently:

mcon@cinderella:~$ sudo lxd recover
This LXD server currently has the following storage pools:
Would you like to recover another storage pool? (yes/no) [default=no]: yes
Name of the storage pool: default
Name of the storage backend (lvm, zfs, btrfs, dir): zfs
Source of the storage pool (block device, volume group, dataset, path, ... as applicable): homez/lxd
Additional storage pool configuration property (KEY=VALUE, empty when done): zfs.pool_name=homez/lxd
Additional storage pool configuration property (KEY=VALUE, empty when done): 
Would you like to recover another storage pool? (yes/no) [default=no]: 
The recovery process will be scanning the following storage pools:
 - NEW: "default" (backend="zfs", source="homez/lxd")
Would you like to continue with scanning for lost volumes? (yes/no) [default=yes]: 
Scanning for unknown volumes...
The following unknown volumes have been found:
 - Container "nrf-builder" on pool "default" in project "default" (includes 0 snapshots)
 - Container "nrf-devel" on pool "default" in project "default" (includes 0 snapshots)
 - Container "seeme-builder" on pool "default" in project "default" (includes 0 snapshots)
 - Container "yocto-builder" on pool "default" in project "default" (includes 0 snapshots)
 - Container "lxdMosaic" on pool "default" in project "default" (includes 0 snapshots)
You are currently missing the following:
 - Network "lxdbr0" in project "default"
Please create those missing entries and then hit ENTER: 
The following unknown volumes have been found:
 - Container "lxdMosaic" on pool "default" in project "default" (includes 0 snapshots)
 - Container "nrf-builder" on pool "default" in project "default" (includes 0 snapshots)
 - Container "nrf-devel" on pool "default" in project "default" (includes 0 snapshots)
 - Container "seeme-builder" on pool "default" in project "default" (includes 0 snapshots)
 - Container "yocto-builder" on pool "default" in project "default" (includes 0 snapshots)
Would you like those to be recovered? (yes/no) [default=no]: yes
Starting recovery...
mcon@cinderella:~$ lxc list
+---------------+---------+------+------+-----------+-----------+
|     NAME      |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+---------------+---------+------+------+-----------+-----------+
| lxdMosaic     | STOPPED |      |      | CONTAINER | 0         |
+---------------+---------+------+------+-----------+-----------+
| nrf-builder   | STOPPED |      |      | CONTAINER | 0         |
+---------------+---------+------+------+-----------+-----------+
| nrf-devel     | STOPPED |      |      | CONTAINER | 0         |
+---------------+---------+------+------+-----------+-----------+
| seeme-builder | STOPPED |      |      | CONTAINER | 0         |
+---------------+---------+------+------+-----------+-----------+
| yocto-builder | STOPPED |      |      | CONTAINER | 0         |
+---------------+---------+------+------+-----------+-----------+

… but something is still amiss:

mcon@cinderella:~$ lxc delete lxdMosaic
mcon@cinderella:~$ lxc launch ubuntu: lxdMosaic
Creating lxdMosaic
Error: Failed instance creation: Failed creating instance record: Failed initialising instance: Invalid devices: Failed detecting root disk device: No root device could be found

I seem to have found the root cause:

mcon@cinderella:~$ lxd init --dump
config: {}
networks:
- config:
    ipv4.address: 10.194.203.1/24
    ipv4.nat: "true"
    ipv6.address: none
  description: ""
  name: lxdbr0
  type: bridge
  project: default
storage_pools:
- config:
    source: homez/lxd
    volatile.initial_source: homez/lxd
    zfs.pool_name: homez/lxd
  description: ""
  name: default
  driver: zfs
profiles:
- config: {}
  description: Default LXD profile
  devices: {}
  name: default
projects:
- config:
    features.images: "true"
    features.networks: "true"
    features.profiles: "true"
    features.storage.volumes: "true"
  description: Default LXD project
  name: default

Apparently I am missing the devices: entries in my default profile; I expected something like:

...
profiles:
- config: {}
  description: Default LXD profile
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
...

Should I retry initialization?
I wouldn’t like to mess further with a half-crippled system without proper knowledge :frowning:
Probably editing the default profile should be enough, but I'm unsure :fearful:.

SUCCESS!!
lxc profile edit default did the trick.
Many thanks to @tomp: I will mark his answer as the solution (and leave all this for future reference).
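
For future reference: instead of editing the YAML by hand, the missing devices could probably also be added non-interactively with something like the following (untested on my side, I went through lxc profile edit default):

lxc profile device add default root disk path=/ pool=default
lxc profile device add default eth0 nic nictype=bridged parent=lxdbr0 name=eth0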
