LXD + Existing ZFS Pool with containers: Disk Failure

Hi All,

Setup

Distribution: Ubuntu server
Distribution version: 18.04
kernel_version: 5.3.0-59-generic
LXD version: 4.0.2 LTS
Hardware: 1 disk for OS / 1 disk with a dedicated ZFS pool

I did some research:

I'm wondering, in case of an OS disk failure, would it be possible to reinstall LXD and attach the existing pool to the new LXD installation?
From my understanding it's not possible, and I would have to follow the "backup process".

Is that right?

Thanks!

I am a bit confused.
If you have a disk failure, your pool would be lost.
Or are you referring to a scenario in which the pool is on a different drive than the LXD configuration?

In this case (pool intact), Simos' guide should apply to a new installation (or re-installation) of LXD as well, because in both cases you start with an empty LXD configuration.
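
If the pool sits on its own disk, the new OS installation will not know about it until it is imported. A minimal sketch with plain ZFS tooling, assuming the pool is called `tank` (substitute your own pool name):

$ sudo apt install -y zfsutils-linux   # ZFS userland on the freshly installed system
$ sudo zpool import                    # lists pools found on attached disks
$ sudo zpool import tank               # imports the pool so its datasets become visible

After that, the recovery steps from Simos' guide apply.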

Or are you referring to a scenario in which the pool is on a different drive than the LXD configuration?

Sorry, yes, I'm referring to this scenario!

I have 2 physical drives:

  • sda: operating system (nginx, LXD configuration, etc.)
  • sdb: dedicated ZFS pool

If I lose "sda" and have to reinstall the system + LXD, would it be possible to attach my existing (already populated) ZFS pool?

As I understand it, Simos still has access to his LXD configuration.

I think Simos is describing exactly that scenario:

But disaster strikes, and LXD loses its database and forgets about your containers. Your data is there in the ZFS pool, but LXD has forgotten them because its configuration (database) has been lost.
In this post we see how to recover our containers when the LXD database is, for some reason, gone.

Note: To clarify, all LXD configuration is saved in the database.
So if the database is lost, the configuration is lost with it.
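
As a pointer (assuming the default layout of the deb package used below), the database is just a directory on the OS disk, which is why losing that disk loses the configuration while the pool on the other disk survives:

$ sudo ls /var/lib/lxd/database   # the LXD database directory; all configuration lives here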

So the answer is yes.

Let's do it now: create a VM, install LXD with ZFS for the storage pool, create a few containers, then remove LXD, reinstall it, and reconnect to the storage pool.

I will be using the deb package of LXD for brevity. Adapt to match your use-case.
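
(If you are on the snap package instead, the paths in the transcript below would live under /var/snap/lxd/common/lxd/ rather than /var/lib/lxd/, e.g. /var/snap/lxd/common/lxd/storage-pools/default/containers/mycontainer1 — an assumption worth double-checking on your own system.)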

$ lxc launch ubuntu:18.04 vm1 --vm --profile default --profile vm
Creating vm1
Starting vm1
$ # Wait for the VM to boot up, install the LXD agent, reboot, and become reachable with `lxc exec`.
$ lxc shell vm1
ubuntu@vm1:~$ sudo apt update
ubuntu@vm1:~$ sudo apt install -y zfsutils-linux
ubuntu@vm1:~$ lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (btrfs, dir, lvm, zfs) [default=zfs]: 
Create a new ZFS pool? (yes/no) [default=yes]: 
Would you like to use an existing block device? (yes/no) [default=no]: 
Size in GB of the new loop device (1GB minimum) [default=15GB]: 
Would you like to connect to a MAAS server? (yes/no) [default=no]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=lxdbr0]: 
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
Would you like LXD to be available over the network? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes] 
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: 
ubuntu@vm1:~$ lxc launch ubuntu:18.04 mycontainer1
To start your first container, try: lxc launch ubuntu:18.04

Creating mycontainer1
Starting mycontainer1                       
ubuntu@vm1:~$ lxc launch ubuntu:20.04 mycontainer2
Creating mycontainer2
Starting mycontainer2   
ubuntu@vm1:~$ lxc list -c ns4
+--------------+---------+-----------------------+
|     NAME     |  STATE  |         IPV4          |
+--------------+---------+-----------------------+
| mycontainer1 | RUNNING | 10.182.107.158 (eth0) |
+--------------+---------+-----------------------+
| mycontainer2 | RUNNING | 10.182.107.180 (eth0) |
+--------------+---------+-----------------------+
ubuntu@vm1:~$ sudo apt remove --purge lxd -y
ubuntu@vm1:~$ sudo zfs list
...    # shows output, ZFS storage pool is there. 
ubuntu@vm1:~$ sudo apt install lxd 
ubuntu@vm1:~$ lxc list
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
ubuntu@vm1:~$ sudo zfs list
NAME                                                                              USED  AVAIL  REFER  MOUNTPOINT
default                                                                           885M  12.6G    24K  none
default/containers                                                               16.3M  12.6G    24K  none
default/containers/mycontainer1                                                  7.33M  12.6G   344M  /var/lib/lxd/storage-pools/default/containers/mycontainer1
default/containers/mycontainer2                                                  8.91M  12.6G   527M  /var/lib/lxd/storage-pools/default/containers/mycontainer2
default/custom                                                                     24K  12.6G    24K  none
default/deleted                                                                    24K  12.6G    24K  none
default/images                                                                    868M  12.6G    24K  none
default/images/aa623d1c9562bd954dc4c685a54eacac3e0ceacf7a05a7308de0d1aed3a8996a   525M  12.6G   525M  none
default/images/c8d9c12f5a4448d3cdd4f98f994c587272fbeba249390af0f21eeef69a05cb07   343M  12.6G   343M  none
default/snapshots                                                                  24K  12.6G    24K  none
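ubuntu@vm1:~$ # The container datasets survived on the pool but are not mounted yet; mount them
ubuntu@vm1:~$ # so that "lxd import" can read each container's backup.yaml and recreate its database entry.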
ubuntu@vm1:~$ sudo zfs mount default/containers/mycontainer1
ubuntu@vm1:~$ sudo zfs mount default/containers/mycontainer2
ubuntu@vm1:~$ sudo lxd import mycontainer1
ubuntu@vm1:~$ sudo lxd import mycontainer2
ubuntu@vm1:~$ lxc profile list
+---------+---------+
|  NAME   | USED BY |
+---------+---------+
| default | 2       |
+---------+---------+
ubuntu@vm1:~$ lxc profile show default
config: {}
description: Default LXD profile
devices: {}
name: default
used_by:
- /1.0/containers/mycontainer1
- /1.0/containers/mycontainer2
ubuntu@vm1:~$ lxc network list
+--------+----------+---------+-------------+---------+
|  NAME  |   TYPE   | MANAGED | DESCRIPTION | USED BY |
+--------+----------+---------+-------------+---------+
| enp5s0 | physical | NO      |             | 0       |
+--------+----------+---------+-------------+---------+
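ubuntu@vm1:~$ # The fresh database has no managed bridge and an empty default profile,
ubuntu@vm1:~$ # so recreate lxdbr0 and re-add the eth0 and root devices used by the imported containers.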
ubuntu@vm1:~$ lxc network create lxdbr0
Network lxdbr0 created
ubuntu@vm1:~$ lxc profile device add default eth0 nic name=eth0 nictype=bridged parent=enp5s0
Device eth0 added to default
ubuntu@vm1:~$ lxc profile device add default root disk path=/ pool=default
Device root added to default
ubuntu@vm1:~$ lxc profile show default
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: enp5s0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
- /1.0/containers/mycontainer1
- /1.0/containers/mycontainer2
ubuntu@vm1:~$

At this stage, you should just be able to start the containers and they should work.
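
For completeness, that last step would look something like this (a sketch reusing the container names from the transcript above, not re-run here):

ubuntu@vm1:~$ lxc start mycontainer1 mycontainer2
ubuntu@vm1:~$ lxc list -c ns4   # both containers should come back RUNNING with IPv4 addresses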

I tried this exactly as above and I could not start the containers. They were failing with a networking issue: LXD was trying to set up Open vSwitch and failing, although there was no such configuration in the first place. I am not sure what went wrong, or whether this is an issue fixed in newer versions of LXD. You can give it a go yourself and report back.
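
One detail worth checking if you hit the same error: in the transcript above, eth0 was added with parent=enp5s0, a physical interface, while the freshly created lxdbr0 bridge is left unused. When the parent of a bridged NIC is not a Linux bridge, LXD treats it as an Open vSwitch bridge, which would match the error described. A hedged fix to try, not verified on LXD 4.0:

ubuntu@vm1:~$ lxc profile device set default eth0 parent lxdbr0
ubuntu@vm1:~$ lxc start mycontainer1 mycontainer2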

Thanks very much Simos!
I will test with the latest LXD release and report back here.
