I was wondering if there is a way to leverage a third party backup tool that is able to easily do scheduled incremental ZFS send/receives of LXD root filesystem datasets.
I have tested this with LXD: I deleted the destination ZFS dataset for a container, then managed to do a syncoid zfs send/receive from the source host to re-create the deleted dataset.
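For context, the replication step looks something like the following. The pool and dataset names (`tank/lxd/containers/web1`, `backuphost`) are hypothetical, and this assumes syncoid is installed on the source host with SSH access to the destination:

```shell
# Incremental ZFS send/receive of a container's root dataset to another host.
# syncoid handles the snapshot bookkeeping and falls back to a full send
# the first time, then does incrementals on subsequent runs.
syncoid tank/lxd/containers/web1 root@backuphost:tank/lxd/containers/web1

# For scheduled runs, a cron entry such as:
# 0 * * * * /usr/sbin/syncoid tank/lxd/containers/web1 root@backuphost:tank/lxd/containers/web1
```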
LXD then successfully launches the newly copied container, but its database knowledge of the container seems lost: you can no longer do “lxc copy” or “lxc move”. The command just hangs and does nothing. I think it’s querying the database and getting no data back.
Is anyone else managing to do this sort of replication between dispersed LXD hosts successfully?
The lxd import command should let you re-create all needed database bits from the ZFS datasets.
You need to have all the datasets back where they belong, and you’ll need to temporarily mount them so lxd import can notice them; with that done, the backup.yaml file will be analyzed and all the needed database entries should get re-created.
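A rough sketch of that recovery sequence, assuming a ZFS pool named `tank`, an LXD storage pool named `default`, and a container named `web1` (all hypothetical names — adapt to your layout):

```shell
# 1. Verify the replicated dataset is present where LXD expects it:
zfs list tank/lxd/containers/web1

# 2. Temporarily mount it so lxd import can find the container's backup.yaml:
zfs set mountpoint=/var/lib/lxd/storage-pools/default/containers/web1 \
    tank/lxd/containers/web1
zfs mount tank/lxd/containers/web1

# 3. Re-create the database entries from backup.yaml:
lxd import web1
```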
It’s certainly more of a disaster recovery procedure than a day to day thing, but I’ve used it myself successfully a few times (after damaging my database due to development work).