3rd party backup tools with LXD for ZFS - e.g. syncoid


(Jon Clayton) #1

Hi

I was wondering if there is a way to leverage a third party backup tool that is able to easily do scheduled incremental ZFS send/receives of LXD root filesystem datasets.

I have tested this with LXD: I deleted the destination ZFS dataset for the container, and then managed to do a syncoid zfs send/receive back into the deleted dataset (from the source host).
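For clarity, this is the kind of invocation I mean — a sketch only, with placeholder pool/dataset names and destination host; your storage pool layout will differ:

```shell
# One-off incremental replication of a container's root dataset
# (syncoid manages its own snapshots and picks an incremental base
# automatically; "tank/lxd/containers/xeoma" and "dest-host" are placeholders):
syncoid tank/lxd/containers/xeoma root@dest-host:tank/lxd/containers/xeoma

# Scheduled hourly via cron:
# 0 * * * * /usr/sbin/syncoid tank/lxd/containers/xeoma root@dest-host:tank/lxd/containers/xeoma
```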

LXD then successfully launches the newly copied container, but its database knowledge of the container seems to be lost, in that you can no longer do “lxc copy” or “lxc move”; the command just hangs and does nothing. I think it’s making a query to the database and getting no data back.

Is anyone else managing to do this sort of replication between dispersed LXD hosts successfully?

Cheers,

Jon.


(Stéphane Graber) #2

The lxd import command should let you re-create all needed database bits from the ZFS datasets.
You need to have all the datasets back where they belong, and they will need to be temporarily mounted for lxd import to notice them; with that done, the backup.yaml file will be analyzed and all the needed bits should get re-created.

It’s certainly more of a disaster recovery procedure than a day to day thing, but I’ve used it myself successfully a few times (after damaging my database due to development work).
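Roughly, the recovery looks like the following — a sketch under assumptions, since the pool name, mountpoint, and whether you run the snap all depend on your setup:

```shell
# Make the received dataset visible where LXD expects it
# ("tank/lxd/containers/xeoma" and the snap mountpoint are illustrative):
zfs set mountpoint=/var/snap/lxd/common/lxd/storage-pools/default/containers/xeoma \
    tank/lxd/containers/xeoma
zfs mount tank/lxd/containers/xeoma

# Re-create the database entries from the on-disk backup.yaml:
lxd import xeoma
```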


(Jon Clayton) #3

Ok will give that a try thanks :slight_smile:


(Jon Clayton) #4

Did the lxd import and it seemed to go without a hitch; however, copying it back is still not working:

lxd import xeoma --force

root@bramdc-src-01:/var/snap/lxd/common/lxd/storage-pools/default/containers/xeoma# lxc copy xeoma3 hetz: --stateless 
Error: not found

jon@bramdc-src-01:~$ lxc monitor
metadata:
  context: {}
  level: dbug
  message: 'New event listener: 446b95fd-8dd1-4ebd-90c1-e4d6a08524ed'
timestamp: "2018-04-11T22:20:32.422158508Z"
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0
  level: dbug
  message: handling
timestamp: "2018-04-11T22:20:35.032112965Z"
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0/containers/xeoma3
  level: dbug
  message: handling
timestamp: "2018-04-11T22:20:35.384470702Z"
type: logging


metadata:
  context: {}
  level: dbug
  message: 'Database error: &errors.errorString{s:"No such object"}'
timestamp: "2018-04-11T22:20:35.385047439Z"
type: logging

(Jon Clayton) #5

It’s working now, sorry — I just needed to create a new snapshot and send that :slight_smile:
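For anyone hitting the same thing, the gist of the fix was a fresh snapshot pair for the incremental stream — sketched here with placeholder snapshot, dataset, and host names:

```shell
# Take a new snapshot on the source, then send the increment from the
# last common snapshot to the new one (all names are illustrative):
zfs snapshot tank/lxd/containers/xeoma@backup-new
zfs send -i tank/lxd/containers/xeoma@backup-old \
    tank/lxd/containers/xeoma@backup-new | \
    ssh dest-host zfs receive tank/lxd/containers/xeoma
```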