Backup Clarifications: ZFS and Dir Backends

I understand there are several posts on backing up LXD; however, I must admit I am confused about how certain backup techniques work. Some background: I came from VM Workstation and qemu/kvm, where all I backed up were the individual VM folders, so that if my drive failed, I just reinstalled the OS and software and imported the VM folders.

Regardless, I am uncertain about what is actually necessary to back up in LXD.

Looking through the documentation on Backing up a LXD server, it mentions for a full backup:

Full backup
A full backup would include the entirety of /var/lib/lxd or /var/snap/lxd/common/lxd for snap users.

You will also need to appropriately backup any external storage that you made LXD use, this can be LVM volume groups, ZFS zpools or any other resource which isn’t directly self-contained to LXD.

Restoring involves stopping LXD on the target server, wiping the lxd directory, restoring the backup and any external dependency it requires.

Then start LXD again and check that everything works fine.

If I were to use, say, a ZFS partition (as opposed to a loop-based pool) as a backend, am I correct to say it is NOT enough to back up that folder? That is, I could not simply rsync the folder (/var/snap/lxd/common/lxd) to a new computer, install LXD, and expect LXD to work just fine? I imagine this is true even if the ZFS pool for LXD is on a separate drive? If I understand correctly, backing up the documented folder alone is not possible for the ZFS backend because of namespaces (one has to mount the datasets)?

Would copying the LXD snap folder only work with the Dir backend, if I understand @stgraber correctly in Backing up a LXD server?

For those who have more experience, would it be preferable then to use the Dir backend and back up the snap folder (assuming that is all that is necessary to back up)?

Or should I figure out a way to export all my containers individually as tarballs and also include, I think:

  • lxd init --dump (for new install)
  • lxd profiles

I think, in my case, that should be everything I need to back up, or am I missing something major?
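For reference, capturing that configuration might be sketched as follows (assuming a snap-based LXD install; the output file names are placeholders of my choosing):

```shell
# Dump the server configuration as preseed YAML, reusable on a fresh install
lxd init --dump > lxd-preseed.yaml

# Save each profile's configuration to its own YAML file
for p in $(lxc profile list --format csv | cut -d, -f1); do
    lxc profile show "$p" > "profile-$p.yaml"
done
```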

Side Questions:
When using lxc export, does that pause or shut down the container to create the export tarball? Does that differ between the Dir and ZFS backends?

Does the speed of exporting differ between dir and zfs backends?

What does the lxc export tarball actually include? Would it include previous snapshots of the container? Is it completely self-contained (when not using the ‘optimized’ flag, as that would require the new server to use the same backend)?

In your ZFS case, you’d need two things for a full backup:

  • Backup of /var/snap/lxd/common/lxd
  • Backup of your zpool. This can be done with a single zfs send -R of the entire thing into a file, but that’s going to be massive and annoying to deal with. Those in such a situation often instead rely on ZFS replication tooling external to LXD to send daily replicas of their zpool to another system.
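As a rough sketch of that full-pool approach (the pool name lxd, the snapshot name, and the destination host are placeholders):

```shell
# Take a recursive snapshot of the entire pool
zfs snapshot -r lxd@backup1

# Send the full replication stream into a file (can be massive)...
zfs send -R lxd@backup1 > /mnt/external/lxd-pool.zfs

# ...or replicate directly to another system instead
zfs send -R lxd@backup1 | ssh backup-host zfs receive -F backuppool/lxd
```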

Using /var/snap/lxd/common/lxd alone will work perfectly with the dir storage backend when the source of the pool wasn’t set to an existing path outside of /var/snap/lxd/common/lxd/. For all other storage backends, you’ll need more data to recover things in an identical way.
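For the dir backend, that restore procedure could look something like this (a sketch for a snap install; /backup/lxd is a placeholder for wherever the backup copy lives):

```shell
# On the target server: stop LXD, wipe its directory, restore, restart
sudo snap stop lxd
sudo rm -rf /var/snap/lxd/common/lxd
sudo rsync -a /backup/lxd/ /var/snap/lxd/common/lxd/
sudo snap start lxd

# Sanity check that containers are back
lxc list
```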

For example, btrfs may look a lot like dir as far as how it behaves, but making a giant tarball of /var/snap/lxd/common/lxd will cause a LOT of data duplication and a complete loss of btrfs metadata. Restoring that tarball will restore all the data, but LXD will then get quite confused as all the btrfs subvolumes will have disappeared as part of the restore.

In most cases, the most efficient way to handle backups is to have a second system running LXD with a similar storage setup. You can then use lxc copy --refresh combined with some daily snapshots to transfer the delta every day to the target server. This will also use backend-specific optimized transfer methods when available (btrfs/zfs send/receive for example).
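A minimal sketch of that daily routine, assuming a remote named backup has already been added and a container named c1 (both placeholders):

```shell
# One-time initial full copy to the backup server
lxc copy c1 backup:c1

# Daily thereafter: take a snapshot, then transfer only the delta
lxc snapshot c1
lxc copy c1 backup:c1 --refresh
```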

Otherwise, yes, you can use lxc export (or, if intending to restore on the same backend, lxc export --optimized-storage) and store the resulting tarball on some external or network media, but this process is quite expensive in both disk usage and CPU usage.
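A sketch of that export/import cycle (container name and media path are placeholders):

```shell
# Export to external media; add --optimized-storage only if the
# replacement server will use the same storage backend
lxc export c1 /mnt/external/c1-backup.tar.gz

# Later, on the replacement server:
lxc import /mnt/external/c1-backup.tar.gz
lxc start c1
```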
