Yet another "How to backup LXD powered containers"-thread!

Threads with subjects like this one have been posted many times.
And although I can’t confidently say I’ve read all of them, I did dig through them with quite some effort.

I’m running LXD/LXC with LVM volumes as backends and now want those containers to be backed up to some remote storage.

  1. That remote storage is not running LXD (but can run whatever else is best/easiest (SFTP, WebDAV, FTP, SMB/CIFS, …))
  2. I do not want to locally create images from containers (or their snapshots) before actually exporting/copying them over, as I simply don’t have the space
  3. I’d like to pipe the resulting container-export through e.g. gpg before sending it to remote storage
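Putting the three requirements together, the pipeline I’m after would look roughly like the sketch below. Everything in it is a placeholder (container name, passphrase file, NAS paths), and the stdout form of lxc export is exactly what doesn’t seem to exist today. As a workaround idea, a named pipe (FIFO) would keep the stream off the local disk entirely; whether lxc export accepts a non-seekable target is untested, so tar stands in as the writer to demonstrate the mechanics:

```shell
#!/bin/sh
# What I'd like to be able to do (hypothetical "-" meaning stdout):
#   lxc export mycontainer - \
#     | gpg --batch --symmetric --passphrase-file /root/backup.pass \
#     > /mnt/nas/mycontainer.tar.gz.gpg
#
# Workaround sketch with a FIFO, so nothing is staged on the local disk.
# tar stands in for the writer; the real writer would be
#   lxc export mycontainer "$work/export.fifo"
# (untested whether lxc export can write to a non-seekable target).
set -eu
work=$(mktemp -d)
mkdir -p "$work/nas"                       # stand-in for the NAS mountpoint
echo "container data" > "$work/rootfs.txt" # stand-in for container data

mkfifo "$work/export.fifo"
tar -C "$work" -cf "$work/export.fifo" rootfs.txt &  # writer side
# Reader side: in the real case, pipe through gpg before the redirect.
cat "$work/export.fifo" > "$work/nas/mycontainer.tar"
wait
tar -tf "$work/nas/mycontainer.tar"        # lists: rootfs.txt
rm -r "$work"
```

The point of the FIFO is that the writer and reader run concurrently, so at no moment does a full export image have to fit on the local disk.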

Quite frankly, I didn’t expect this to be so difficult.

First of all, I figured lxc export would be my friend. However, I can’t tell it to export to an arbitrary path (e.g. where my NAS is mounted) without it eating up all my local disk space first, nor to stdout (which would be preferred for piping through e.g. GPG before piping it on to my NAS mountpoint). Ticket:

lxc image export:
Creates the image on my root filesystem (because the path for local images is part of my root filesystem). Apparently I can’t pass an arbitrary path, as of: , so the image will only and always be created locally (first) and, again, eat up all my disk space.

So it can only export images, and I didn’t manage to create those outside of my local rootfs (see above).

I can’t easily use lxc copy, as the remote backup storage doesn’t run LXD.
I did install nginx (with SSL) and tried to implement the parts of the REST API the client (lxc copy) expects.
However, that feels like it shouldn’t be necessary.
Maybe there are scripts, or even just an NGINX config, available somewhere that let lxc copy communicate and transfer images? (That wouldn’t solve my end-to-end encryption requirement, but one step at a time…)
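Since I already have nginx on the backup host and listed WebDAV as an acceptable protocol, one direction I’m considering (purely a sketch; the hostname, cert paths, and the /backups prefix are made up) is nginx’s built-in WebDAV module, so an encrypted export could simply be PUT to the backup host with curl, e.g. `curl -T mycontainer.tar.gpg https://nas.example/backups/` — sidestepping the LXD API entirely:

```nginx
# Minimal WebDAV upload endpoint via ngx_http_dav_module (a sketch,
# assuming nginx was built with that module and certs already exist).
server {
    listen 443 ssl;
    server_name nas.example;
    ssl_certificate     /etc/nginx/nas.crt;
    ssl_certificate_key /etc/nginx/nas.key;

    location /backups/ {
        root /srv;                       # files land under /srv/backups/
        dav_methods PUT DELETE MKCOL;
        create_full_put_path on;
        dav_access user:rw group:r;
        client_max_body_size 0;          # don't cap upload size
    }
}
```

This still wouldn’t let lxc copy talk to the host, but it would give the gpg pipeline somewhere to land its output.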

Am I missing something?
