This will create a new LXD image from your container and export it as a tarball in your current directory. You can then ship that tarball to your target host and do:
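Something along these lines (a rough sketch only; the tarball name and alias here are placeholders):
lxc image import TARBALL-NAME --alias my-backup
lxc launch my-backup some-container-name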
But that’s quite a bit of indirection and you’ll then have to clean up those temporary images and tarballs… I’d strongly recommend just having the second LXD download the container from the first one.
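For example, on the second server (assuming the first one has its API exposed on the network; the remote name and address are placeholders):
lxc remote add first-lxd 192.0.2.10
lxc copy first-lxd:CONTAINER-NAME CONTAINER-NAME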
OK… let me explain better what I want to do. Suppose I have an LXD server and a NAS.
I want to back up all containers to the NAS. If the LXD server dies at some point, I’d like to restore the containers… that’s it.
Now, while I’m trying to import the container from the created tarball, I get:
$ lxc image import $TARBALL.tar.gz --alias foo
error: Image with same fingerprint already exists
Suppose I have a container; how do I clone it and have it running under another name?
The “lxc image import” failure happens because you’re attempting to import the generated image on the same server that produced it. You should just remove the image from the image store (with “lxc image delete”) after you’ve exported it as a tarball. That’ll solve that.
To clone a container, you’d just do “lxc copy SOURCE DESTINATION”, but that’s on a single local LXD. In your case, it looks like you’re trying to test your backup mechanism.
Say you have a container called “blah”. For backup as an image tarball, you’d do:
lxc snapshot blah backup
lxc publish blah/backup --alias blah-backup
lxc image export blah-backup .
lxc image delete blah-backup
Which will get you a tarball in your current directory.
To restore and create a container from it, you can then do:
lxc image import TARBALL-NAME --alias blah-backup
lxc launch blah-backup some-container-name
lxc image delete blah-backup
This is still pretty indirect, though, and it abuses the image mechanism as a backup mechanism. One alternative you could use is to just generate a tarball of /var/lib/lxd/containers/NAME and dump that on your NAS.
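As a rough sketch (names and paths are placeholders, and on newer LXD versions the container data actually lives under the storage-pools path mentioned below):
tar -czf NAME-backup.tar.gz -C /var/lib/lxd/containers/NAME .
cp NAME-backup.tar.gz /path/to/nas/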
Restoring that is a bit harder though. You’ll need to create a /var/lib/lxd/storage-pools/POOL-NAME/containers/NAME path matching the name of the backed up container. Then if the storage pool is zfs or btrfs or lvm, you’ll need to create the applicable dataset, subvolume or lv and mount it on /var/lib/lxd/storage-pools/POOL-NAME/containers/NAME and then unpack your backup tarball onto it. Lastly, you can call “lxd import NAME” to have LXD re-import the container in the database.
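For the zfs case, that would look roughly like this (a sketch only; the pool and dataset names are placeholders and should be checked against an existing container on your system):
mkdir -p /var/lib/lxd/storage-pools/POOL-NAME/containers/NAME
zfs create -o mountpoint=/var/lib/lxd/storage-pools/POOL-NAME/containers/NAME ZPOOL/containers/NAME
tar -xzf NAME-backup.tar.gz -C /var/lib/lxd/storage-pools/POOL-NAME/containers/NAME
lxd import NAME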
I think we can do something quite a bit simpler by directly allowing the export/import of containers as tarballs but it may take a while until we get to that: https://github.com/lxc/lxd/issues/3730
I have deleted all the snapshots I took under my container (I always keep 3 fresh snapshots for backup purposes).
I have tried again to lxc copy the container. No luck. It seems that on the remote side zfs receive doesn’t get any bytes after some time. On the original LXD server I have straced the process and it is locked in a futex syscall. It hangs forever.
I have tried to publish the image and it eventually succeeded, so I blamed the snapshots for the earlier failure. Note that I have checked for enough space in the mounted image filesystem, as I moved it to the LXD ZFS pool, which has plenty of space.
I have a second “master” server and I have tried to copy a container from it to another “slave” LXD server. Same result: the copy always hangs in the same way. LXD versions and configuration are exactly the same.
I will run some more tests on a test setup and see if I can give you more details or clues.
Future development note
By the way, I use containers quite heavily in production systems and I have no particular issues except backup.
When a container grows beyond 100 GB (mainly data), it is impossible to copy it over from scratch every day to a backup server. Publishing an image is not a good approach either, as it takes a huge amount of space and system resources. Right now I am using ZFS incremental backups, but of course restoring a container would take some work.
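Roughly along these lines (just a sketch; the dataset, snapshot and host names are placeholders):
zfs snapshot POOL/containers/NAME@today
zfs send -i @yesterday POOL/containers/NAME@today | ssh backup-server zfs receive -F BACKUP-POOL/containers/NAME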
So it would be great to have a native system to back up containers “incrementally” to a slave server. This is the main reason why all my colleagues would use VMware instead of LXD.
Sorry for being verbose and sorry for not being very helpful, but my knowledge is not enough to contribute to development!
Works perfectly.
I would add to this thread and suggest everyone use no compression by default, especially if ZFS compression is already enabled. The image export is fast:
lxc publish xenial/snap0 --compression none --alias export-xenial-snap0
Your “blah” recipe worked PERFECTLY for me!
Thank you very much.
BTW, I deployed the latest LXD via snap on my desktop:
Debian-9.6 4.9.0-8-amd64 #1 SMP Debian 4.9.130-2 (2018-10-27)
Still making my way through this exceptional app/tool/system.
Hi tws
Just wanted to know if your case involved two different nodes.
I see you tried the “blah” recipe and it worked fine, but I did not understand whether you were working with one or more nodes. Maybe that’s because of my poor English.
Hi all,
I’m getting this error when running the command lxc publish:
lxc publish postfix-lxc/postfix_backup-test --alias mailserver_backup
Error: Image sync between nodes: Image sync between nodes: failed to begin transaction: not an error
How can I fix this error, please? I’m running LXD clustering with a Ceph backend.
I’m wondering if it is possible to use a copy of the snapshot folder (e.g. /var/lib/lxd/storage-pools/default/snapshots/my-container/my-snapshot) to restore a container?
The benefit of such an approach arises when using a centralized backup solution of some kind: container data is not duplicated (exported to an image tarball), but archived directly from the snapshot to the appropriate location.
Like this:
take a snapshot
back up the snapshot folder - something like Borg would do this in an instant
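For example, roughly (the repository path, container and snapshot names are placeholders):
lxc snapshot my-container my-snapshot
borg create /path/to/backup-repo::my-container-{now} /var/lib/lxd/storage-pools/default/snapshots/my-container/my-snapshot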
It depends on the storage backend but for btrfs/dir, yes, that should be possible.
To restore, you’d either move it back on top of the container’s storage directly, or, on a system that doesn’t have the container at all, unpack it into the right place and then use lxd import following the disaster recovery documentation.
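For a dir pool, a rough sketch of that second case (pool, container and backup paths are placeholders):
mkdir -p /var/lib/lxd/storage-pools/default/containers/my-container
rsync -a /path/to/restored-snapshot/ /var/lib/lxd/storage-pools/default/containers/my-container/
lxd import my-container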