Backing up containers

Hi,

Is it enough to just copy this file to another place? Will that back up all the containers that were created?

/var/snap/lxd/common/lxd/disks/default.img

thanks,

No, it is not really enough.

The best solution I have found is:

At the host level, do the following (a sketch follows this list):

  • “zfs list” to find your containers’ storage
  • “zfs mount” for each of your default/containers/CONTAINERNAMEs (“default”: this ZFS name depends on your lxd init run, where you were asked to name your ZFS storage)
  • now back up the entire path /var/snap/lxd/common/mntns/var/snap/lxd/common/lxd/storage-pools/default/* (I’m doing that with BackupPC: Open Source Backup to disk, plus scripts which are executed before BackupPC starts working; you can easily set all that up within BackupPC)
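A minimal sketch of those three steps, assuming the snap install and a pool named “default” (adjust the names to your lxd init answers; plain tar stands in for BackupPC here):

# 1. list the per-container datasets
zfs list -r default/containers
# 2. mount each one (repeat per container)
zfs mount default/containers/CONTAINERNAME
# 3. back up the mounted rootfs paths
tar -czf /backup/containers.tar.gz \
  /var/snap/lxd/common/mntns/var/snap/lxd/common/lxd/storage-pools/default/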

After this you are on the safe side.
But this is just one option among several.

P.S.: If you have to copy things into a container, it’s sometimes easier to copy your files and/or directories into that path directly instead of using »lxc file push|pull -r«. The only thing you must do afterwards is
chown -R 1000000:1000000 your_copied_files
That’s all, folks.
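For example, a sketch of that direct-copy approach (snap paths; 1000000 is the default uid/gid offset of unprivileged containers, so adjust if yours differs):

# copy into the container's rootfs on the host, then fix ownership
cp -a ./mydata /var/snap/lxd/common/lxd/storage-pools/default/containers/CTNAME/rootfs/root/
chown -R 1000000:1000000 /var/snap/lxd/common/lxd/storage-pools/default/containers/CTNAME/rootfs/root/mydata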

The best way to back up containers is to copy the container to another LXD host.

lxc copy CTNAME host2:CTNAME_backup
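For that to work, host2 first has to be registered as a remote. A minimal sketch, assuming the trust-password workflow (names and the password are examples):

# on host2: expose the API and set a trust password
lxc config set core.https_address "[::]:8443"
lxc config set core.trust_password s3cret
# on the source host: add the remote, then copy
lxc remote add host2 host2.example.com --password s3cret
lxc copy CTNAME host2:CTNAME_backup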

hi,

thanks to Sandious and Tom,

my need is quite simple: I have two disks forming a ZFS pool, and all my data is stored there.
I also have another hard disk with Ubuntu 18.04 Server, on which I set up multiple containers, namely samba, database1 and database2; the default.img is on this OS hard disk.

I attached various ZFS datasets from the ZFS pool to the containers.

I don’t worry about the data for now, as it is a ZFS mirrored pool. What’s left is the OS drive, which I can easily re-install in case of a problem; after that I would have to install those containers again, which might be tedious. So I was thinking: just back up the default.img and zfs import it later. Will this work?

Thanks

Please check this reply from Stéphane.

You can use tarballs to export the container. It’s recommended to use a second LXD node or VM.

Hi,
Thanks for the info. After all the installations, I did two things:

  1. clonezilla the OS drive to another hard disk, so in case of a hard disk failure I can just mount that drive
  2. exported all the containers to an external drive (publish, export; see the sketch after this list)
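That publish/export route looks roughly like this (container and alias names are examples):

# snapshot, publish the snapshot as an image, then export it to a file
lxc snapshot database1 bak
lxc publish database1/bak --alias database1-backup
lxc image export database1-backup /mnt/external/database1-backup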

The above should be enough for now, but I was thinking that maybe all I need is:

  1. re-install the OS drive
  2. zfs import the default.img
    this might be easier (a sketch of the import follows)
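If so, the import step could look like this (a hedged sketch: “default” is the pool name lxd init suggests, and the disks directory is the snap default):

# default.img is a file-backed zpool; point zpool at its directory
zpool import -d /var/snap/lxd/common/lxd/disks default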

Note that this answer from @stgraber, although correct (of course :-)), is not complete anymore, since @stgraber and his team added a direct export to tarball without the publishing step.

lxc export <container> </path/to/destination.tar.gz>

It’s far slower than copying to another LXD server anyway, but it works even with a target that doesn’t support LXD. I use it with a NAS and sshfs.
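For instance, a sketch of that NAS workflow (host and paths are examples):

# mount the NAS over ssh, export straight onto it, then unmount
sshfs nas:/volume1/backups /mnt/nas
lxc export CTNAME /mnt/nas/CTNAME-$(date +%F).tar.gz
fusermount -u /mnt/nas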


In terms of disaster-recovery management, you have to develop your own strategy (see also Backup rotation scheme - Wikipedia). Before you roll it out, you must test it several times. Have you tested backing up a copy of default.img and then restoring it? Does it work? Does it really work as you expected?

IMHO it makes no sense to back up by making full snapshots and copying them to a remote host if you have as many containers as you described. A better solution is to back up only the modified data (the GFS principle, Grandfather-Father-Son) instead of the full monty every single time, because you have to copy it all to the backup host (think of bandwidth, performance, bottlenecks and so on). But it’s up to you.
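On ZFS, “only the modified data” maps naturally onto incremental snapshots. A minimal sketch (dataset, snapshot and host names are examples, and backuphost must already hold the earlier snapshot):

# send only the delta between last week's and this week's snapshot
zfs snapshot default/containers/database1@week2
zfs send -i default/containers/database1@week1 default/containers/database1@week2 | ssh backuphost zfs receive backup/database1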


The question is how often, how many, how big each tarball is, and how to copy a tarball to the remote host(s). By rsync? By rsync over SSH? By zfs send/receive? The fact of the matter is: you have to copy.
Secondly, if one has a lot of containers, does she have to handle all that manually? In short, what are the costs?

Depending on your analysis of the situation, you have to work out how to balance performance against security. And you have to test any strategy before you put it in place. Presumably the toolbox in your garage contains more than just one screwdriver, doesn’t it?
Eventually all thoughts lead to one point: is it enough to follow suggestions blindly in order to avoid testing a solution yourself?

Hi Sandious,

Thanks for the suggestions; I need to test it to see if it works.

Testing is always important. Like I said, a tarball is not the ultimate solution.
The best and easiest solution is a remote copy of the container.

Disagree.
Are you really sure?
From day one I had a lot of trouble accomplishing that, because your suggestion relies on a plain, unprotected, barrier-free path to the internet. Another issue is that it works only one way.
What does that mean?
Trust whatever LXD provides instead of thinking about what it actually does?
OK, the other way around:
Try to copy a container from one site to another (via the internet) as you suggested. How long does it take on average, roughly? Is it safe? Encrypted? And now extrapolate that to more than one container and more than just one site.
My questions are not focused on industrial usage. Just think of all those people trying to survive on their services in their own one-man-show daily business.
The main question is: what is LXD for?

A really clever solution would be if LXD were able to copy ONLY what is distinct to each container, not the full monty including all that kind of OVERHEAD.

I just tried lxc copy between 2 low-end internet servers (with 100 Mbit/s each) and got about 5 MByte/s, that is about 40 Mbit/s.
From one of these 2 internet servers to another low-end server behind a customer ADSL connection (30 Mbit/s down / 10 Mbit/s up), I got about 8 Mbit/s with lxc copy. Note that you can only do the copy from the external internet server toward the internal local server; no upload is possible without adding local firewall rules, since this internal server is masqueraded and the internet server can’t connect in the reverse direction.

From the same low-end server behind ADSL toward a NAS on the internal network at 1 Gbit/s, using lxc export on the same image, I got about 20 Mbit/s with the container stopped and 10 Mbit/s with the container live. The speed penalty of lxc export is not slight.

It’s not possible to do a live copy between 2 LXD servers without a thingy called CRIU that I have never investigated, so ATM export can work for me with live containers (at a speed price), while copy works only for stopped containers.
LXD copy works only over HTTPS AFAIK, so there is no spying to fear.

However, I have concerns about security myself.
I’d not recommend a straight backup over the internet with lxc copy (unless you set up a VPN).
On an internal network it could be acceptable IMO, especially in development. On the internet I have only used it with special firewall rules restricting access to specific IPs, and even then only for the time needed to migrate containers; after the migration I remove the passwords and tear down the firewall rules.
Maybe I’m paranoid, but I don’t think that LXD is as tested and checked as SSH from a security point of view. And LXD server links are as powerful as SSH: you can do lxc exec against a remote LXD server (over a private connection of course, with a password). It’s a bit scary when you realize it.
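Those temporary rules could be as simple as this sketch (the peer IP is an example; 8443 is LXD’s default API port):

# allow the LXD API only from one trusted address, drop everyone else
iptables -A INPUT -p tcp --dport 8443 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 8443 -j DROP
# after the migration, delete the rules again
iptables -D INPUT -p tcp --dport 8443 -s 203.0.113.10 -j ACCEPT
iptables -D INPUT -p tcp --dport 8443 -j DROP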


Totally agree!

What about TLS 1.3 implementations? And what does it mean for all those in the EU and China? Man in the middle! You guys in the US have had it since the 1970s through AT&T and others (hint: phone taps, FBI). And? Happy? Doesn’t it matter?

If I were a smart guy developing things like LXD, I would try to bring my work up to a higher level, with a broader perspective on the future, because it is and was so MUCH work to develop and roll out.

Wondering

What is the actual ROI of LXD? Just fun?

Have you ever tried Borg?
I use it now with LXC containers to back up LVM snapshots.
I plan to switch to LXD and I’m searching for useful information, including backup strategies.
Is it still possible to back up a snapshot of the LVM volume on which the container resides?
The advantage of Borg is the deduplication of the repo.
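For reference, the LVM-snapshot-plus-Borg pattern usually looks something like this sketch (volume group, volume name and repo path are examples):

# freeze a point-in-time view of the container volume, back it up, clean up
lvcreate --snapshot --size 1G --name ct-snap /dev/vg0/containers
mount -o ro /dev/vg0/ct-snap /mnt/snap
borg create /backup/repo::containers-{now} /mnt/snap
umount /mnt/snap
lvremove -y /dev/vg0/ct-snap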