Backup the container and install it on another server

Let’s say I had the target LXD server on a (local) IP address of 172.20.14.123. Would the command then be

lxc remote add <remote-alias> 172.20.14.123

?

Yes, that’s correct.

You’ll need to make sure to set a couple of configuration keys on that remote server so that it listens on the network though:

lxc config set core.https_address 172.20.14.123
lxc config set core.trust_password SOME-PASSWORD

You will be prompted for the password when adding the remote.
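
For example, from the source server the whole flow would then look roughly like this (a sketch; my-remote and mycontainer are placeholder names, not from the thread):

lxc remote add my-remote 172.20.14.123
# you will be asked for the trust password set above
lxc copy mycontainer my-remote:mycontainer

After that the copy should show up in lxc list my-remote:.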

Hello,

I tried to publish an image in order to export a container to a new server, since lxc copy always hangs (maybe a slow network), but I always get the error below.

Any idea?

root@hi2:~# lxc publish pol1 --alias pol1_image
error: exit status 1

INFO

root@hi2:~# lxc --version
2.0.10

root@hi2:~# lxc info|more
config:
  core.https_address: '[::]:8443'
  core.trust_password: true
  storage.zfs_pool_name: lxd
api_extensions:
- id_map
api_status: stable
api_version: "1.0"
auth: trusted
public: false
environment:
  addresses:
  - XX.XX.XX.XX:8443
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    MIIFSjCCAzKgAwIBAgIRAOIeoZNQwp5eqya7L/wuyU8wDQYJKoZIhvcNAQELBQAw
    .
    .
    a6C6TexVhNikGz7omX0=
    -----END CERTIFICATE-----
  certificate_fingerprint: dd171ea983483e614f3f78418e5f4e90aa404b241c59b09da29ff422e7bf7482
  driver: lxc
  driver_version: 2.0.8
  kernel: Linux
  kernel_architecture: x86_64
  kernel_version: 4.4.0-87-generic
  server: lxd
  server_pid: 8279
  server_version: 2.0.10
  storage: zfs
  storage_version: "5"

Anything interesting in /var/log/lxd/lxd.log?

Dear Stephane,

Unfortunately, nothing interesting.
However:

  1. I have deleted all the snapshots I had taken of my container (I always keep 3 fresh snapshots for backup purposes).
  2. I have tried again to lxc copy the container. No luck. It seems that on the remote side zfs receive doesn’t receive any bytes after some time. On the original LXD server I straced the process and it is locked in a futex syscall. It hangs forever.
  3. I have tried to publish the image and it eventually succeeded, so I blamed the snapshots for the earlier failure. Note that I checked for enough space in the mounted image filesystem, as I moved it to the LXD ZFS pool, which has plenty of space.
  4. I have a second “master” server and I have tried to copy a container from it to another “slave” LXD server. Same result: the copy always hangs in the same way. LXD versions and configuration are exactly the same.
  5. I will run more tests on a test installation and see if I can give you more details or clues.

Future development note
By the way, I am using containers quite heavily in production systems and I have no particular issues except backup.
When a container grows beyond 100 GB (mainly data) it is impossible to copy it over from scratch every day to a backup server. Publishing an image is not a good way either, as it takes a huge amount of space and system resources. For now I am using ZFS incremental backups, but of course restoring a container that way would take some work.
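
Roughly, a ZFS incremental send looks like this (just a sketch: it assumes LXD’s default lxd/containers/<name> dataset layout with the lxd pool from above, and the backup host, backup pool and snapshot names are made up):

# initial full send of the container's dataset to the backup host
zfs snapshot lxd/containers/pol1@backup-1
zfs send lxd/containers/pol1@backup-1 | ssh backup-host zfs receive backup/pol1

# later, send only what changed since the previous snapshot
zfs snapshot lxd/containers/pol1@backup-2
zfs send -i backup-1 lxd/containers/pol1@backup-2 | ssh backup-host zfs receive -F backup/pol1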
So it would be great to have a native system to back up containers “incrementally” to a slave server. This is the main reason why all my colleagues would use VMware instead of LXD.
Sorry for being verbose and sorry for not being very helpful, but my knowledge is not enough to contribute to development!

Thanks a lot for your help

Andrea.

I’m not speaking as an expert user here, but BackupPC might possibly be able to help you.

I will take a look, thanks.

However, as I have “complained” :slight_smile: I really hope LXD will have a native incremental backup tool one day!

It would be really nice to have a native backup option that would address all the problems I have read about:

  • Incremental backups for large containers (rclone compatibility)
  • Non-corrupt databases in the backups

Thanks a lot!

Works perfectly.
I would add to this thread and suggest everyone use no compression by default, especially if ZFS compression is already in use. Creating the images is then fast.
lxc publish xenial/snap0 --compression none --alias export-xenial-snap0

Acknowledging that the maintainer has said this is an abuse of the provided imaging mechanisms, I made a simple script to back up all of my containers:

#!/usr/bin/env bash
set -ex

# Where the exported tarballs should end up
BACKUP_DIR=/path/to/where/backups/should/live
# All container names on this host, one per array entry
HOSTS=($(lxc list -c n --format csv))

for HOST in "${HOSTS[@]}"
do
    BACKUP_NAME=${HOST}-$(date +"%Y-%m-%d")

    # Snapshot the container, publish the snapshot as an image,
    # export that image to a tarball, then clean up again
    lxc snapshot "${HOST}" auto-backup
    lxc publish "${HOST}/auto-backup" --alias "${BACKUP_NAME}"
    lxc image export "${BACKUP_NAME}" "${BACKUP_DIR}/${BACKUP_NAME}.tar.gz"
    lxc image delete "${BACKUP_NAME}"
    lxc delete "${HOST}/auto-backup"
done
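
For what it’s worth, a script like this can then be run nightly from cron; the script path here is just an example:

# e.g. in /etc/cron.d/lxd-backup: run the backup script every night at 02:30
30 2 * * * root /usr/local/bin/lxd-backup.sh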

Greetings Stéphane et al.

Your “blah” recipe worked PERFECTLY for me!
Thank you very much.

BTW, I deployed the latest LXD by means of snap on my desktop:
Debian-9.6 4.9.0-8-amd64 #1 SMP Debian 4.9.130-2 (2018-10-27)
Still making my way through this exceptional app/tool/system.

Best regards.
Tom

Hi tws,
Just wanted to know if your case involved two different nodes.

I see you tried the “blah” recipe and it worked fine, but I did not understand whether you were working with one node or more. Maybe that is because of my poor English.

Anyway, thanks in advance for your answer!

Cheers

Note that if both LXD hosts are connected to the Internet, you can now also perform lxc move or lxc copy.
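
For example, once the other host has been added as a remote (the names here are placeholders):

# keep the local container and create a copy on the other host
lxc copy mycontainer other-host:mycontainer
# or move it over entirely (typically done with the container stopped)
lxc move mycontainer other-host:mycontainer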

Hi all,
I’m getting this error when running the command lxc publish:
lxc publish postfix-lxc/postfix_backup-test --alias mailserver_backup
Error: Image sync between nodes: Image sync between nodes: failed to begin transaction: not an error
How can I fix this error, please? I am running LXD clustering with a Ceph backend.

Hi all!

I’m wondering if it is possible to use a copy of the snapshot folder (e.g. /var/lib/lxd/storage-pools/default/snapshots/my-container/my-snapshot) to restore a container?

The benefit of such an approach arises when using a centralized backup solution of some kind: container data is not duplicated (exported to an image tarball), but archived directly from the snapshot to the appropriate location.

Like this:

  • take a snapshot
  • back up the snapshot folder - something like Borg would do this in an instant (roughly as sketched below)
  • remove the snapshot
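
Something like this, I imagine (only a sketch: it assumes the dir-style snapshot path from above and an existing Borg repository at /backup/repo, both of which are just examples):

lxc snapshot my-container my-snapshot
borg create /backup/repo::my-container-{now} /var/lib/lxd/storage-pools/default/snapshots/my-container/my-snapshot
lxc delete my-container/my-snapshot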

It depends on the storage backend but for btrfs/dir, yes, that should be possible.

To restore, you’d either move it back on top of the container’s storage directly, or, on a system that doesn’t have the container at all, you could use lxd import following the disaster recovery documentation.
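
A minimal sketch of that second path, assuming the dir backend and the non-snap /var/lib/lxd paths used above (check the disaster recovery documentation for the authoritative steps):

# restore the container's files, including its backup.yaml, back under the pool, e.g.
#   /var/lib/lxd/storage-pools/default/containers/my-container/
# then re-create the database entry from that on-disk state
lxd import my-container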

Thank you!

I have a doubt about this command: lxc publish blah/backup --alias blah-backup. Will it publish the container publicly, meaning for anybody?

You’d need --public for the resulting image to be marked as publicly available.
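
For example:

lxc publish blah/backup --alias blah-backup --public

Without --public, the image is only available to trusted clients.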

This script backs up all containers, but if I need a daily backup of only some selected containers, how can I do that with this script?
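
One way, I think, would be to replace the lxc list line in the script above with a hand-picked list of container names (web1 and db1 here are just hypothetical examples):

# back up only these containers instead of everything on the host
HOSTS=(web1 db1)

The for loop underneath stays exactly the same.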