Backup/Copy Strategy

I am a relatively new admin of an LXD environment where I built several Drupal/MariaDB systems. Each system is two simple Ubuntu containers: a basic web server and a database.

I’m trying to protect myself from disaster or human error with nightly (sure, sure, more likely weekly if this is manual) backups.

I have two servers that are nearly identical and only host the LXD containers. The prime is hp-elitedesk-20, while the backup is hp-elitedesk-23.

My caveman-style hammer method right now is this:

On the prime machine (20):

lxc project switch bucks
lxc ls
lxc stop mariadb-bc
lxc stop drupal-bc
lxc snapshot mariadb-bc Bucks_2023_03_02
lxc snapshot drupal-bc Bucks_2023_03_02
lxc start mariadb-bc
lxc start drupal-bc
lxc ls
# lxc exec mariadb-bc -- /bin/bash
# lxc exec drupal-bc -- /bin/bash
# copy from 20 to 23:
lxc copy mariadb-bc/Bucks_2023_03_02 --target-project bucks hp-elitedesk-23:mariadb-bc
lxc copy drupal-bc/Bucks_2023_03_02 --target-project bucks hp-elitedesk-23:drupal-bc

Now I have noticed that if the containers already exist on the backup (23), the copy fails, so I am doing a pre-step on the backup where I:

lxc project switch bucks
lxc stop mariadb-bc
lxc stop drupal-bc
lxc delete mariadb-bc
lxc delete drupal-bc

So my first question: is this a particularly bad method that I should avoid? Is there a better method?

Second, is there a way for me to copy the prime snapshot to the backup machine without deleting the container from the backup first? Maybe copy the snapshot on the prime onto a snapshot on the backup? Does this make sense? I might not be using the best language to convey the question.

Thanks for any help!

Hi @HeneryH,

The --refresh flag is what you may be looking for (even for the first copy).

      --refresh              Perform an incremental copy

Given that you use ZFS as the storage backend and that you also use snapshots on the source (I set up autosnapshot profiles), this is an excellent and very fast mechanism, since it only syncs the delta from the last snapshot.
I recently switched from traditional backups to this kind of distribution.
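
For a single container, using the names from your setup, a one-off refresh copy would look something like this:

# refresh (or initially create) the copy of mariadb-bc on the backup host;
# only the delta since the last common snapshot is transferred
lxc copy mariadb-bc hp-elitedesk-23:mariadb-bc --refresh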

The snippet I use for syncing all running containers to another LXD host (and vice versa):

for c in $(lxc list --format=json | jq -r '.[] | select(.state.status == "Running") | .name'); do
  echo "syncing $c ..."
  /usr/sbin/lxc copy "$c" "$TARGET_LXD:" --mode push --refresh
done

You may argue that the container should be stopped first; if so, just add a stop line before and a start line after the copy command. I trust ZFS’s snapshot mechanism and would restore the last snapshot on the target before a failover …
What would be a cool feature for creating online backups: copy --snapshots-only :wink:

Thank you @hi-ko. I will give this a shot this weekend and do a simulated failover.

I’m smart enough to know that I need to sync/backup but not smart enough to know the most efficient way yet :slight_smile:

On stopping containers… my containers see very low interactive volume, so doing stop/archive/start is not a problem from a user perspective, especially if I cron the jobs for early mornings. If there is any risk reduction I’ll probably keep that step, acknowledging the gains are only minimal.
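
If I keep the stop step, I imagine it would just wrap the copy command, something like this (same container and remote names as above):

# stop, refresh-copy, then restart a single container
lxc stop mariadb-bc
lxc copy mariadb-bc hp-elitedesk-23:mariadb-bc --refresh
lxc start mariadb-bc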

The idea is to use implicit logic instead of explicit commands:

  • create snapshots automatically - e.g. by profile (applying it is sketched after this list):

config:
  snapshots.expiry: 4w
  snapshots.pattern: snapshot-%d
  snapshots.schedule: 0 */4 * * *
  snapshots.schedule.stopped: "false"
description: ""
devices: {}
name: autosnapshot-4w

snapshots.expiry will take care of removing outdated snapshots.

  • copy the full container using --refresh: this syncs the snapshots plus the delta from the last snapshot to the current state, so there is no additional housekeeping needed for which snapshots to copy or delete.
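
Creating and attaching such a profile could look like this (a sketch; the instance name is just an example from your setup):

# create the profile and paste in the config shown above
lxc profile create autosnapshot-4w
lxc profile edit autosnapshot-4w   # opens an editor for the snapshots.* keys

# attach the profile to an existing container
lxc profile add mariadb-bc autosnapshot-4w

# check that the snapshot settings are now in effect
lxc config show mariadb-bc --expanded | grep snapshots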

OK, cool, so I don’t need to automate the snapshots myself via cron if I use the technique you mention above; I just need to archive/copy them over to the backup machine. Is that correct?

#!/bin/bash
#
# For each project
#    sync all running containers to the backup LXD
#

TARGET_LXD=hp-elitedesk-23

for p in $(lxc project list --format=json | jq -r '.[] | .name'); do
  echo "syncing project $p..."
  lxc project switch "$p"
  for c in $(lxc list --format=json | jq -r '.[] | select(.state.status == "Running") | .name'); do
    echo "syncing project $p - $c ..."
    /snap/bin/lxc copy "$c" --target-project "$p" "$TARGET_LXD:" --mode push --refresh
  done
done
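
To run this nightly I’d probably cron it on the prime host, something like this (the script path and time are just placeholders):

# run the sync script every night at 03:30 and log the output
30 3 * * * /usr/local/bin/lxd-sync-to-backup.sh >> /var/log/lxd-sync.log 2>&1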


Correct. You may add a simple mysqldump cron script inside the mariadb container as a fallback lifesaver.
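
Such a dump could be as simple as this sketch (paths and retention are placeholders; it assumes root can connect via the local socket or a ~/.my.cnf):

#!/bin/bash
# dump all databases, compress, and keep roughly a week of files
set -euo pipefail
DUMP_DIR=/var/backups/mysql
mkdir -p "$DUMP_DIR"
mysqldump --all-databases --single-transaction | gzip > "$DUMP_DIR/all-databases-$(date +%F).sql.gz"
find "$DUMP_DIR" -name '*.sql.gz' -mtime +7 -delete

# crontab entry inside the container, e.g. nightly at 02:00:
# 0 2 * * * /usr/local/bin/db-dump.sh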

Nice side effect: you could even occasionally start the copied container for testing, and the sync mechanism will still work, since it diffs from the last snapshot (overwriting any changes on the target).

Thank you @hi-ko

We have a bit about backup topics in our docs too:

https://linuxcontainers.org/lxd/docs/master/backup