I am a relatively new admin of an LXD environment where I built several Drupal/MariaDB systems, each made of two simple Ubuntu containers: a basic web server and a database.
I’m trying to protect myself from disaster or human error with nightly (sure, sure, more likely weekly if this is manual) backups.
I have two servers that are nearly identical and only host the LXD containers. Prime = hp-elitedesk-20, while the backup is hp-elitedesk-23.
So my first question: is this a method I should avoid? Is there a better one?
Second, is there a way for me to copy the prime snapshot to the backup machine without deleting it from the backup first? Maybe copy the snapshot on the prime to a snapshot on the backup? Does this make sense? I might not be using the best language to convey the question.
The --refresh flag is what you may be looking for (even for the first copy):
--refresh Perform an incremental copy
Assuming you use ZFS as the storage backend and that you also use snapshots on the source (I set up auto-snapshot profiles), this is an excellent and very fast mechanism, since it only syncs the delta since the last snapshot.
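For reference, an auto-snapshot profile like the one mentioned can be built from LXD's snapshot scheduling keys. A minimal sketch, assuming a profile name, schedule, and naming pattern of my own choosing:

```shell
# Hypothetical profile that snapshots attached containers daily at 03:00.
# The profile name, cron schedule, and snapshot name pattern are examples.
lxc profile create autosnapshot
lxc profile set autosnapshot snapshots.schedule "0 3 * * *"
lxc profile set autosnapshot snapshots.pattern "auto-%d"

# Attach the profile to a container so the schedule applies:
lxc profile add mycontainer autosnapshot
```

With such a profile in place, the --refresh copies below always have a recent snapshot to diff against.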
I recently switched from traditional backups to this kind of distribution.
The snippet I use for syncing all running containers to another lxd (and vice versa):
TARGET_LXD=hp-elitedesk-23   # remote added beforehand via `lxc remote add`
for c in $(lxc list --format=json | jq -r '.[] | select(.state.status == "Running") | .name'); do
    echo "syncing $c ..."
    /usr/sbin/lxc copy "$c" "$TARGET_LXD:" --mode push --refresh
done
You may now argue that the container should be stopped first; just add a stop line before and a start line after the copy command. I trust ZFS’s snapshot mechanism, and I can restore the last snapshot on the target before failover.
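If you prefer the consistent, offline copy, the stop/copy/start wrapping described above could look like this sketch (the container name is a placeholder):

```shell
#!/bin/bash
# Sketch: stop a container, sync it to the backup LXD, then start it again.
# CONTAINER is a placeholder; TARGET_LXD matches the remote used elsewhere.
set -e
CONTAINER=web01
TARGET_LXD=hp-elitedesk-23

lxc stop "$CONTAINER"
lxc copy "$CONTAINER" "$TARGET_LXD:" --mode push --refresh
lxc start "$CONTAINER"
```

The `set -e` matters here: if the copy fails you still want to notice before blindly restarting and trusting the backup.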
What would be a cool feature for creating online backups: copy --snapshots-only
Thank you @hi-ko . I will give this a shot this weekend and do a simulated failover.
I’m smart enough to know that I need to sync/backup but not smart enough to know the most efficient way yet
On stopping containers… my containers see very low interactive traffic, so doing stop/archive/start is not a problem from a user perspective, especially if I cron the jobs for early mornings. If there is any risk reduction I’ll probably keep that, acknowledging the gains are minimal.
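Scheduling that for early mornings is a one-line cron job. A sketch, where the script path and time are my own assumptions:

```shell
# Hypothetical crontab entry: run the sync script at 03:30 every night,
# appending output to a log for later inspection (paths are assumptions).
30 3 * * * /usr/local/bin/lxd-sync.sh >> /var/log/lxd-sync.log 2>&1
```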
snapshots.expiry will take care of removing outdated snapshots.
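Setting that key on an instance is a single command; the container name and retention period here are just examples:

```shell
# Keep snapshots for 7 days; LXD prunes older ones automatically.
# "mycontainer" and "7d" are example values.
lxc config set mycontainer snapshots.expiry 7d
```

The same key can also be set on a profile so every attached container inherits it.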
Copying the full container using --refresh will sync the snapshots plus the delta from the last snapshot to the current state, so there is no need for any additional housekeeping around copying or deleting snapshots.
OK, cool, so I don’t need to automate the snapshots myself via cron if I use the technique you mention above, just need to archive/copy them over to the backup machine. Is that correct?
#!/bin/bash
#
# For each project
# sync all running containers to the backup LXD
#
TARGET_LXD=hp-elitedesk-23

for p in $(lxc project list --format=json | jq -r '.[] | .name'); do
    echo "syncing project $p..."
    lxc project switch "$p"
    for c in $(lxc list --format=json | jq -r '.[] | select(.state.status == "Running") | .name'); do
        echo "syncing project $p - $c ..."
        /snap/bin/lxc copy "$c" --target-project "$p" "$TARGET_LXD:" --mode push --refresh
    done
done
> OK, cool, so I don’t need to automate the snapshots myself via cron if I use the technique you mention above, just need to archive/copy them over to the backup machine. Is that correct?
Correct. You may add a simple mysqldump cron script inside the MariaDB container as a fallback lifesaver.
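Such a fallback dump script inside the container might look like this sketch; the backup directory, retention period, and credential handling are all assumptions:

```shell
#!/bin/bash
# Hypothetical nightly dump script for inside the MariaDB container.
# Paths and retention are assumptions; credentials are expected in
# ~/.my.cnf so no password appears on the command line.
set -euo pipefail

BACKUP_DIR=/var/backups/mysql
mkdir -p "$BACKUP_DIR"

# Dump all databases in one consistent transaction, compressed,
# with the date in the filename.
mysqldump --all-databases --single-transaction \
  | gzip > "$BACKUP_DIR/all-databases-$(date +%F).sql.gz"

# Delete dumps older than 14 days.
find "$BACKUP_DIR" -name 'all-databases-*.sql.gz' -mtime +14 -delete
```

Since the dump lands inside the container's filesystem, it is carried along by the snapshot/refresh copies above, which is what makes it a useful second line of defense.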
Nice side effect: you can even occasionally start the copied container for testing, and the sync mechanism will still work, since it diffs from the last snapshot (overwriting any changes on the target).