Lxc copy --refresh always copies the full container?

I have two Ubuntu 22.04 machines with LXD 5.7 installed.
The storage backend is ZFS.

My idea is to use the second LXD host as a backup system.
When I run
lxc copy container server2: --stateless --refresh
it seems LXD always copies the full ZFS dataset from scratch (I checked with zfs list, and the size on the target is ~0 when the copy starts).

This is fine for smaller containers, but I also have containers with a few hundred GB of storage. Isn't there a smarter, officially supported way? I found Lxc copy –refresh workaround: efficient incremental ZFS snapshot sync with send/receive, which creates and syncs snapshots, similar to making online backups with ESX(i).
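For reference, as far as I understand it, that workaround boils down to an incremental zfs send/receive between the hosts. A rough sketch, assuming the container's dataset lives at default/containers/test on both sides (the pool and dataset names are just placeholders):

# first run: full transfer of an initial snapshot
zfs snapshot default/containers/test@backup1
zfs send default/containers/test@backup1 | ssh server2 zfs receive -F default/containers/test

# later runs: send only the delta between the last two snapshots
zfs snapshot default/containers/test@backup2
zfs send -i @backup1 default/containers/test@backup2 | ssh server2 zfs receive -F default/containers/test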


Good question - I am tracking this thread now.

I have had similar experiences with Ubuntu 20.04 containers, and in my case a copy --refresh is somewhat hit and miss. For container file system changes (additions/deletions of files inside the container), the --refresh typically just updates the remote copy (so it works as expected: a fast and elegant update), but underlying changes to the instance's storage (e.g. deleting a ZFS snapshot) can result in a from-scratch copy being created remotely, which, as you know, takes a long time, especially over a WAN. It's forcing me to rethink some of my fail-over plans.

I see the full copy (transferring the size of the full volume) even without any changes in the running container's filesystem, and also without any snapshots. Does this smart rsync logic only work when the container is down?

What happens if you create a snapshot on the source and then perform a --refresh copy?

The first time that happens, the snapshot should be transferred in its entirety; the second time, only the differences between the main volume and its latest snapshot should be transferred.
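For your example, that would be something like this (container and remote names are placeholders):

lxc snapshot test snap0
lxc copy test <remote>: --refresh    # first run: transfers snap0 in full
lxc copy test <remote>: --refresh    # later runs: only the diff since snap0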


Ah, OK, this makes a big difference!
Once I have at least one snapshot at the source,
lxc copy test <remote>: --refresh
transfers only the diff, as expected. So --refresh actually copies the diff against the latest available snapshot?

That is correct, when using the ZFS-optimized transfer.

We could potentially improve this to fall back to rsync when there are no snapshots on the target.

when there are no snapshots on the target.

Do you mean when there are no snapshots on the source?

I guess just a hint in the CLI help (and in the docs) would be sufficient.

Thanks for your help!
I had already set up scripts, as referenced above, doing a handcrafted zfs send/receive, but I will now switch back to the built-in functionality :wink:
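In case it is useful to someone else, the scheduled job will probably end up as something minimal like this (container and remote names are placeholders):

#!/bin/sh
# Take a fresh snapshot so --refresh always has a recent common basis,
# then push only the differences to the backup host.
lxc snapshot test
lxc copy test server2: --stateless --refresh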


Well, yes, both, but the previously transferred snapshot on the target is used as the basis for the zfs receive of the main volume.
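Conceptually, for the main volume the optimized refresh does something along these lines (the dataset paths and snapshot names are illustrative; LXD handles all of this internally through its migration API):

# a temporary snapshot captures the current state of the main volume,
# then only the delta from the last common snapshot is sent
zfs snapshot default/containers/test@migration
zfs send -i @snap0 default/containers/test@migration | ssh server2 zfs receive -F default/containers/test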