I have two Ubuntu 22.04 hosts with LXD 5.7 installed. The storage backend is ZFS.
My idea is to use the other lxd as a backup system.
When I run `lxc copy container server2: --stateless --refresh`,
it seems LXD is always copying the full ZFS dataset from scratch (I checked with `zfs list`, and the size on the target is ~0 when the copy starts).
I have had similar experiences with Ubuntu 20.04 containers, and in my case I find that a copy `--refresh` is somewhat hit and miss. For container filesystem changes (additions/deletions of files inside the actual container), the `--refresh` typically just updates the remote copy, so it works as expected: a fast and elegant update. But underlying changes to the instance's storage (e.g. deleting a ZFS snapshot) can result in a from-scratch copy being created remotely, which, as you know, takes a long time, especially over a WAN. It's forcing me to rethink some of my fail-over plans.
I see a full copy (transferring the size of the entire volume) even without any changes in the running container's filesystem, and also without any snapshots. Does this smart rsync logic only work if the container is down?
What happens if you create a snapshot on the source and then perform a --refresh copy?
The first time that happens, the snapshot should be transferred in its entirety; the second time, only the differences between the main volume and its latest snapshot should be transferred.
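To illustrate that workflow, a minimal sketch, assuming a container named `test` and a remote named `server2` (both names are placeholders):

```shell
# Create a snapshot on the source; later --refresh copies can then
# send only the delta relative to this reference point.
lxc snapshot test snap0

# First refresh after the snapshot: the snapshot is transferred in full.
lxc copy test server2: --refresh

# Subsequent refreshes send only the difference between the main
# volume and the latest snapshot.
lxc copy test server2: --refresh
```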
Ah - ok this makes a big difference!
Once I have at least one snapshot at the source, `lxc copy test <remote>: --refresh`
transfers only the diff, as expected. So `--refresh` actually copies the diff relative to the latest available snapshot?
I guess just a hint in the CLI help (and in the docs) would be sufficient.
Thanks for your help !
I had already set up scripts, as referenced above, doing handcrafted `zfs` send/restore, but will now switch back to the built-in functionality.
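For anyone doing the same switch, here is a minimal sketch of a cron-able backup script using the built-in mechanism. The container name, remote name, and snapshot naming scheme are all assumptions; snapshot pruning is deliberately left out, since (as noted above) deleting snapshots on the source can trigger a full re-copy on the next `--refresh`:

```shell
#!/bin/sh
# Hypothetical backup script: snapshot first, then incremental copy.
set -e

CONTAINER=test    # placeholder container name
REMOTE=server2    # placeholder remote name

# Fresh snapshot so the next --refresh has a recent reference point.
lxc snapshot "$CONTAINER" "backup-$(date +%Y%m%d%H%M%S)"

# Incremental copy: only the diff since the latest snapshot is sent.
lxc copy "$CONTAINER" "$REMOTE": --refresh --stateless
```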