It’s not impossible to do, it’s just not implemented at this point.
Our migration protocol is fairly simple and doesn’t allow much back and forth between source and target. Supporting this would need several extra rounds:
- First determine exactly which snapshots need to be transferred: the source sends its full snapshot list, the target filters that list based on what it already has and sends it back, then the source figures out the nearest common snapshot to use as the base for each incremental send.
- Then a new temporary snapshot would need to be made on the source, sent, restored on the target and deleted on both sides.
We would also need to add some filesystem details to the migration protocol, so that part of the negotiation would be ensuring that both sides actually share the same base dataset; otherwise send/receive just can’t work.
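To make the extra negotiation concrete, here is a rough sketch of the two source/target steps described above. This is not LXD’s actual migration API; the `Snapshot` type and the idea of matching snapshots by a GUID (as ZFS does internally) are assumptions for illustration.

```go
package main

import "fmt"

// Snapshot is a hypothetical entry in the exchanged snapshot list.
// The GUID field stands in for whatever identifier would let both
// sides recognize the same snapshot contents.
type Snapshot struct {
	Name string
	GUID uint64
}

// filterMissing is what the target would do: keep only the snapshots
// from the source's list that it does not already have.
func filterMissing(source, target []Snapshot) []Snapshot {
	have := make(map[uint64]bool, len(target))
	for _, s := range target {
		have[s.GUID] = true
	}
	var missing []Snapshot
	for _, s := range source {
		if !have[s.GUID] {
			missing = append(missing, s)
		}
	}
	return missing
}

// nearestCommon is what the source would do next: find the most recent
// snapshot both sides share, to use as the base for an incremental send.
// Lists are assumed to be in creation order.
func nearestCommon(source, target []Snapshot) (Snapshot, bool) {
	have := make(map[uint64]bool, len(target))
	for _, s := range target {
		have[s.GUID] = true
	}
	for i := len(source) - 1; i >= 0; i-- {
		if have[source[i].GUID] {
			return source[i], true
		}
	}
	return Snapshot{}, false
}

func main() {
	src := []Snapshot{{"snap0", 1}, {"snap1", 2}, {"snap2", 3}}
	tgt := []Snapshot{{"snap0", 1}}

	missing := filterMissing(src, tgt)
	base, ok := nearestCommon(src, tgt)
	fmt.Println(len(missing), base.Name, ok)
}
```

If no common snapshot exists, the source would have to fall back to a full send (or to rsync), which is one more case the protocol would need to express.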
Today, it’d actually be perfectly fine to do:
- Copy a container from a remote server with zfs on both sides (uses send/receive)
- Move the target container to a btrfs pool (converts everything to subvolumes)
- Move back to the zfs pool (converts everything back to datasets and snapshots)
- Do a refresh from the source
In this case, even though it’s still the same container and the same snapshots, the dataset itself isn’t the same on the source and target, so send/receive cannot work at all. Since we use rsync today, that’s fine. But if we were to support zfs send/receive, we’d need the extra data in the migration protocol so that we can detect this situation and switch to rsync in such cases.
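The detection step could be as small as comparing a couple of fields in an extended negotiation header. The header below is purely hypothetical (field names and the idea of a dataset GUID are assumptions, not the real protocol), but it shows the decision being described: use the optimized path only when both sides run ZFS and still hold the same base dataset, otherwise fall back to rsync.

```go
package main

import "fmt"

// fsHeader is a hypothetical extension to the migration negotiation,
// carrying the filesystem details discussed above.
type fsHeader struct {
	Driver      string // storage driver, e.g. "zfs" or "btrfs"
	DatasetGUID uint64 // identity of the base dataset on this side
}

// chooseTransfer picks zfs send/receive only when both sides are ZFS
// and the container is still the same base dataset; in every other
// case it falls back to rsync, which always works.
func chooseTransfer(source, target fsHeader) string {
	if source.Driver == "zfs" && target.Driver == "zfs" &&
		source.DatasetGUID == target.DatasetGUID {
		return "zfs-send"
	}
	return "rsync"
}

func main() {
	// Same dataset on both ends: the optimized path is safe.
	fmt.Println(chooseTransfer(fsHeader{"zfs", 42}, fsHeader{"zfs", 42}))
	// Container went through a btrfs pool and back: the GUID differs,
	// so only rsync can work.
	fmt.Println(chooseTransfer(fsHeader{"zfs", 42}, fsHeader{"zfs", 99}))
}
```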