Problem copying an instance snapshot to another Incus cluster (Ceph backend)

hello,
I’m building two Incus clusters, three nodes each, with a Ceph cluster running on the nodes of each. Incus uses the ceph driver only (RBD), no cephfs/cephobject.
Cluster A is the primary; cluster B is meant to be a backup at a remote location.

I run a test container (ubutes1) on a node in cluster A (hostname osd1), take one snapshot, and try to copy that snapshot to a node in cluster B (hostname cephtest1).
osd1 and cephtest1 are already in each other's remote list.
The container ubutes1 is running.
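For reference, the snapshot and the remotes were set up roughly like this (a sketch; the earlier incus remote add steps on each side are omitted):

root@osd1:~# incus remote list                    # cephtest1 is listed as a remote
root@osd1:~# incus snapshot create ubutes1 snap0
root@osd1:~# incus info ubutes1                   # snap0 shows up under Snapshots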
Command:
root@osd1:~# incus copy --mode=push ubutes1/snap0 cephtest1:

I got this error:

Error: Failed instance migration: Failed migration on source: Error from migration control target: Failed creating instance on target: Problem with ceph import-diff: ([exit status 74]) rbd: failed to decode diff banner
rbd: import-diff failed: (74) Bad message
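One thing I can try to narrow this down is checking whether the diff stream produced on the source side looks sane before it is sent (the image and snapshot names below are my guess at the usual Incus naming scheme on the Ceph pool, container_<name> and snapshot_<name>):

root@osd1:~# rbd export-diff rbdpool1-lz4/container_ubutes1@snapshot_snap0 /tmp/snap0.diff
root@osd1:~# head -c 16 /tmp/snap0.diff

As far as I understand the rbd diff format, a valid v1 stream starts with the text "rbd diff v1", which is the banner the target side complains about.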

Edit:

I checked the remote cluster: the image is created on Ceph, but there is no instance in Incus.
If I repeat the command a second time, the error message is:

Error: Failed instance creation: Error transferring instance data: Failed migration on target: Failed creating instance on target: Volume already exists on storage but not in database
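To get back to a clean state before retrying, I remove the leftover image on cluster B by hand, since Incus has no record of it (a sketch; the image name is taken from what rbd ls shows on the target pool):

root@cephtest1:~# rbd ls rbdpool1-lz4                             # shows the orphaned container_ubutes1 image
root@cephtest1:~# rbd snap purge rbdpool1-lz4/container_ubutes1   # drop its snapshots first, if any
root@cephtest1:~# rbd rm rbdpool1-lz4/container_ubutes1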

I'd appreciate any help, thank you.

What version of Incus is used on source and target?

And the Ceph version too.

All nodes run a clean installation of Debian 12.

I’m using Incus LTS 6.0.1, following the installation guide here:
https://github.com/zabbly/incus

Ceph is version 18.2.4 (e7ad5345525c7aa95470c26863873b581076945d) reef (stable), installed with cephadm following the installation guide in the ceph.com docs, with ceph-common installed from the Ceph repo.

I created the Ceph pool rbdpool1-lz4 using the Ceph dashboard, and created the Incus storage pool named incuspool1-rbd-lz4 following the "How to manage storage pools" section of the Incus documentation for clustered storage, as sketched below.
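From memory, the pool creation on cluster A went roughly like this (osd2 and osd3 are the other two member names, which I haven't listed above):

root@osd1:~# incus storage create incuspool1-rbd-lz4 ceph --target osd1
root@osd1:~# incus storage create incuspool1-rbd-lz4 ceph --target osd2
root@osd1:~# incus storage create incuspool1-rbd-lz4 ceph --target osd3
root@osd1:~# incus storage create incuspool1-rbd-lz4 ceph source=rbdpool1-lz4

The same was done on cluster B against its own members and Ceph pool.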

The default Incus profile on both clusters is identical:

config: {}
description: Default Incus profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br5
    type: nic
  root:
    path: /
    pool: incuspool1-rbd-lz4
    type: disk
name: default
used_by: []
project: default

The Ceph clusters use both a cluster network and a public network. In this setup, the cluster network of each Ceph cluster only links its own nodes over a separate switch and is not connected to the rest of the network. I assume all communication in the Incus context goes over the public network.
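To illustrate, the network split in each Ceph cluster looks roughly like this in ceph.conf (the subnets here are placeholders, not the real ones):

[global]
    public_network  = 10.10.10.0/24      # reachable by Incus and by the other site
    cluster_network = 192.168.50.0/24    # OSD replication only, on the isolated switch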

Context:
I’ve been running a scheme like this for two years with LXD 5.0 on a ZFS backend: no clustering, just host A locally and host B as a backup at the remote location.
Some scripts launch a container on host A, take snapshots periodically, and run lxc copy to transfer those snapshots to host B.
I want to use the same approach on these Incus/Ceph clusters, roughly as sketched below.
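For reference, the intended script on the Incus side is essentially this (snapshot naming and scheduling are placeholders; the copy is the same command that fails above):

#!/bin/sh
# Periodic backup sketch: snapshot the container locally, then push that
# snapshot to the backup cluster as a standalone instance.
INSTANCE=ubutes1
REMOTE=cephtest1
SNAP="auto-$(date +%Y%m%d-%H%M)"

incus snapshot create "$INSTANCE" "$SNAP"
incus copy --mode=push "$INSTANCE/$SNAP" "$REMOTE:$INSTANCE-$SNAP"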