Error: Error transferring instance data: Failed getting instance storage pool name: Instance storage pool not found

Could you please post the exact steps which lead to this error?

First copy the container to the remote offsite server:
lxc stop container
lxc copy container server:container --storage=storage --mode=relay
lxc start container

Then attempt to do a refresh:
lxc copy container server:container --storage=storage --mode=relay --refresh

My guess is that this is related to the fact that the containers have snapshots, which appear to be read-only?

I cannot reproduce that.

Could you please also post how the containers and snapshots are created/deleted before copying or refreshing?

I’ll do some more tests on this tonight and see if I can come up with a full script.
FYI I’m running 5.0.0-b0287c1 at both ends, on 20.04 LTS.
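
In the meantime, here is roughly the shape of what I’ve been testing, as a sketch; the container, remote, and pool names are placeholders:

# create a fresh container with a snapshot, since all my affected containers have snapshots
lxc launch ubuntu:20.04 testct
lxc snapshot testct snap0
lxc stop testct

# initial copy to the remote, then restart the source
lxc copy testct server:testct --storage=storage --mode=relay
lxc start testct

# the refresh step that fails for my existing containers
lxc copy testct server:testct --storage=storage --mode=relay --refresh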

The plot thickens…

I can’t reproduce it either with any new containers I create, so whatever was causing the issue appears to be fixed now.

However, my existing 15 containers which were affected by the original BTRFS problem all show the same error, even though I freshly copied them across. (BTW, it appears to be unrelated to the --storage option, as even containers that are in the default pool at both ends have the same issue.)

Ahhh, here we go…
I can’t delete them from the target either; I get the same BTRFS property set error.

OK, let me completely delete one of them, clean up, and try a fresh copy from scratch to see what happens.

OK I just did a fresh copy of one of my existing containers to the target.
Then I went to delete it on the target and got the exact same BTRFS error as when I do a refresh copy.

lxc delete container
Error: Error deleting storage volume: Failed setting subvolume writable "/var/snap/lxd/common/lxd/storage-pools/default/containers/container": Failed to run: btrfs property set -ts /var/snap/lxd/common/lxd/storage-pools/default/containers/container ro false: ERROR: Could not get subvolume flags: Invalid argument
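
For reference, the flag that error is complaining about can be inspected by hand from the host. This is just a diagnostic sketch, using the path from the error above:

# read the subvolume's read-only property directly (the same property LXD is trying to set)
sudo btrfs property get -ts /var/snap/lxd/common/lxd/storage-pools/default/containers/container ro

# show the subvolume's flags and received UUID
sudo btrfs subvolume show /var/snap/lxd/common/lxd/storage-pools/default/containers/container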

So it is 100% related to this topic:

I will jump over and follow up on that topic so this one can be closed.

Up until now, I have always been very pleased with LXD, but I must say that a bug like this, compromising the integrity of a whole server and killing all its containers at once, is very disturbing. Fortunately, the affected server is a backup server, but still…

I’m going to do the same as @DanielBull: remove everything from the backup server and redo the copies from the source (aka live) server.

Yes, we are sorry about this. It has certainly exposed gaps in our automated testing, which have now been improved to hopefully catch these sorts of regressions in the future.

@monstermunchkin has some further fixes for BTRFS optimized refresh due for LXD 5.1:

I did so too. After upgrading to version 5.1, I deleted all containers on the target backup server, and afterwards removed some leftover ZFS datasets belonging to the containers that the ‘lxc delete’ had left behind.
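
For anyone in the same situation, finding and removing the leftovers looked roughly like this; ‘tank’ stands in for the actual pool name on the backup server:

# list all datasets and snapshots still present under the containers dataset
sudo zfs list -r -t all tank/lxd/containers

# destroy a leftover dataset together with its snapshots (destructive, double-check the name first)
sudo zfs destroy -r tank/lxd/containers/archive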

After that I was able to do one successful ‘copy --refresh’. When I try to do another ‘copy --refresh’ from the source to the target machine, I now get the following error:

/snap/bin/lxc copy --mode push --refresh --stateless --storage default --config boot.autostart=false archive virt-slave:archive
Error: Failed instance migration: websocket: close 1006 (abnormal closure): unexpected EOF

@monstermunchkin please could you help with this? Thanks

I’ll take a look at this.

So I had exactly the same issue as @DanielBull, also with btrfs. I had to manually delete the containers and their snapshots via btrfs, and remove the references in the LXD database (via lxd sql). Now, with 5.1, the initial copy and subsequent refreshes work.
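
Roughly, the manual cleanup looked like this; the paths follow the default snap layout, ‘container’ and ‘snap0’ are placeholders, and the database step is obviously at your own risk:

# delete the snapshot subvolumes first, then the container subvolume itself
sudo btrfs subvolume delete /var/snap/lxd/common/lxd/storage-pools/default/containers-snapshots/container/snap0
sudo btrfs subvolume delete /var/snap/lxd/common/lxd/storage-pools/default/containers/container

# locate the stale volume rows before removing them via lxd sql
lxd sql global "SELECT * FROM storage_volumes WHERE name = 'container';"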

Thank you to the LXD team and to Daniel for the advice and fixes!

This looks like the source server is crashing. Could you please enable the debug log (snap set lxd daemon.debug=true) and paste the log?
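
For the snap package that would be something like:

sudo snap set lxd daemon.debug=true
sudo systemctl reload snap.lxd.daemon   # restart the daemon so the setting takes effect
sudo journalctl -u snap.lxd.daemon -f   # follow the log while reproducing the copy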

Hello Thomas,

when I execute the above copy command, the following lines are written to the log:

https://gist.githubusercontent.com/robberer/6827ca05ab0c39a9c075aee70a57e2ff/raw/bda534e1a57e36e988cea206f3fff8ccc8fdb337/gistfile1.txt

I think your guess is right.

panic: runtime error: slice bounds out of range [:-1]
LXD failed with return code 2

2022-05-05T08:28:44+02:00 lxd.daemon[441075]: time="2022-05-05T08:28:44+02:00" level=debug msg="MigrateInstance started" args="&{archive [] {ZFS [migration_header compress]} true true false <nil>  false true}" instance=archive project=default
2022-05-05T08:28:44+02:00 lxd.daemon[441075]: time="2022-05-05T08:28:44+02:00" level=debug msg="MigrateInstance finished" args="&{archive [] {ZFS [migration_header compress]} true true false map[filesystem:dpool/lxd/containers/archive@migration-e049046a-1feb-49fa-87d8-0ec8b4b9fe44]  false true}" instance=archive project=default
2022-05-05T08:28:44+02:00 lxd.daemon[441075]: time="2022-05-05T08:28:44+02:00" level=debug msg="MigrateInstance started" args="&{archive [] {ZFS [migration_header compress]} true true true map[filesystem:dpool/lxd/containers/archive@migration-e049046a-1feb-49fa-87d8-0ec8b4b9fe44]  false true}" instance=archive project=default
2022-05-05T08:28:45+02:00 lxd.daemon[441075]: time="2022-05-05T08:28:45+02:00" level=debug msg="MigrateInstance finished" args="&{archive [] {ZFS [migration_header compress]} true true true map[filesystem:dpool/lxd/containers/archive@migration-e049046a-1feb-49fa-87d8-0ec8b4b9fe44]  false true}" instance=archive project=default
2022-05-05T08:28:45+02:00 lxd.daemon[441075]: panic: runtime error: slice bounds out of range [:-1]
2022-05-05T08:28:45+02:00 lxd.daemon[441075]: goroutine 7832 [running]:
2022-05-05T08:28:45+02:00 lxd.daemon[441075]: io.ReadAll({0x7f5de0516e70, 0xc0016a5d80})
2022-05-05T08:28:45+02:00 lxd.daemon[441075]: 	/snap/go/9605/src/io/io.go:646 +0x197
2022-05-05T08:28:45+02:00 lxd.daemon[441075]: io/ioutil.ReadAll(...)
2022-05-05T08:28:45+02:00 lxd.daemon[441075]: 	/snap/go/9605/src/io/ioutil/ioutil.go:27
2022-05-05T08:28:45+02:00 lxd.daemon[441075]: github.com/lxc/lxd/lxd/storage/drivers.(*zfs).MigrateVolume(0xc0004c2a50, {{0xc00127e470, 0x7}, {0xc000fe0af0, 0x7}, 0xc000f40360, {0x188a56e, 0xa}, {0x188a9ce, 0xa}, ...}, ...)
2022-05-05T08:28:45+02:00 lxd.daemon[441075]: 	/build/lxd/parts/lxd/src/lxd/storage/drivers/driver_zfs_volumes.go:1900 +0x672
2022-05-05T08:28:45+02:00 lxd.daemon[441075]: github.com/lxc/lxd/lxd/storage.(*lxdBackend).MigrateInstance(0xc000377a40, {0x1baf778, 0xc0004c0000}, {0x1b96140, 0xc0016a5d80}, 0xc000a40f80, 0x0?)
2022-05-05T08:28:45+02:00 lxd.daemon[441075]: 	/build/lxd/parts/lxd/src/lxd/storage/backend_lxd.go:2176 +0x753
2022-05-05T08:28:45+02:00 lxd.daemon[441075]: main.(*migrationSourceWs).Do(0xc000238a00, 0xc0003fe420, 0x0?)
2022-05-05T08:28:45+02:00 lxd.daemon[441075]: 	/build/lxd/parts/lxd/src/lxd/migrate_instance.go:712 +0x1850
2022-05-05T08:28:45+02:00 lxd.daemon[441075]: main.instancePost.func5(0x0?)
2022-05-05T08:28:45+02:00 lxd.daemon[441075]: 	/build/lxd/parts/lxd/src/lxd/instance_post.go:315 +0x45
2022-05-05T08:28:45+02:00 lxd.daemon[441075]: github.com/lxc/lxd/lxd/operations.(*Operation).Run.func1(0xc0003be360, 0xc00175a200?)
2022-05-05T08:28:45+02:00 lxd.daemon[441075]: 	/build/lxd/parts/lxd/src/lxd/operations/operations.go:280 +0x42
2022-05-05T08:28:45+02:00 lxd.daemon[441075]: created by github.com/lxc/lxd/lxd/operations.(*Operation).Run
2022-05-05T08:28:45+02:00 lxd.daemon[441075]: 	/build/lxd/parts/lxd/src/lxd/operations/operations.go:279 +0x118
2022-05-05T08:28:45+02:00 lxd.daemon[440924]: => LXD failed with return code 2

Thanks, I’ve reopened Optimized refresh broken · Issue #10186 · lxc/lxd · GitHub with your stack trace.

With 5.2 there are still issues with the migration command above. Same error 1006.

Debug log on source:

2022-06-06T17:02:26+02:00 lxd.daemon[2145677]: time="2022-06-06T17:02:26+02:00" level=debug msg="Handling API request" ip=@ method=GET protocol=unix url=/1.0 username=root
2022-06-06T17:02:26+02:00 lxd.daemon[2145677]: - (error decoding original message: message key "MESSAGE" truncated)
2022-06-06T17:02:26+02:00 lxd.daemon[2145677]: time="2022-06-06T17:02:26+02:00" level=debug msg="Handling API request" ip=@ method=GET protocol=unix url=/1.0/instances/archive username=root
2022-06-06T17:02:26+02:00 lxd.daemon[2145677]: time="2022-06-06T17:02:26+02:00" level=debug msg="WriteJSON\n\t{\n\t\t\"type\": \"sync\",\n\t\t\"status\": \"Success\",\n\t\t\"status_code\": 200,\n\t\t\"operation\": \"\",\n\t\t\"error_code\": 0,\n\t\t\"error\": \"\",\n\t\t\"metadata\": {\n\t\t\t\"architecture\": \"x86_64\",\n\t\t\t\"config\": {\n\t\t\t\t\"boot.autostart\": \"true\",\n\t\t\t\t\"image.architecture\": \"amd64\",\n\t\t\t\t\"image.description\": \"Debian buster amd64 (20200819_05:24)\",\n\t\t\t\t\"image.os\": \"Debian\",\n\t\t\t\t\"image.release\": \"buster\",\n\t\t\t\t\"image.serial\": \"20200819_05:24\",\n\t\t\t\t\"image.type\": \"squashfs\",\n\t\t\t\t\"raw.apparmor\": \"mount fstype=nfs,\",\n\t\t\t\t\"security.nesting\": \"true\",\n\t\t\t\t\"security.privileged\": \"true\",\n\t\t\t\t\"snapshots.expiry\": \"1w\",\n\t\t\t\t\"snapshots.pattern\": \"snapshot-%d\",\n\t\t\t\t\"snapshots.schedule\": \"0 0 * * *\",\n\t\t\t\t\"snapshots.schedule.stopped\": \"false\",\n\t\t\t\t\"volatile.base_image\": \"aa68be27609634dffd06ef1510bd7ba97cd5d511d93992c0cb8b9e036cd7fa17\",\n\t\t\t\t\"volatile.eth0.host_name\": \"veth5698a127\",\n\t\t\t\t\"volatile.eth0.hwaddr\": \"00:16:3e:d9:2c:0e\",\n\t\t\t\t\"volatile.idmap.base\": \"0\",\n\t\t\t\t\"volatile.idmap.current\": \"[]\",\n\t\t\t\t\"volatile.idmap.next\": \"[]\",\n\t\t\t\t\"volatile.last_state.idmap\": \"[]\",\n\t\t\t\t\"volatile.last_state.power\": \"RUNNING\",\n\t\t\t\t\"volatile.uuid\": \"9d4b4276-ee24-424c-84ca-aa197bd2e0fa\"\n\t\t\t},\n\t\t\t\"devices\": {\n\t\t\t\t\"archive-data\": {\n\t\t\t\t\t\"path\": \"/data\",\n\t\t\t\t\t\"source\": \"/dpool/archive-data\",\n\t\t\t\t\t\"type\": \"disk\"\n\t\t\t\t},\n\t\t\t\t\"root\": {\n\t\t\t\t\t\"path\": \"/\",\n\t\t\t\t\t\"pool\": \"default\",\n\t\t\t\t\t\"size\": \"20GB\",\n\t\t\t\t\t\"type\": \"disk\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"ephemeral\": false,\n\t\t\t\"profiles\": [\n\t\t\t\t\"default\"\n\t\t\t],\n\t\t\t\"stateful\": false,\n\t\t\t\"description\": \"\",\n\t\t\t\"created_at\": \"2021-04-27T19:27:25.6299956+02:00\",\n\t\t\t\"expanded_config\": {\n\t\t\t\t\"boot.autostart\": \"true\",\n\t\t\t\t\"image.architecture\": \"amd64\",\n\t\t\t\t\"image.description\": \"Debian buster amd64 (20200819_05:24)\",\n\t\t\t\t\"image.os\": \"Debian\",\n\t\t\t\t\"image.release\": \"buster\",\n\t\t\t\t\"image.serial\": \"20200819_05:24\",\n\t\t\t\t\"image.type\": \"squashfs\",\n\t\t\t\t\"raw.apparmor\": \"mount fstype=nfs,\",\n\t\t\t\t\"security.nesting\": \"true\",\n\t\t\t\t\"security.privileged\": \"true\",\n\t\t\t\t\"snapshots.expiry\": \"1w\",\n\t\t\t\t\"snapshots.pattern\": \"snapshot-%d\",\n\t\t\t\t\"snapshots.schedule\": \"0 0 * * *\",\n\t\t\t\t\"snapshots.schedule.stopped\": \"false\",\n\t\t\t\t\"volatile.base_image\": \"aa68be27609634dffd06ef1510bd7ba97cd5d511d93992c0cb8b9e036cd7fa17\",\n\t\t\t\t\"volatile.eth0.host_name\": \"veth5698a127\",\n\t\t\t\t\"volatile.eth0.hwaddr\": \"00:16:3e:d9:2c:0e\",\n\t\t\t\t\"volatile.idmap.base\": \"0\",\n\t\t\t\t\"volatile.idmap.current\": \"[]\",\n\t\t\t\t\"volatile.idmap.next\": \"[]\",\n\t\t\t\t\"volatile.last_state.idmap\": \"[]\",\n\t\t\t\t\"volatile.last_state.power\": \"RUNNING\",\n\t\t\t\t\"volatile.uuid\": \"9d4b4276-ee24-424c-84ca-aa197bd2e0fa\"\n\t\t\t},\n\t\t\t\"expanded_devices\": {\n\t\t\t\t\"archive-data\": {\n\t\t\t\t\t\"path\": \"/data\",\n\t\t\t\t\t\"source\": \"/dpool/archive-data\",\n\t\t\t\t\t\"type\": \"disk\"\n\t\t\t\t},\n\t\t\t\t\"eth0\": {\n\t\t\t\t\t\"name\": \"eth0\",\n\t\t\t\t\t\"nictype\": \"bridged\",\n\t\t\t\t\t\"parent\": 
\"br0\",\n\t\t\t\t\t\"type\": \"nic\"\n\t\t\t\t},\n\t\t\t\t\"root\": {\n\t\t\t\t\t\"path\": \"/\",\n\t\t\t\t\t\"pool\": \"default\",\n\t\t\t\t\t\"size\": \"20GB\",\n\t\t\t\t\t\"type\": \"disk\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"name\": \"archive\",\n\t\t\t\"status\": \"Running\",\n\t\t\t\"status_code\": 103,\n\t\t\t\"last_used_at\": \"2022-04-17T06:51:38.346975063Z\",\n\t\t\t\"location\": \"none\",\n\t\t\t\"type\": \"container\",\n\t\t\t\"project\": \"default\"\n\t\t}\n\t}" http_code=200
2022-06-06T17:02:26+02:00 lxd.daemon[2145677]: time="2022-06-06T17:02:26+02:00" level=debug msg="Handling API request" ip=@ method=GET protocol=unix url=/1.0/events username=root
2022-06-06T17:02:26+02:00 lxd.daemon[2145677]: time="2022-06-06T17:02:26+02:00" level=debug msg="Event listener server handler started" id=e369979f-4296-491c-81c4-69a961fab040 local=/var/snap/lxd/common/lxd/unix.socket remote=@
2022-06-06T17:02:26+02:00 lxd.daemon[2145677]: time="2022-06-06T17:02:26+02:00" level=debug msg="Handling API request" ip=@ method=POST protocol=unix url=/1.0/instances/archive username=root
2022-06-06T17:02:26+02:00 lxd.daemon[2145677]: time="2022-06-06T17:02:26+02:00" level=debug msg="API Request\n\t{\n\t\t\"name\": \"\",\n\t\t\"migration\": true,\n\t\t\"live\": false,\n\t\t\"instance_only\": false,\n\t\t\"container_only\": false,\n\t\t\"target\": {\n\t\t\t\"certificate\": \"-----BEGIN CERTIFICATE-----\\nMIICDzCCAZagAwIBAgIRAPH4uf7uzU3XsVWyLIC5l/MwCgYIKoZIzj0EAwMwODEc\\nMBoGA1UEChMTbGludXhjb250YWluZXJzLm9yZzEYMBYGA1UEAwwPcm9vdEBWSVJU\\nLVNMQVZFMB4XDTIxMDQxODEyMzkxNVoXDTMxMDQxNjEyMzkxNVowODEcMBoGA1UE\\nChMTbGludXhjb250YWluZXJzLm9yZzEYMBYGA1UEAwwPcm9vdEBWSVJULVNMQVZF\\nMHYwEAYHKoZIzj0CAQYFK4EEACIDYgAEgktEB6bBZo2AI3a4kTTiLU0Xy9lWaKAx\\nKyYul4VIOpiozD2pdJdapaJgVlFz2VCpdCdekEphASMx6n57N/GqeWPhen0jZ2qZ\\nelZCVIysHoEx73zlVjkIjITW1rP+Wswgo2QwYjAOBgNVHQ8BAf8EBAMCBaAwEwYD\\nVR0lBAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAtBgNVHREEJjAkggpWSVJU\\nLVNMQVZFhwR/AAABhxAAAAAAAAAAAAAAAAAAAAABMAoGCCqGSM49BAMDA2cAMGQC\\nMDGe829FEhXv91uAIML+1hb4qqHBr0mEwHv7WDnAc8LuwzlYOC0+9Nea0rrI7UDz\\nJAIwFIJwoqfSUbtN5uMa0gwMiRj8ZPKXgEAL7Z4L12pUP8bgXDhGnAZyL9t9Ydew\\nwwgG\\n-----END CERTIFICATE-----\\n\",\n\t\t\t\"operation\": \"https://172.16.88.2:8443/1.0/operations/8d1bbc02-c309-4331-aede-b40053de7003\",\n\t\t\t\"secrets\": {\n\t\t\t\t\"control\": \"6059ad24c6d0529ccfad3c3c87861f001938e5f535bb14c0078b668738624da0\",\n\t\t\t\t\"fs\": \"ed86fd834b0dbddd8ef59e8873e5dce755f21908e077e463a4839ffe11bf9500\"\n\t\t\t}\n\t\t},\n\t\t\"pool\": \"\",\n\t\t\"project\": \"\",\n\t\t\"allow_inconsistent\": false\n\t}" ip=@ method=POST protocol=unix url=/1.0/instances/archive username=root
2022-06-06T17:02:26+02:00 lxd.daemon[2145677]: time="2022-06-06T17:02:26+02:00" level=debug msg="New task Operation: c658d360-43d5-4f76-87b8-1184601c8c40"
2022-06-06T17:02:26+02:00 lxd.daemon[2145677]: time="2022-06-06T17:02:26+02:00" level=debug msg="Started task operation: c658d360-43d5-4f76-87b8-1184601c8c40"
2022-06-06T17:02:26+02:00 lxd.daemon[2145677]: time="2022-06-06T17:02:26+02:00" level=info msg="Waiting for migration channel connections"
2022-06-06T17:02:26+02:00 lxd.daemon[2145677]: time="2022-06-06T17:02:26+02:00" level=info msg="Migration channels connected"
2022-06-06T17:02:26+02:00 lxd.daemon[2145677]: time="2022-06-06T17:02:26+02:00" level=debug msg="WriteJSON\n\t{\n\t\t\"type\": \"async\",\n\t\t\"status\": \"Operation created\",\n\t\t\"status_code\": 100,\n\t\t\"operation\": \"/1.0/operations/c658d360-43d5-4f76-87b8-1184601c8c40\",\n\t\t\"error_code\": 0,\n\t\t\"error\": \"\",\n\t\t\"metadata\": {\n\t\t\t\"id\": \"c658d360-43d5-4f76-87b8-1184601c8c40\",\n\t\t\t\"class\": \"task\",\n\t\t\t\"description\": \"Migrating instance\",\n\t\t\t\"created_at\": \"2022-06-06T17:02:26.201844461+02:00\",\n\t\t\t\"updated_at\": \"2022-06-06T17:02:26.201844461+02:00\",\n\t\t\t\"status\": \"Running\",\n\t\t\t\"status_code\": 103,\n\t\t\t\"resources\": {\n\t\t\t\t\"containers\": [\n\t\t\t\t\t\"/1.0/containers/archive\"\n\t\t\t\t],\n\t\t\t\t\"instances\": [\n\t\t\t\t\t\"/1.0/instances/archive\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"metadata\": null,\n\t\t\t\"may_cancel\": false,\n\t\t\t\"err\": \"\",\n\t\t\t\"location\": \"none\"\n\t\t}\n\t}" http_code=202
2022-06-06T17:02:26+02:00 lxd.daemon[2145677]: time="2022-06-06T17:02:26+02:00" level=debug msg="Handling API request" ip=@ method=GET protocol=unix url=/1.0/operations/c658d360-43d5-4f76-87b8-1184601c8c40 username=root
2022-06-06T17:02:26+02:00 lxd.daemon[2145677]: time="2022-06-06T17:02:26+02:00" level=debug msg="WriteJSON\n\t{\n\t\t\"type\": \"sync\",\n\t\t\"status\": \"Success\",\n\t\t\"status_code\": 200,\n\t\t\"operation\": \"\",\n\t\t\"error_code\": 0,\n\t\t\"error\": \"\",\n\t\t\"metadata\": {\n\t\t\t\"id\": \"c658d360-43d5-4f76-87b8-1184601c8c40\",\n\t\t\t\"class\": \"task\",\n\t\t\t\"description\": \"Migrating instance\",\n\t\t\t\"created_at\": \"2022-06-06T17:02:26.201844461+02:00\",\n\t\t\t\"updated_at\": \"2022-06-06T17:02:26.201844461+02:00\",\n\t\t\t\"status\": \"Running\",\n\t\t\t\"status_code\": 103,\n\t\t\t\"resources\": {\n\t\t\t\t\"containers\": [\n\t\t\t\t\t\"/1.0/containers/archive\"\n\t\t\t\t],\n\t\t\t\t\"instances\": [\n\t\t\t\t\t\"/1.0/instances/archive\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"metadata\": null,\n\t\t\t\"may_cancel\": false,\n\t\t\t\"err\": \"\",\n\t\t\t\"location\": \"none\"\n\t\t}\n\t}" http_code=200
2022-06-06T17:02:26+02:00 lxd.daemon[2145677]: time="2022-06-06T17:02:26+02:00" level=debug msg="MigrateInstance started" args="&{0 archive [] {ZFS [migration_header compress]} true true false <nil>  false true <nil>}" instance=archive project=default
2022-06-06T17:02:26+02:00 lxd.daemon[2145677]: time="2022-06-06T17:02:26+02:00" level=debug msg="MigrateInstance finished" args="&{0 archive [] {ZFS [migration_header compress]} true true false map[filesystem:dpool/lxd/containers/archive@migration-77b8bd00-cd07-47e8-a0cf-ccf4ea3b3e72]  false true <nil>}" instance=archive project=default
2022-06-06T17:02:26+02:00 lxd.daemon[2145677]: time="2022-06-06T17:02:26+02:00" level=debug msg="MigrateInstance started" args="&{0 archive [] {ZFS [migration_header compress]} true true true map[filesystem:dpool/lxd/containers/archive@migration-77b8bd00-cd07-47e8-a0cf-ccf4ea3b3e72]  false true <nil>}" instance=archive project=default
2022-06-06T17:02:27+02:00 lxd.daemon[2145677]: time="2022-06-06T17:02:27+02:00" level=debug msg="MigrateInstance finished" args="&{0 archive [] {ZFS [migration_header compress]} true true true map[filesystem:dpool/lxd/containers/archive@migration-77b8bd00-cd07-47e8-a0cf-ccf4ea3b3e72]  false true <nil>}" instance=archive project=default
2022-06-06T17:02:27+02:00 lxd.daemon[2145677]: time="2022-06-06T17:02:27+02:00" level=debug msg="Failure for task operation: c658d360-43d5-4f76-87b8-1184601c8c40: Failed reading migration header: websocket: close 1006 (abnormal closure): unexpected EOF"
2022-06-06T17:02:27+02:00 lxd.daemon[2145677]: time="2022-06-06T17:02:27+02:00" level=debug msg="Event listener server handler stopped" listener=e369979f-4296-491c-81c4-69a961fab040 local=/var/snap/lxd/common/lxd/unix.socket remote=@
2022-06-06T17:02:27+02:00 lxd.daemon[2145677]: time="2022-06-06T17:02:27+02:00" level=error msg="Failed closing listener connection" err="close unix /var/snap/lxd/common/lxd/unix.socket->@: use of closed network connection" listener=e369979f-4296-491c-81c4-69a961fab040

Can you show the output of lxc config show <instance> --expanded and lxc info <instance> on both the source and the target?

I added a PR the other day that will improve the reporting on the specific error from the target:

Are you able to try this on the latest/edge snap channel to see if it reports a specific error?
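
Switching channels is just a snap refresh, e.g. something like:

sudo snap refresh lxd --channel=latest/edge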

Can I switch back to a release channel after using the latest/edge snap channel?