Connection reset by peer during incus storage volume copy

Hello everybody and a happy new year!

I'm experiencing a problem when trying to copy (back up) an Incus storage volume between two IncusOS hosts.

I want to copy a custom volume “opencloudData” from my server to a backup server.

incus --debug storage volume copy server:storage/opencloudData backup:storage/opencloudData
Error: Failed storage volume creation: read tcp 192.168.1.152:51091->192.168.1.186:8443: read: connection reset by peer
incus monitor --type=logging
location: none
metadata:
  context:
    class: websocket
    description: Migrating storage volume
    operation: 055adf9f-edc3-4c4e-b5bc-532badae6023
    project: default
  level: debug
  message: Updated metadata for operation
timestamp: "2026-01-02T10:26:14.097909404Z"
type: logging

Error: read tcp 192.168.1.152:81236->192.168.1.13:8443: read: connection reset by peer

Both systems run Incus 6.20 and IncusOS 202512250102.

I tried it multiple times, and with other storage volumes. The volume size is 311.1 GiB. I rebooted both systems. They are both connected to the same network switch.
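
For what it's worth, a quick way to sanity-check that the target API is reachable at all (a sketch using the IP from the error above; an untrusted GET of /1.0 returns basic server info):

# From the client machine; if this answers, TCP and TLS to the target are fine
# and the reset is happening mid-transfer rather than at connection time.
curl -k https://192.168.1.186:8443/1.0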

I can see the opencloudData volume in the web UI of the backup system at around 312.3 GiB. I tried to do a refresh:

incus --debug storage volume copy server:storage/opencloudData backup:storage/opencloudData
DEBUG  [2026-01-02T12:48:46+01:00] 
	{
		"id": "8ff2d488-c771-4102-a4c5-91bfc8cdab94",
		"class": "task",
		"description": "Copying storage volume",
		"created_at": "2026-01-02T11:48:46.732252295Z",
		"updated_at": "2026-01-02T11:48:46.732252295Z",
		"status": "Running",
		"status_code": 103,
		"resources": {
			"storage_volumes": [
				"/1.0/storage-pools/storage/volumes/custom/opencloudData"
			]
		},
		"metadata": {},
		"may_cancel": false,
		"err": "",
		"location": "none"
	} 
Error: Failed storage volume creation: Error transferring storage volume: Failed receiving volume "default_opencloudData": Failed to run: zfs receive -x mountpoint -F -u storagePool/storage/custom/default_opencloudData: exit status 1 (cannot receive new filesystem stream: zfs receive -F cannot be used to destroy an encrypted filesystem or overwrite an unencrypted one with an encrypted one)

Thanks in advance!

Can you get the incus admin os debug log from both servers?

Given they’re on the same switch, an actual network issue is somewhat unlikely.
More likely would be a crash of the Incus process on one of the two servers causing the transfer failure…

I found the problem, or at least I think I did: I had not retrieved the encryption key for the storage pool of the backup server. After running:

incus admin os system security show

it worked.
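
For anyone else hitting this: the symptom is that the backup pool's encryption key had never been retrieved, so the target side could not work with the encrypted datasets. A minimal check, based only on the commands and fields that appear in this thread:

# Fetching/showing the system security state is what retrieved the missing key here:
incus admin os system security show

# In the storage status output (shown further down in this thread), verify:
#   encryption_key_status: available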

Here are the logs you asked for:

The backup server:

[2026/01/02 11:17:35 CET] systemd-tmpfiles: /usr/lib/tmpfiles.d/legacy.conf:14: Duplicate line for path "/run/lock", ignoring.
[2026/01/02 11:17:35 CET] systemd: systemd-tmpfiles-clean.service: Deactivated successfully.
[2026/01/02 11:17:35 CET] systemd: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories.
[2026/01/02 11:17:35 CET] kernel: audit: type=1130 audit(1767349055.923:410): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=unconfined msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[2026/01/02 11:17:35 CET] kernel: audit: type=1131 audit(1767349055.923:411): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=unconfined msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[2026/01/02 11:32:39 CET] smartd: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 79 to 76
[2026/01/02 12:02:35 CET] smartd: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 76 to 75
[2026/01/02 12:48:47 CET] incusd: time="2026-01-02T11:48:47Z" level=error msg="Error during migration sink" err="Failed receiving volume \"default_opencloudData\": Failed to run: zfs receive -x mountpoint -F -u storagePool/storage/custom/default_opencloudData: exit status 1 (cannot receive new filesystem stream: zfs receive -F cannot be used to destroy an encrypted filesystem or overwrite an unencrypted one with an encrypted one)"
[2026/01/02 16:02:35 CET] smartd: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 75 to 74

and the server:

[2026/01/02 12:01:04 CET] kernel: audit: type=1131 audit(1767351664.744:1723): pid=4271 uid=0 auid=4294967295 ses=4294967295 subj=incus-monitor_</var/lib/incus>//&:incus-monitor_<var-lib-incus>:unconfined msg='unit=NetworkManager-dispatcher comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[2026/01/02 12:01:13 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:02:13 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:03:13 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:04:14 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:05:14 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:06:14 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:07:14 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:08:15 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:09:15 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:10:15 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:11:15 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:12:15 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:13:16 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:13:45 CET] tailscaled: 2026/01/02 11:13:45 control: [v1] new network map (periodic):
[2026/01/02 12:13:45 CET] tailscaled: netmap: self: [<redacted>] auth=machine-authorized u=<redacted> [<redacted>/32 <redacted>/128]
[2026/01/02 12:14:16 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:15:16 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:16:17 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:17:17 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:18:17 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:19:17 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:20:18 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:21:18 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:22:18 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:23:18 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:24:18 CET] tailscaled: 2026/01/02 11:24:18 control: [v1] new network map (periodic):
[2026/01/02 12:24:18 CET] tailscaled: netmap: self: [<redacted>] auth=machine-authorized u=<redacted> [<redacted>/32 <redacted>/128]
[2026/01/02 12:24:19 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:25:19 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:26:19 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:27:19 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:28:20 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:28:24 CET] tailscaled: 2026/01/02 11:28:24 control: [v
[2026/01/02 12:28:24 CET] tailscaled: JSON]1{"controltime":"2026-01-02T11:28:24.514392427Z"}
[2026/01/02 12:29:20 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:29:29 CET] tailscaled: 2026/01/02 11:29:29 control: [v1] new network map (periodic):
[2026/01/02 12:29:29 CET] tailscaled: netmap: self: [<redacted>] auth=machine-authorized u=<redacted> [<redacted>/32 <redacted>/128]
[2026/01/02 12:30:15 CET] smartd: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 72 to 73
[2026/01/02 12:30:15 CET] smartd: Device: /dev/sdd [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 72 to 74
[2026/01/02 12:30:20 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:31:20 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:32:21 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:33:21 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:34:21 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:34:30 CET] tailscaled: 2026/01/02 11:34:30 control: [v1] new network map (periodic):
[2026/01/02 12:34:30 CET] tailscaled: netmap: self: [<redacted>] auth=machine-authorized u=<redacted> [<redacted>/32 <redacted>/128]
[2026/01/02 12:35:21 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:36:21 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:37:22 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:38:22 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:39:22 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:40:22 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:41:23 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:42:23 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:43:23 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:44:23 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:44:41 CET] tailscaled: 2026/01/02 11:44:41 control: [v1] new network map (periodic):
[2026/01/02 12:44:41 CET] tailscaled: netmap: self: [<redacted>] auth=machine-authorized u=<redacted> [<redacted>/32 <redacted>/128]
[2026/01/02 12:45:24 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:45:53 CET] tailscaled: 2026/01/02 11:45:53 [v1] Accept: UDP{<redacted> > <redacted>} 71 ok out
[2026/01/02 12:45:53 CET] tailscaled: 2026/01/02 11:45:53 wg: [v2] [<redacted>] - Sending handshake initiation
[2026/01/02 12:45:53 CET] tailscaled: 2026/01/02 11:45:53 magicsock: adding connection to derp-26 for [<redacted>]
[2026/01/02 12:45:53 CET] tailscaled: 2026/01/02 11:45:53 magicsock: 2 active derp conns: derp-4=cr1h45m0s,wr1h22m0s derp-26=cr0s,wr0s
[2026/01/02 12:45:53 CET] tailscaled: 2026/01/02 11:45:53 control: [v1] PollNetMap: stream=false ep=[<redacted> <redacted> <redacted> <redacted> <redacted>]
[2026/01/02 12:45:53 CET] tailscaled: 2026/01/02 11:45:53 wg: [v2] [<redacted>] - Received handshake response
[2026/01/02 12:45:53 CET] tailscaled: 2026/01/02 11:45:53 wg: [v2] [<redacted>] - Received handshake response
[2026/01/02 12:45:53 CET] tailscaled: 2026/01/02 11:45:53 wg: [v2] [<redacted>] - Sending keepalive packet
[2026/01/02 12:45:53 CET] tailscaled: 2026/01/02 11:45:53 control: [v1] successful lite map update in 9ms
[2026/01/02 12:45:53 CET] tailscaled: 2026/01/02 11:45:53 netcheck: [v1] report: udp=true v6=true mapvarydest=true portmap=? v4a=<redacted> v6a=<redacted> derp=4 derpdist=4v4:14ms,4v6:9ms,14v4:11ms,14v6:9ms,26v4:14ms
[2026/01/02 12:46:03 CET] tailscaled: 2026/01/02 11:46:03 wg: [v2] [<redacted>] - Sending keepalive packet
[2026/01/02 12:46:03 CET] tailscaled: 2026/01/02 11:46:03 wg: [v2] [<redacted>] - Receiving keepalive packet
[2026/01/02 12:46:13 CET] tailscaled: 2026/01/02 11:46:13 netcheck: [v1] report: udp=true v6=true mapvarydest=true portmap=? v4a=<redacted> v6a=<redacted> derp=4 derpdist=4v4:15ms,4v6:9ms,14v4:12ms,14v6:10ms,26v4:15ms
[2026/01/02 12:46:24 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:46:33 CET] tailscaled: 2026/01/02 11:46:33 netcheck: [v1] report: udp=true v6=true mapvarydest=true portmap=? v4a=<redacted> v6a=<redacted> derp=4 derpdist=4v4:15ms,4v6:9ms,14v4:12ms,14v6:10ms,26v4:15ms
[2026/01/02 12:46:59 CET] tailscaled: 2026/01/02 11:46:59 netcheck: [v1] report: udp=true v6=true mapvarydest=true portmap=? v4a=<redacted> v6a=<redacted> derp=4 derpdist=4v4:14ms,4v6:8ms,14v4:11ms,14v6:9ms,26v4:15ms
[2026/01/02 12:47:08 CET] tailscaled: 2026/01/02 11:47:08 magicsock: closing connection to derp-26 (idle), age 1m15s
[2026/01/02 12:47:08 CET] tailscaled: 2026/01/02 11:47:08 magicsock: 1 active derp conns: derp-4=cr1h47m0s,wr1m0s
[2026/01/02 12:47:24 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:48:24 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:49:25 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:50:25 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:51:25 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:52:25 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:53:26 CET] kernel: overlayfs: fs on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs' does not support file handles, falling back to xino=off.
[2026/01/02 12:54:25 CET] tailscaled: 2026/01/02 11:54:25 control: [v
[2026/01/02 12:54:25 CET] tailscaled: JSON]1{"controltime":"2026-01-02T11:54:25.86889241Z"}
[2026/01/02 12:54:25 CET] tailscaled: 2026/01/02 11:54:25 control: [v1] new network map (periodic):
[2026/01/02 12:54:25 CET] tailscaled: netmap: self: [<redacted>] auth=machine-authorized u=<redacted> [<redacted>/32 <redacted>/128]

I will test with another storage volume. But if that was the error, maybe the error message can be improved.

Okay, strange. I tried different volumes. The initial opencloudData volume worked after retrieving the encryption keys. I tried two others: one worked as well, the other did not. All are around 250-350 GiB.

Edit:

incus storage volume copy --refresh

does not work for any of the copied volumes.

Error message:

incus storage volume copy Server:storage/opencloudData Backup:storage/opencloudData --refresh 
Error: Failed storage volume creation: Error transferring storage volume: Volume exists in database but not on storage

After a bit more testing I figured out that copying a snapshot of a volume works most of the time, but I cannot update the volume with the --refresh command. That makes backups very slow, since every run is a full transfer. Does anyone have another idea what to try?

The “Volume exists in database but not on storage” error suggests a bad failure/cleanup code path on our end.

Can you describe what you’re seeing now? Since you mentioned that copying a snapshot works, are you copying it to a volume with the same name as the one that reported the “Volume exists” error?

Basically just trying to figure out the actual state of the source and target so we can work out what may need changing.
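
A sketch of how to compare the two views on each host (assuming shell access to the hosts; the zfs commands are standard OpenZFS and the dataset paths are taken from the errors above):

# What Incus has in its database:
incus storage volume list storage
incus storage volume show storage opencloudData

# What ZFS actually has on disk:
zfs list -r storagePool/storage/custom
zfs get -r encryption,keystatus storagePool/storage/custom

# If the volume shows up in the first pair but default_opencloudData is missing
# from the second, that matches the "exists in database but not on storage" error.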

OK, I am not quite sure I understand exactly what you need, so I'll provide my findings below. If you need more specific information, just let me know.

I tried all the following operations with 3 different volumes:

  1. haBackup (size=744 KiB)
  2. absData (size=221.9 GiB)
  3. opencloudData (size=311.2 GiB)

Copy storage volume:

incus storage volume copy Server:storage/haBackup Backup:storage/haBackup -v      
Storage volume copied successfully!
incus storage volume copy Server:storage/absData Backup:storage/absData
Storage volume copied successfully!
incus storage volume copy Server:storage/opencloudData Backup:storage/opencloudData
Error: Failed storage volume creation: read tcp 192.168.1.152:49202->192.168.1.186:8443: read: connection reset by peer

but the opencloud volume copy was successful with the snapshot method:

incus storage volume copy Server:storage/opencloudData/250104_01 Backup:storage/opencloudData

With the small volume (haBackup) it worked every time. With the bigger ones, it fails from time to time. Sometimes the copy via snapshot fails, sometimes the plain copy. I could not find any pattern.

Trying to update a storage volume with --refresh:

incus storage volume copy Server:storage/haBackup Backup:storage/haBackup --refresh
Storage volume copied successfully!
incus storage volume copy Server:storage/absData Backup:storage/absData --refresh
Storage volume copied successfully!

But I saw it fail yesterday with the absData volume.

I never saw it work with the opencloudData volume; there I always get the following error message:

incus storage volume copy Server:storage/opencloudData/250104_01 Backup:storage/opencloudData --refresh
Error: Failed storage volume creation: Error transferring storage volume: Failed receiving volume "default_opencloudData": Failed to run: zfs receive -x mountpoint -F -u storagePool/storage/custom/default_opencloudData: exit status 1 (cannot receive new filesystem stream: zfs receive -F cannot be used to destroy an encrypted filesystem or overwrite an unencrypted one with an encrypted one)
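
For context, the ZFS error itself should be reproducible outside Incus. A minimal sketch of the limitation (illustrative only, on a scratch pool named tank; zfs send -w produces a raw, still-encrypted stream):

# Encrypted source dataset (prompts for a passphrase) plus a snapshot:
zfs create -o encryption=on -o keyformat=passphrase tank/enc
zfs snapshot tank/enc@s1

# Pre-existing unencrypted target dataset:
zfs create tank/plain

# Forcing the receive would replace an unencrypted filesystem with an encrypted one:
zfs send -w tank/enc@s1 | zfs receive -F tank/plain
# cannot receive new filesystem stream: zfs receive -F cannot be used to destroy
# an encrypted filesystem or overwrite an unencrypted one with an encrypted one

So whether a refresh succeeds plausibly depends on whether the stream arriving at the target is raw/encrypted while the existing dataset is not (or vice versa).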

Conclusion:

Okay, as far as I understand it, the bigger the volume, the more likely it is to fail. The initial copy completes successfully most of the time (via an explicit snapshot or a plain copy).
But when I try to send an incremental backup, it sometimes fails with the error message shown last:
”zfs receive -F cannot be used to destroy an encrypted filesystem or overwrite an unencrypted one with an encrypted one”.

I am a bit confused. Today is the first day that a refresh of the absData volume was successful.

Hope this helps.

Additional information / storage configuration:

Server

- devices:
  - /dev/disk/by-id/wwn-0x*1
  - /dev/disk/by-id/wwn-0x*2
  - /dev/disk/by-id/wwn-0x*3
  encryption_key_status: available
  name: storagePool
  pool_allocated_space_in_bytes: 1.790579130368e+12
  raw_pool_size_in_bytes: 3.985729650688e+12
  state: ONLINE
  type: zfs-raid1
  usable_pool_size_in_bytes: 3.985729650688e+12
  volumes:
  - name: storage
    quota_in_bytes: 0
    usage_in_bytes: 1.790474498048e+12
    use: incus

Backup

- devices:
  - /dev/disk/by-id/nvme-WD_Red_SN700_2000GB_*1
  - /dev/disk/by-id/nvme-WD_Red_SN700_2000GB_*2
  - /dev/disk/by-id/nvme-WD_Red_SN700_2000GB_*3
  encryption_key_status: available
  name: storagePool
  pool_allocated_space_in_bytes: 1.69480896512e+11
  raw_pool_size_in_bytes: 5.995774345216e+12
  state: ONLINE
  type: zfs-raidz1
  usable_pool_size_in_bytes: 3.993279397888e+12
  volumes:
  - name: storage
    quota_in_bytes: 0
    usage_in_bytes: 1.12866406048e+11
    use: incus
# Backup
incus storage show storage                                                           
config:
  source: storagePool/storage
  volatile.initial_source: storagePool/storage
  zfs.pool_name: storagePool/storage
description: ""
name: storage
driver: zfs
used_by:
- /1.0/storage-pools/storage/volumes/custom/absData
- /1.0/storage-pools/storage/volumes/custom/haBackup
status: Created
locations:
- none
# Server
config:
  source: storagePool/storage
  volatile.initial_source: storagePool/storage
  zfs.pool_name: storagePool/storage
description: ""
name: storage
driver: zfs
used_by:
- /1.0/instances/abs
- /1.0/instances/opencloud
- /1.0/storage-pools/storage/volumes/custom/absData
- /1.0/storage-pools/storage/volumes/custom/haBackup
- /1.0/storage-pools/storage/volumes/custom/immichData
- /1.0/storage-pools/storage/volumes/custom/opencloudData
status: Created
locations:
- none

Update on my testing:

I tried to use a different copy method:

Again I used the opencloudData volume, which now has a size of 342.8 GiB.

incus storage volume copy Server:storage/opencloudData Backup:storage/opencloudData --mode push
Storage volume copied successfully!

Surprisingly, this worked this time.
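
For reference, incus storage volume copy supports different transfer modes: pull (the default, where the target connects to the source), push (the source connects to the target), and relay (the client relays the data between the two). A sketch with placeholder names:

incus storage volume copy Server:storage/vol Backup:storage/vol               # pull (default)
incus storage volume copy Server:storage/vol Backup:storage/vol --mode push   # push
incus storage volume copy Server:storage/vol Backup:storage/vol --mode relay  # relay via the client

A push succeeding where a pull fails mostly changes which server initiates the connections, not what data is sent.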

But when I try to send an incremental backup, it fails, this time with a different error message:

incus storage volume copy Server:storage/opencloudData Backup:storage/opencloudData --mode push -v

DEBUG  [2026-01-08T14:47:04+01:00] 
	{
		"id": "ffb43d0f-1f45-4fcc-ae98-ea06f39c02b4",
		"class": "task",
		"description": "Migrating storage volume",
		"created_at": "2026-01-08T13:47:04.696300963Z",
		"updated_at": "2026-01-08T13:47:04.696300963Z",
		"status": "Running",
		"status_code": 103,
		"resources": {
			"storage_volumes": [
				"/1.0/storage-pools/storage/volumes/custom/opencloudData"
			]
		},
		"metadata": {},
		"may_cancel": false,
		"err": "",
		"location": "none"
	} 
Error: Failed storage volume creation: Failed reading migration index header: websocket: close 1006 (abnormal closure): unexpected EOF

If I try with pull mode:

incus storage volume copy Server:storage/opencloudData Backup:storage/opencloudData --refresh --debug -v   

DEBUG  [2026-01-08T14:52:02+01:00] 
	{
		"id": "ef115534-0f90-46b4-a380-f6f7ca200456",
		"class": "task",
		"description": "Copying storage volume",
		"created_at": "2026-01-08T13:52:02.773388155Z",
		"updated_at": "2026-01-08T13:52:02.773388155Z",
		"status": "Running",
		"status_code": 103,
		"resources": {
			"storage_volumes": [
				"/1.0/storage-pools/storage/volumes/custom/opencloudData"
			]
		},
		"metadata": {},
		"may_cancel": false,
		"err": "",
		"location": "none"
	} 
Error: Failed storage volume creation: Error transferring storage volume: Volume exists in database but not on storage
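
One hedged recovery idea for the “exists in database but not on storage” state: delete the stale target volume so the next copy can start from scratch. Note this removes the target-side copy, so double-check the remote and volume name first:

# On the backup remote (hypothetical recovery step):
incus storage volume delete Backup:storage opencloudData

# Then redo the initial copy:
incus storage volume copy Server:storage/opencloudData Backup:storage/opencloudData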

I am running out of ideas now. Has anyone else experienced such behavior? Is this not the right way to make backups?

The unexpected EOF usually indicates an Incus crash. That would then have left some DB entries behind, which is why the second transfer fails.

Can you look at the system log on both source and target servers to see if that captures an Incus crash stacktrace?

As far as I understand, there is no Incus crash. I've appended the two log files:

These are the logs from after the --refresh command.

The log of the backup server:

[2026/01/08 16:39:56 CET] systemd: Unmounting boot.mount - EFI System Partition Automount...
[2026/01/08 16:39:56 CET] systemd: boot.mount: Deactivated successfully.
[2026/01/08 16:39:56 CET] systemd: Unmounted boot.mount - EFI System Partition Automount.
[2026/01/08 16:39:56 CET] incusd: time="2026-01-08T15:39:56Z" level=error msg="Error during migration sink" err="Volume exists in database but not on storage"
[2026/01/08 16:39:56 CET] incusd: time="2026-01-08T15:39:56Z" level=warning msg="Failed closing connection" err="tls: failed to send closeNotify alert (but connection was closed anyway): write tcp 192.168.1.186:8443->192.168.1.195:64502: write: broken pipe"

The log of the Server:

[2026/01/08 16:42:59 CET] incusd: time="2026-01-08T15:42:59Z" level=warning msg="Failed closing connection" err="tls: failed to send closeNotify alert (but connection was closed anyway): write tcp 192.168.1.13:8443->192.168.1.195:64679: write: broken pipe"

Maybe try running incus monitor --pretty while trying to get the EOF to show up again. EOF is not a normal Incus behavior, our code basically doesn’t allow for that to happen in a normal code path, so a Go panic must have happened somewhere for this to show up. (Well, that or an actual network failure, but that’s not super likely here)
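
For example, on both servers, each in its own terminal, while re-running the copy (flags as already used earlier in this thread):

incus monitor --pretty --type=logging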

The operation I tried to perform:

incus storage volume copy Server:storage/opencloudData Backup:storage/opencloudData --mode push -v --debug --refresh

The error I get after the copy command:

Error: Failed storage volume creation: Failed reading migration index header: websocket: close 1006 (abnormal closure): unexpected EOF

The monitor outputs:

Backup:

DEBUG  [2026-01-09T09:31:55Z] Event listener server handler started         id=c8abc50f-1177-44f1-aee7-da6da02a449a local="192.168.1.186:8443" remote="192.168.1.195:60696"
DEBUG  [2026-01-09T09:33:35Z] Matched trusted cert                          fingerprint=cf7a1693c00831f2d03f670bc7cc7fdfd5f3b4f29a4587a2254199217cec4c40 subject="CN=robin@rh-mbp-2.local,O=Linux Containers"
DEBUG  [2026-01-09T09:33:35Z] Handling API request                          ip="192.168.1.195:60776" method=GET protocol=tls url=/1.0 username=cf7a1693c00831f2d03f670bc7cc7fdfd5f3b4f29a4587a2254199217cec4c40
DEBUG  [2026-01-09T09:33:35Z] Matched trusted cert                          fingerprint=cf7a1693c00831f2d03f670bc7cc7fdfd5f3b4f29a4587a2254199217cec4c40 subject="CN=robin@rh-mbp-2.local,O=Linux Containers"
DEBUG  [2026-01-09T09:33:35Z] Matched trusted cert                          fingerprint=cf7a1693c00831f2d03f670bc7cc7fdfd5f3b4f29a4587a2254199217cec4c40 subject="CN=robin@rh-mbp-2.local,O=Linux Containers"
DEBUG  [2026-01-09T09:33:35Z] Handling API request                          ip="192.168.1.195:60778" method=GET protocol=tls url=/1.0/events username=cf7a1693c00831f2d03f670bc7cc7fdfd5f3b4f29a4587a2254199217cec4c40
DEBUG  [2026-01-09T09:33:35Z] Event listener server handler started         id=53d4f392-f79c-4657-bc14-a27a71aa0406 local="192.168.1.186:8443" remote="192.168.1.195:60778"
DEBUG  [2026-01-09T09:33:35Z] Matched trusted cert                          fingerprint=cf7a1693c00831f2d03f670bc7cc7fdfd5f3b4f29a4587a2254199217cec4c40 subject="CN=robin@rh-mbp-2.local,O=Linux Containers"
DEBUG  [2026-01-09T09:33:35Z] Matched trusted cert                          fingerprint=cf7a1693c00831f2d03f670bc7cc7fdfd5f3b4f29a4587a2254199217cec4c40 subject="CN=robin@rh-mbp-2.local,O=Linux Containers"
DEBUG  [2026-01-09T09:33:35Z] Handling API request                          ip="192.168.1.195:60779" method=POST protocol=tls url=/1.0/storage-pools/storage/volumes/custom username=cf7a1693c00831f2d03f670bc7cc7fdfd5f3b4f29a4587a2254199217cec4c40
DEBUG  [2026-01-09T09:33:35Z] New operation                                 class=websocket description="Creating storage volume" operation=74e1d980-c114-4d3b-80b6-bc863dec9ed0 project=default
DEBUG  [2026-01-09T09:33:35Z] Started operation                             class=websocket description="Creating storage volume" operation=74e1d980-c114-4d3b-80b6-bc863dec9ed0 project=default
INFO   [2026-01-09T09:33:35Z] Waiting for migration connections on target   pool=storage project=default push=true volume=opencloudData
INFO   [2026-01-09T09:33:35Z] ID: 74e1d980-c114-4d3b-80b6-bc863dec9ed0, Class: websocket, Description: Creating storage volume  CreatedAt="2026-01-09 09:33:35.301155422 +0000 UTC" Err= Location=none MayCancel=false Metadata="map[control:da09caa11f40c7cb29606593f6e352ebb99cb06a3bdadfd89076875682f61403 fs:016c01a65384489dd3440cf305cba3e18749017abfb04411e9fa3189ac57b68c]" Resources="map[storage_volumes:[/1.0/storage-pools/storage/volumes/custom/opencloudData]]" Status=Running StatusCode=Running UpdatedAt="2026-01-09 09:33:35.301155422 +0000 UTC"
INFO   [2026-01-09T09:33:35Z] ID: 74e1d980-c114-4d3b-80b6-bc863dec9ed0, Class: websocket, Description: Creating storage volume  CreatedAt="2026-01-09 09:33:35.301155422 +0000 UTC" Err= Location=none MayCancel=false Metadata="map[control:da09caa11f40c7cb29606593f6e352ebb99cb06a3bdadfd89076875682f61403 fs:016c01a65384489dd3440cf305cba3e18749017abfb04411e9fa3189ac57b68c]" Resources="map[storage_volumes:[/1.0/storage-pools/storage/volumes/custom/opencloudData]]" Status=Pending StatusCode=Pending UpdatedAt="2026-01-09 09:33:35.301155422 +0000 UTC"
DEBUG  [2026-01-09T09:33:35Z] Allowing untrusted GET                        ip="192.168.1.13:55278" url="/1.0/operations/74e1d980-c114-4d3b-80b6-bc863dec9ed0/websocket?secret=da09caa11f40c7cb29606593f6e352ebb99cb06a3bdadfd89076875682f61403"
DEBUG  [2026-01-09T09:33:35Z] Connecting to operation                       class=websocket description="Creating storage volume" operation=74e1d980-c114-4d3b-80b6-bc863dec9ed0 project=default
DEBUG  [2026-01-09T09:33:35Z] Connected to operation                        class=websocket description="Creating storage volume" operation=74e1d980-c114-4d3b-80b6-bc863dec9ed0 project=default
DEBUG  [2026-01-09T09:33:35Z] Connecting to operation                       class=websocket description="Creating storage volume" operation=74e1d980-c114-4d3b-80b6-bc863dec9ed0 project=default
DEBUG  [2026-01-09T09:33:35Z] Allowing untrusted GET                        ip="192.168.1.13:55284" url="/1.0/operations/74e1d980-c114-4d3b-80b6-bc863dec9ed0/websocket?secret=016c01a65384489dd3440cf305cba3e18749017abfb04411e9fa3189ac57b68c"
INFO   [2026-01-09T09:33:35Z] Migration channels connected on target        pool=storage project=default push=true volume=opencloudData
DEBUG  [2026-01-09T09:33:35Z] Connected to operation                        class=websocket description="Creating storage volume" operation=74e1d980-c114-4d3b-80b6-bc863dec9ed0 project=default
DEBUG  [2026-01-09T09:33:35Z] CreateCustomVolumeFromMigration started       args="{IndexHeaderVersion:1 Name:opencloudData Description: Config:map[size:500GiB volatile.idmap.last:[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000}] volatile.idmap.next:[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000}]] Snapshots:[] MigrationType:{FSType:ZFS Features:[migration_header compress header_zvol_filesystems]} TrackProgress:true Refresh:true RefreshExcludeOlder:false Live:false VolumeSize:0 ContentType:filesystem VolumeOnly:false ClusterMoveSourceName: StoragePool:}" driver=zfs pool=storage project=default volName=opencloudData
INFO   [2026-01-09T09:33:35Z] Migration channels disconnected on target     pool=storage project=default push=true volume=opencloudData
DEBUG  [2026-01-09T09:33:35Z] CreateCustomVolumeFromMigration finished      args="{IndexHeaderVersion:1 Name:opencloudData Description: Config:map[size:500GiB volatile.idmap.last:[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000}] volatile.idmap.next:[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000}]] Snapshots:[] MigrationType:{FSType:ZFS Features:[migration_header compress header_zvol_filesystems]} TrackProgress:true Refresh:true RefreshExcludeOlder:false Live:false VolumeSize:0 ContentType:filesystem VolumeOnly:false ClusterMoveSourceName: StoragePool:}" driver=zfs pool=storage project=default volName=opencloudData
ERROR  [2026-01-09T09:33:35Z] Error during migration sink                   err="Volume exists in database but not on storage"
DEBUG  [2026-01-09T09:33:35Z] Failure for operation                         class=websocket description="Creating storage volume" err="Error transferring storage volume: Volume exists in database but not on storage" operation=74e1d980-c114-4d3b-80b6-bc863dec9ed0 project=default
INFO   [2026-01-09T09:33:35Z] ID: 74e1d980-c114-4d3b-80b6-bc863dec9ed0, Class: websocket, Description: Creating storage volume  CreatedAt="2026-01-09 09:33:35.301155422 +0000 UTC" Err="Error transferring storage volume: Volume exists in database but not on storage" Location=none MayCancel=false Metadata="map[control:da09caa11f40c7cb29606593f6e352ebb99cb06a3bdadfd89076875682f61403 fs:016c01a65384489dd3440cf305cba3e18749017abfb04411e9fa3189ac57b68c]" Resources="map[storage_volumes:[/1.0/storage-pools/storage/volumes/custom/opencloudData]]" Status=Failure StatusCode=Failure UpdatedAt="2026-01-09 09:33:35.301155422 +0000 UTC"
DEBUG  [2026-01-09T09:33:35Z] Event listener server handler stopped         listener=53d4f392-f79c-4657-bc14-a27a71aa0406 local="192.168.1.186:8443" remote="192.168.1.195:60778"
WARNING[2026-01-09T09:33:35Z] Failed closing connection                     err="tls: failed to send closeNotify alert (but connection was closed anyway): write tcp 192.168.1.186:8443->192.168.1.195:60778: write: broken pipe"

Server:

DEBUG  [2026-01-09T09:36:14Z] Event listener server handler started         id=cd5010d1-383b-4573-a857-e95db8e0df9d local="192.168.1.13:8443" remote="192.168.1.195:60896"
DEBUG  [2026-01-09T09:36:24Z] Handling API request                          ip="192.168.1.195:60902" method=GET protocol=tls url=/1.0 username=cf7a1693c00831f2d03f670bc7cc7fdfd5f3b4f29a4587a2254199217cec4c40
DEBUG  [2026-01-09T09:36:24Z] Matched trusted cert                          fingerprint=cf7a1693c00831f2d03f670bc7cc7fdfd5f3b4f29a4587a2254199217cec4c40 subject="CN=robin@rh-mbp-2.local,O=Linux Containers"
DEBUG  [2026-01-09T09:36:24Z] Matched trusted cert                          fingerprint=cf7a1693c00831f2d03f670bc7cc7fdfd5f3b4f29a4587a2254199217cec4c40 subject="CN=robin@rh-mbp-2.local,O=Linux Containers"
DEBUG  [2026-01-09T09:36:24Z] Matched trusted cert                          fingerprint=cf7a1693c00831f2d03f670bc7cc7fdfd5f3b4f29a4587a2254199217cec4c40 subject="CN=robin@rh-mbp-2.local,O=Linux Containers"
DEBUG  [2026-01-09T09:36:24Z] Handling API request                          ip="192.168.1.195:60904" method=GET protocol=tls url=/1.0/storage-pools/storage/volumes/custom/opencloudData username=cf7a1693c00831f2d03f670bc7cc7fdfd5f3b4f29a4587a2254199217cec4c40
DEBUG  [2026-01-09T09:36:24Z] Matched trusted cert                          fingerprint=cf7a1693c00831f2d03f670bc7cc7fdfd5f3b4f29a4587a2254199217cec4c40 subject="CN=robin@rh-mbp-2.local,O=Linux Containers"
DEBUG  [2026-01-09T09:36:24Z] Matched trusted cert                          fingerprint=cf7a1693c00831f2d03f670bc7cc7fdfd5f3b4f29a4587a2254199217cec4c40 subject="CN=robin@rh-mbp-2.local,O=Linux Containers"
DEBUG  [2026-01-09T09:36:24Z] Handling API request                          ip="192.168.1.195:60907" method=GET protocol=tls url=/1.0/events username=cf7a1693c00831f2d03f670bc7cc7fdfd5f3b4f29a4587a2254199217cec4c40
DEBUG  [2026-01-09T09:36:24Z] Event listener server handler started         id=e081eb1d-79b8-4565-bb5d-a64900e224b4 local="192.168.1.13:8443" remote="192.168.1.195:60907"
DEBUG  [2026-01-09T09:36:24Z] Matched trusted cert                          fingerprint=cf7a1693c00831f2d03f670bc7cc7fdfd5f3b4f29a4587a2254199217cec4c40 subject="CN=robin@rh-mbp-2.local,O=Linux Containers"
DEBUG  [2026-01-09T09:36:24Z] Handling API request                          ip="192.168.1.195:60908" method=POST protocol=tls url=/1.0/storage-pools/storage/volumes/custom/opencloudData username=cf7a1693c00831f2d03f670bc7cc7fdfd5f3b4f29a4587a2254199217cec4c40
DEBUG  [2026-01-09T09:36:24Z] New operation                                 class=task description="Migrating storage volume" operation=91699d6d-08e3-4d6a-acff-c52330963727 project=default
DEBUG  [2026-01-09T09:36:24Z] Started operation                             class=task description="Migrating storage volume" operation=91699d6d-08e3-4d6a-acff-c52330963727 project=default
INFO   [2026-01-09T09:36:24Z] ID: 91699d6d-08e3-4d6a-acff-c52330963727, Class: task, Description: Migrating storage volume  CreatedAt="2026-01-09 09:36:24.251861069 +0000 UTC" Err= Location=none MayCancel=false Metadata="map[]" Resources="map[storage_volumes:[/1.0/storage-pools/storage/volumes/custom/opencloudData]]" Status=Running StatusCode=Running UpdatedAt="2026-01-09 09:36:24.251861069 +0000 UTC"
INFO   [2026-01-09T09:36:24Z] ID: 91699d6d-08e3-4d6a-acff-c52330963727, Class: task, Description: Migrating storage volume  CreatedAt="2026-01-09 09:36:24.251861069 +0000 UTC" Err= Location=none MayCancel=false Metadata="map[]" Resources="map[storage_volumes:[/1.0/storage-pools/storage/volumes/custom/opencloudData]]" Status=Pending StatusCode=Pending UpdatedAt="2026-01-09 09:36:24.251861069 +0000 UTC"
INFO   [2026-01-09T09:36:24Z] Waiting for migration connections on source   pool=storage project=default push=true volume=opencloudData
DEBUG  [2026-01-09T09:36:24Z] Matched trusted cert                          fingerprint=cf7a1693c00831f2d03f670bc7cc7fdfd5f3b4f29a4587a2254199217cec4c40 subject="CN=robin@rh-mbp-2.local,O=Linux Containers"
DEBUG  [2026-01-09T09:36:24Z] Handling API request                          ip="192.168.1.195:60910" method=GET protocol=tls url=/1.0/operations/91699d6d-08e3-4d6a-acff-c52330963727 username=cf7a1693c00831f2d03f670bc7cc7fdfd5f3b4f29a4587a2254199217cec4c40
DEBUG  [2026-01-09T09:36:24Z] Matched trusted cert                          fingerprint=cf7a1693c00831f2d03f670bc7cc7fdfd5f3b4f29a4587a2254199217cec4c40 subject="CN=robin@rh-mbp-2.local,O=Linux Containers"
INFO   [2026-01-09T09:36:24Z] Migration channels connected on source        pool=storage project=default push=true volume=opencloudData
DEBUG  [2026-01-09T09:36:24Z] MigrateCustomVolume started                   args="&{IndexHeaderVersion:1 Name:opencloudData Snapshots:[] MigrationType:{FSType:ZFS Features:[migration_header compress header_zvol_filesystems]} TrackProgress:true MultiSync:false FinalSync:false Data:<nil> ContentType:filesystem AllowInconsistent:false Refresh:true Info:0xc000cb2038 VolumeOnly:false ClusterMove:false StorageMove:false}" driver=zfs pool=storage project=default volName=opencloudData
DEBUG  [2026-01-09T09:36:24Z] Websocket: Sending barrier message            address="192.168.1.186:8443"
DEBUG  [2026-01-09T09:36:24Z] Sent migration index header, waiting for response  args="&{IndexHeaderVersion:1 Name:opencloudData Snapshots:[] MigrationType:{FSType:ZFS Features:[migration_header compress header_zvol_filesystems]} TrackProgress:true MultiSync:false FinalSync:false Data:<nil> ContentType:filesystem AllowInconsistent:false Refresh:true Info:0xc000cb2038 VolumeOnly:false ClusterMove:false StorageMove:false}" driver=zfs pool=storage project=default version=1 volName=opencloudData
DEBUG  [2026-01-09T09:36:24Z] MigrateCustomVolume finished                  args="&{IndexHeaderVersion:1 Name:opencloudData Snapshots:[] MigrationType:{FSType:ZFS Features:[migration_header compress header_zvol_filesystems]} TrackProgress:true MultiSync:false FinalSync:false Data:<nil> ContentType:filesystem AllowInconsistent:false Refresh:true Info:0xc000cb2038 VolumeOnly:false ClusterMove:false StorageMove:false}" driver=zfs pool=storage project=default volName=opencloudData
DEBUG  [2026-01-09T09:36:24Z] Failure for operation                         class=task description="Migrating storage volume" err="Failed reading migration index header: websocket: close 1006 (abnormal closure): unexpected EOF" operation=91699d6d-08e3-4d6a-acff-c52330963727 project=default
INFO   [2026-01-09T09:36:24Z] Migration channels disconnected on source     pool=storage project=default push=true volume=opencloudData
INFO   [2026-01-09T09:36:24Z] ID: 91699d6d-08e3-4d6a-acff-c52330963727, Class: task, Description: Migrating storage volume  CreatedAt="2026-01-09 09:36:24.251861069 +0000 UTC" Err="Failed reading migration index header: websocket: close 1006 (abnormal closure): unexpected EOF" Location=none MayCancel=false Metadata="map[]" Resources="map[storage_volumes:[/1.0/storage-pools/storage/volumes/custom/opencloudData]]" Status=Failure StatusCode=Failure UpdatedAt="2026-01-09 09:36:24.251861069 +0000 UTC"
DEBUG  [2026-01-09T09:36:24Z] Event listener server handler stopped         listener=e081eb1d-79b8-4565-bb5d-a64900e224b4 local="192.168.1.13:8443" remote="192.168.1.195:60907"

I could try another network switch, but I never had any problems with the one I am using, so I do not think that is the problem. Both IncusOS servers are on the same subnet.

I copied the volume once again:

incus storage volume copy Server:storage/opencloudData Backup:storage/opencloudData --mode push --debug
Storage volume copied successfully!

and tried again to refresh to see the monitor log:

incus storage volume copy Server:storage/opencloudData Backup:storage/opencloudData --mode push -v --debug --refresh

Error: Failed storage volume creation: zfs send failed: signal: broken pipe ()

The monitor output from the backup server:

DEBUG  [2026-01-09T10:46:06Z] CreateCustomVolumeFromMigration finished      args="{IndexHeaderVersion:1 Name:opencloudData Description: Config:map[size:500GiB volatile.idmap.last:[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000}] volatile.idmap.next:[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000}]] Snapshots:[] MigrationType:{FSType:ZFS Features:[migration_header compress header_zvol_filesystems]} TrackProgress:true Refresh:true RefreshExcludeOlder:false Live:false VolumeSize:0 ContentType:filesystem VolumeOnly:false ClusterMoveSourceName: StoragePool:}" driver=zfs pool=storage project=default volName=opencloudData
INFO   [2026-01-09T10:46:06Z] Migration channels disconnected on target     pool=storage project=default push=true volume=opencloudData
INFO   [2026-01-09T10:46:06Z] ID: fd14a9f7-2a5d-4de5-85e1-65ea7b69e8a9, Class: websocket, Description: Creating storage volume  CreatedAt="2026-01-09 10:46:04.94093398 +0000 UTC" Err="Error transferring storage volume: Failed receiving volume \"default_opencloudData\": Failed to run: zfs receive -x mountpoint -F -u storagePool/storage/custom/default_opencloudData: exit status 1 (cannot receive new filesystem stream: zfs receive -F cannot be used to destroy an encrypted filesystem or overwrite an unencrypted one with an encrypted one)" Location=none MayCancel=false Metadata="map[control:d82be3c155563e8bd01b40819ab76beeccd54797ee8975e7a373b6cc7c155681 fs:6e7474626387f6c5abb47eff2290d2e16a7a93e88706565f7b3f43b7badee228]" Resources="map[storage_volumes:[/1.0/storage-pools/storage/volumes/custom/opencloudData]]" Status=Failure StatusCode=Failure UpdatedAt="2026-01-09 10:46:04.94093398 +0000 UTC"
ERROR  [2026-01-09T10:46:06Z] Error during migration sink                   err="Failed receiving volume \"default_opencloudData\": Failed to run: zfs receive -x mountpoint -F -u storagePool/storage/custom/default_opencloudData: exit status 1 (cannot receive new filesystem stream: zfs receive -F cannot be used to destroy an encrypted filesystem or overwrite an unencrypted one with an encrypted one)"
DEBUG  [2026-01-09T10:46:06Z] Failure for operation                         class=websocket description="Creating storage volume" err="Error transferring storage volume: Failed receiving volume \"default_opencloudData\": Failed to run: zfs receive -x mountpoint -F -u storagePool/storage/custom/default_opencloudData: exit status 1 (cannot receive new filesystem stream: zfs receive -F cannot be used to destroy an encrypted filesystem or overwrite an unencrypted one with an encrypted one)" operation=fd14a9f7-2a5d-4de5-85e1-65ea7b69e8a9 project=default

I think that after the initial copy, the refresh aborts due to a ZFS error and then something breaks. Subsequently, I get the error that the volume exists in the database but not on storage. I have noticed that as soon as this error message appears, the size of the opencloudData volume is no longer displayed in the UI.

Therefore, I believe the following error message is the decisive one:

Failed to run: zfs receive -x mountpoint -F -u storagePool/storage/custom/default_opencloudData: exit status 1 (cannot receive new filesystem stream: zfs receive -F cannot be used to destroy an encrypted filesystem or overwrite an unencrypted one with an encrypted one)

Do you think a fresh install, with a newly created backup storage pool, could help?

Any ideas or recommendations? Should I open a GitHub issue?

Now it works as expected, with IncusOS version 202601260318 and the corresponding Incus version.

I updated both the server and the backup server.