Using bin.linux.incus-migrate.x86_64 is not working completely

Hi,

I am trying to migrate a Debian Stretch client (a KVM guest) to Incus 6.19.

I already installed bin.linux.incus-migrate.x86_64 on the Debian Stretch machine.

On the Incus host I created a profile named hercules1:

config:
  boot.autorestart: "yes"
  boot.autostart: "yes"
  limits.cpu: "2"
  limits.memory: 2GiB
  security.privileged: "yes"
  snapshots.expiry: 1d
  snapshots.schedule: 5 0 * * *
description: ""
devices:
  lan:
    nictype: bridged
    parent: lan
    type: nic
  root:
    path: /
    pool: hercules1
    raw.mount.options: usrquota,grpquota,acl,user_xattr
    type: disk
name: hercules1
used_by:
- /1.0/instances/hercules1
project: default

and a storage pool named hercules1 too:

config:
  source: /srv/containers/hercules1
description: ""
name: hercules1
driver: dir
used_by:
- /1.0/instances/hercules1
- /1.0/profiles/hercules1
status: Created
locations:
- none

So far so good.

Starting the migration:

Instance to be created:
Name: hercules1
Project: default
Type: container
Source: /
Profiles:

  • hercules1

Storage pool: hercules1

Additional overrides can be applied at this stage:

  1. Begin the migration with the above configuration
  2. Override profile list
  3. Set additional configuration options
  4. Change instance storage pool or volume size
  5. Change instance network
  6. Add additional disk
  7. Change additional disk storage pool

Please pick one of the options above [default=1]: 1
Instance hercules1 successfully created

As far as I can see it creates the container hercules1, but with only 561M in it, and the prompt returns after about 1 minute… it would be almost impossible to rsync nearly 500G that fast…

What am I missing?

Where can I see errors when using bin.linux.incus-migrate.x86_64?

Or is there another way to migrate?

Btw, hercules1 is a Samba server… 80)

thanks

So you say it returns to the prompt after 1 minute.
Does it show any error at that point?

Can you try running incus monitor --pretty in another terminal while this is going on?
My guess is that there’s something on the filesystem that’s making rsync upset, but I’d have expected that to lead to an error…

Hi Graber,

Some additional info:

/dev/mapper/containers-hercules1   492G  561M  466G   1% /srv/containers/hercules1
/srv/containers/hercules1# ls -l
total 32
drwx--x--x 2 root root 4096 dez  2 14:09 buckets
drwx--x--x 3 root root 4096 dez  3 08:56 containers
drwx--x--x 2 root root 4096 dez  2 14:09 containers-snapshots
drwx------ 2 root root 4096 dez  2 14:09 custom
drwx------ 2 root root 4096 dez  2 14:09 custom-snapshots
drwx------ 2 root root 4096 dez  2 14:09 images
drwx------ 2 root root 4096 dez  2 14:09 virtual-machines
drwx------ 2 root root 4096 dez  2 14:09 virtual-machines-snapshots
root@cerberus:/srv/containers/hercules1# du -hsm *
1       buckets 
561     containers 
1       containers-snapshots 
1       custom 
1       custom-snapshots 
1       images 
1       virtual-machines 
1       virtual-machines-snapshots 
root@cerberus:/srv/containers/hercules1# cd containers/hercules1/rootfs/
bin/        dev/        home/       lib64/      media/      opt/        root/       sbin/       sys/        usr/         
boot/       etc/        lib/        lost+found/ mnt/        proc/       run/        srv/        tmp/        var/   
root@cerberus:/srv/containers/hercules1# incus list
+-----------+---------+-----------------------+------+-----------+-----------+ 
|   NAME    |  STATE  |         IPV4          | IPV6 |   TYPE    | SNAPSHOTS | 
+-----------+---------+-----------------------+------+-----------+-----------+ 
| base      | STOPPED |                       |      | CONTAINER | 0         | 
+-----------+---------+-----------------------+------+-----------+-----------+ 
| hercules  | RUNNING | 192.168.100.200 (lan) |      | CONTAINER | 1         | 
+-----------+---------+-----------------------+------+-----------+-----------+ 
| hercules1 | STOPPED |                       |      | CONTAINER | 0         | 
+-----------+---------+-----------------------+------+-----------+-----------+ 
| jetsoft   | RUNNING | 192.168.100.5 (lan)   |      | CONTAINER | 1         | 
+-----------+---------+-----------------------+------+-----------+-----------+ 
| painel    | RUNNING | 192.168.100.3 (lan)   |      | CONTAINER | 1         | 
+-----------+---------+-----------------------+------+-----------+-----------+ 
root@cerberus:/srv/containers/hercules1# incus start hercules1
root@cerberus:/srv/containers/hercules1# incus list
+-----------+---------+-----------------------+------+-----------+-----------+ 
|   NAME    |  STATE  |         IPV4          | IPV6 |   TYPE    | SNAPSHOTS | 
+-----------+---------+-----------------------+------+-----------+-----------+ 
| base      | STOPPED |                       |      | CONTAINER | 0         | 
+-----------+---------+-----------------------+------+-----------+-----------+ 
| hercules  | RUNNING | 192.168.100.200 (lan) |      | CONTAINER | 1         | 
+-----------+---------+-----------------------+------+-----------+-----------+ 
| hercules1 | STOPPED |                       |      | CONTAINER | 0         | 
+-----------+---------+-----------------------+------+-----------+-----------+ 
| jetsoft   | RUNNING | 192.168.100.5 (lan)   |      | CONTAINER | 1         | 
+-----------+---------+-----------------------+------+-----------+-----------+ 
| painel    | RUNNING | 192.168.100.3 (lan)   |      | CONTAINER | 1         | 
+-----------+---------+-----------------------+------+-----------+-----------+ 
root@cerberus:/srv/containers/hercules1# incus shell hercules1
Error while executing alias expansion: incus exec hercules1 -- su -l 
Error: Instance is not running

Hmm, I tested 3 times and got the same error. Here is the part of the incus monitor output that you asked for:

time="2025-12-03T08:56:31-03:00" level=debug msg="Updated metadata for operation" class=websocket description="Creating instance" operation=8febefff-cff8-4226-a9b0-cf2c45584186 project=default 
time="2025-12-03T08:56:31-03:00" level=info msg="ID: 8febefff-cff8-4226-a9b0-cf2c45584186, Class: websocket, Description: Creating instance" CreatedAt="2025-12-03 08:56:01.991523332 -0300 -03" Err= Location=none MayCancel=false Metadata="map[control:092a81882c6b5b9f72b6bcc49251b1a254a6f564c3ae61c87d2f13ce4b2a4c8e fs:4e574f9e00b4c0bfb4d16fe63e41cdda5c9c13f56c461be6aa3936c0d66d7d71 fs_progress:hercules1: 196.87MB (6.85MB/s)]" Resources="map[instances:[/1.0/instances/hercules1]]" Status=Running StatusCode=Running UpdatedAt="2025-12-03 08:56:31.011328362 -0300 -03"
time="2025-12-03T08:56:32-03:00" level=debug msg="Updated metadata for operation" class=websocket description="Creating instance" operation=8febefff-cff8-4226-a9b0-cf2c45584186 project=default 
time="2025-12-03T08:56:32-03:00" level=info msg="ID: 8febefff-cff8-4226-a9b0-cf2c45584186, Class: websocket, Description: Creating instance" CreatedAt="2025-12-03 08:56:01.991523332 -0300 -03" Err= Location=none MayCancel=false Metadata="map[control:092a81882c6b5b9f72b6bcc49251b1a254a6f564c3ae61c87d2f13ce4b2a4c8e fs:4e574f9e00b4c0bfb4d16fe63e41cdda5c9c13f56c461be6aa3936c0d66d7d71 fs_progress:hercules1: 203.16MB (6.83MB/s)]" Resources="map[instances:[/1.0/instances/hercules1]]" Status=Running StatusCode=Running UpdatedAt="2025-12-03 08:56:32.018170241 -0300 -03"
time="2025-12-03T08:56:33-03:00" level=debug msg="Updated metadata for operation" class=websocket description="Creating instance" operation=8febefff-cff8-4226-a9b0-cf2c45584186 project=default 
time="2025-12-03T08:56:33-03:00" level=info msg="ID: 8febefff-cff8-4226-a9b0-cf2c45584186, Class: websocket, Description: Creating instance" CreatedAt="2025-12-03 08:56:01.991523332 -0300 -03" Err= Location=none MayCancel=false Metadata="map[control:092a81882c6b5b9f72b6bcc49251b1a254a6f564c3ae61c87d2f13ce4b2a4c8e fs:4e574f9e00b4c0bfb4d16fe63e41cdda5c9c13f56c461be6aa3936c0d66d7d71 fs_progress:hercules1: 206.69MB (6.72MB/s)]" Resources="map[instances:[/1.0/instances/hercules1]]" Status=Running StatusCode=Running UpdatedAt="2025-12-03 08:56:33.048911324 -0300 -03"
time="2025-12-03T08:56:34-03:00" level=info msg="ID: 8febefff-cff8-4226-a9b0-cf2c45584186, Class: websocket, Description: Creating instance" CreatedAt="2025-12-03 08:56:01.991523332 -0300 -03" Err= Location=none MayCancel=false Metadata="map[control:092a81882c6b5b9f72b6bcc49251b1a254a6f564c3ae61c87d2f13ce4b2a4c8e fs:4e574f9e00b4c0bfb4d16fe63e41cdda5c9c13f56c461be6aa3936c0d66d7d71 fs_progress:hercules1: 214.89MB (6.76MB/s)]" Resources="map[instances:[/1.0/instances/hercules1]]" Status=Running StatusCode=Running UpdatedAt="2025-12-03 08:56:34.051639799 -0300 -03"
time="2025-12-03T08:56:34-03:00" level=debug msg="Updated metadata for operation" class=websocket description="Creating instance" operation=8febefff-cff8-4226-a9b0-cf2c45584186 project=default 
time="2025-12-03T08:56:35-03:00" level=debug msg="Updated metadata for operation" class=websocket description="Creating instance" operation=8febefff-cff8-4226-a9b0-cf2c45584186 project=default 
time="2025-12-03T08:56:35-03:00" level=info msg="ID: 8febefff-cff8-4226-a9b0-cf2c45584186, Class: websocket, Description: Creating instance" CreatedAt="2025-12-03 08:56:01.991523332 -0300 -03" Err= Location=none MayCancel=false Metadata="map[control:092a81882c6b5b9f72b6bcc49251b1a254a6f564c3ae61c87d2f13ce4b2a4c8e fs:4e574f9e00b4c0bfb4d16fe63e41cdda5c9c13f56c461be6aa3936c0d66d7d71 fs_progress:hercules1: 218.61MB (6.66MB/s)]" Resources="map[instances:[/1.0/instances/hercules1]]" Status=Running StatusCode=Running UpdatedAt="2025-12-03 08:56:35.094419719 -0300 -03"
time="2025-12-03T08:56:36-03:00" level=debug msg="Websocket: Sending barrier message" address="192.168.100.1:54600" 
time="2025-12-03T08:56:36-03:00" level=debug msg="Websocket: Got barrier message" address="192.168.100.1:54600" 
time="2025-12-03T08:56:36-03:00" level=debug msg="Receiving filesystem volume stopped" driver=dir path=/var/lib/incus/storage-pools/hercules1/containers/hercules1/ pool=hercules1 volName=hercules1 
time="2025-12-03T08:56:36-03:00" level=debug msg="CreateInstanceFromMigration finished" args="{IndexHeaderVersion:0 Name:hercules1 Description: Config:map[] Snapshots:[] MigrationType:{FSType:RSYNC Features:[xattrs delete compress]} TrackProgress:true Refresh:false RefreshExcludeOlder:false Live:false VolumeSize:0 ContentType: VolumeOnly:false ClusterMoveSourceName: StoragePool:hercules1}" driver=dir instance=hercules1 pool=hercules1 project=default
time="2025-12-03T08:56:36-03:00" level=debug msg="Sending migration success response to source" instance=hercules1 instanceType=container project=default success=true 
time="2025-12-03T08:56:36-03:00" level=debug msg="Migrate receive filesystem transfer finished" instance=hercules1 instanceType=container project=default 
time="2025-12-03T08:56:36-03:00" level=debug msg="Matched trusted cert" fingerprint=a6aa19aa50f7e049475d3c7bd9bf5e8ee80c30f3f8f8fa2c715e95e57f0c1b7c subject="CN=root@hercules,O=Linux Containers" 
time="2025-12-03T08:56:36-03:00" level=debug msg="Handling API request" ip="192.168.100.1:54604" method=DELETE protocol=tls url=/1.0/certificates/a6aa19aa50f7e049475d3c7bd9bf5e8ee80c30f3f8f8fa2c715e95e57f0c1b7c username=a6aa19aa50f7e049475d3c7bd9bf5e8ee80c30f3f8f8fa2c715e95e57f0c1b7c 
time="2025-12-03T08:56:36-03:00" level=debug msg="Matched trusted cert" fingerprint=a6aa19aa50f7e049475d3c7bd9bf5e8ee80c30f3f8f8fa2c715e95e57f0c1b7c subject="CN=root@hercules,O=Linux Containers" 
time="2025-12-03T08:56:36-03:00" level=debug msg="Refreshing trusted certificate cache" 
time="2025-12-03T08:56:36-03:00" level=info msg="Action: certificate-deleted, Source: /1.0/certificates/a6aa19aa50f7e049475d3c7bd9bf5e8ee80c30f3f8f8fa2c715e95e57f0c1b7c, Requestor: tls/a6aa19aa50f7e049475d3c7bd9bf5e8ee80c30f3f8f8fa2c715e95e57f0c1b7c (192.168.100.1)" 
time="2025-12-03T08:56:36-03:00" level=debug msg="Migration receive stopped" instance=hercules1 instanceType=container project=default 
time="2025-12-03T08:56:36-03:00" level=debug msg="Event listener server handler stopped" listener=5706bb3b-ee24-451f-8e37-1269ea56fb7c local="192.168.100.2:8443" remote="192.168.100.1:54592" 
time="2025-12-03T08:56:36-03:00" level=debug msg="Migrate receive control monitor finished" instance=hercules1 instanceType=container project=default 
time="2025-12-03T08:56:36-03:00" level=debug msg="Instance operation lock finished" action=create err="<nil>" instance=hercules1 project=default reusable=false 
time="2025-12-03T08:56:36-03:00" level=debug msg="Migration channels disconnected on target" clusterMoveSourceName= instance=hercules1 live=false project=default push=true 
time="2025-12-03T08:56:36-03:00" level=debug msg="Success for operation" class=websocket description="Creating instance" operation=8febefff-cff8-4226-a9b0-cf2c45584186 project=default 
time="2025-12-03T08:56:36-03:00" level=info msg="ID: 8febefff-cff8-4226-a9b0-cf2c45584186, Class: websocket, Description: Creating instance" CreatedAt="2025-12-03 08:56:01.991523332 -0300 -03" Err= Location=none MayCancel=false Metadata="map[control:092a81882c6b5b9f72b6bcc49251b1a254a6f564c3ae61c87d2f13ce4b2a4c8e fs:4e574f9e00b4c0bfb4d16fe63e41cdda5c9c13f56c461be6aa3936c0d66d7d71 fs_progress:hercules1: 218.61MB (6.66MB/s)]" Resources="map[instances:[/1.0/instances/hercules1]]" Status=Success StatusCode=Success UpdatedAt="2025-12-03 08:56:35.094419719 -0300 -03"

Let me know if I can do more tests.

Thanks for your time

Can you show incus console --show-log hercules1?

Hi Graber,

root@cerberus:~# incus console --show-log hercules1

root@cerberus:~# cat /var/log/incus/
base/ hercules/ hercules1/ incusd.log incusd.log.1 jetsoft/ painel/

root@cerberus:~# ls -l /var/log/incus/hercules1/lxc.log
-rw-r----- 1 root root 0 dez 3 16:50 /var/log/incus/hercules1/lxc.log

root@cerberus:~# ls -l /srv/containers/hercules1/containers/hercules1/rootfs/
total 88
drwxr-xr-x 2 root root 4096 set 8 2019 bin
drwxr-xr-x 3 root root 4096 fev 24 2020 boot
lrwxrwxrwx 1 root root 17 abr 22 2016 dados → /srv/samba/dados/
drwxr-xr-x 4 root root 4096 fev 19 2016 dev
drwxr-x--x 138 root root 12288 dez 3 13:47 etc
drwxr-xr-x 2 root root 4096 fev 19 2016 home
lrwxrwxrwx 1 root root 30 fev 11 2020 initrd.img → boot/initrd.img-4.9.0-12-amd64
lrwxrwxrwx 1 root root 30 fev 11 2020 initrd.img.old → boot/initrd.img-4.9.0-11-amd64
drwxr-xr-x 17 root root 4096 jul 2 2019 lib
drwxr-xr-x 2 root root 4096 fev 25 2019 lib64
drwx------ 2 root root 4096 fev 19 2016 lost+found
drwxr-xr-x 3 root root 4096 fev 19 2016 media
drwxr-xr-x 2 root root 4096 fev 19 2016 mnt
drwxr-xr-x 2 root root 4096 fev 19 2016 opt
drwxr-xr-x 2 root root 4096 ago 26 2015 proc
drwx------ 8 root root 4096 dez 3 13:47 root
drwxr-xr-x 2 root root 4096 fev 19 2016 run
drwxr-xr-x 2 root root 4096 fev 8 2020 sbin
drwxr-xr-x 4 root root 4096 fev 19 2016 srv
drwxr-xr-x 2 root root 4096 abr 6 2015 sys
drwxr-xr-x 2 root root 4096 fev 19 2016 tmp
drwxr-xr-x 2 root root 4096 fev 19 2016 usr
drwxr-xr-x 3 root root 4096 jul 15 2018 var
lrwxrwxrwx 1 root root 27 fev 11 2020 vmlinuz → boot/vmlinuz-4.9.0-12-amd64
lrwxrwxrwx 1 root root 27 fev 11 2020 vmlinuz.old → boot/vmlinuz-4.9.0-11-amd64

and there is nothing below usr in hercules1…

root@cerberus:~# ls -lR /srv/containers/hercules1/containers/hercules1/rootfs/usr/
/srv/containers/hercules1/containers/hercules1/rootfs/usr/:
total 0

best regards

Can you show cat /proc/self/mountinfo on the source system?

It looks a lot like your source system is using multiple mounts and you didn’t pass those to incus-migrate, leading to a partial filesystem on the target.

Hmm, here:

root@cerberus:~# cat /proc/self/mountinfo
25 31 0:23 / /sys rw,nosuid,nodev,noexec,relatime shared:8 - sysfs sysfs rw
26 31 0:24 / /proc rw,nosuid,nodev,noexec,relatime shared:14 - proc proc rw
27 31 0:5 / /dev rw,nosuid,relatime shared:3 - devtmpfs udev rw,size=8055088k,nr_inodes=2013772,mode=755,inode64
28 27 0:25 / /dev/pts rw,nosuid,noexec,relatime shared:4 - devpts devpts rw,gid=5,mode=620,ptmxmode=000
29 31 0:26 / /run rw,nosuid,nodev,noexec,relatime shared:6 - tmpfs tmpfs rw,size=1619884k,mode=755,inode64
30 25 0:27 / /sys/firmware/efi/efivars rw,nosuid,nodev,noexec,relatime shared:9 - efivarfs efivarfs rw
31 1 252:6 / / rw,relatime shared:1 - ext4 /dev/mapper/zeus-barra rw
32 31 252:7 / /usr rw,relatime shared:2 - ext4 /dev/disk/by-id/dm-uuid-LVM-d7dBIOTvxfOX7vpnhNDWD4nUhJIRxTaq49e1vQ0G5zWshggZeqchdIVJu4DmiLKb rw
33 25 0:6 / /sys/kernel/security rw,nosuid,nodev,noexec,relatime shared:10 - securityfs securityfs rw
34 27 0:28 / /dev/shm rw,nosuid,nodev shared:5 - tmpfs tmpfs rw,inode64
35 29 0:29 / /run/lock rw,nosuid,nodev,noexec,relatime shared:7 - tmpfs tmpfs rw,size=5120k,inode64
36 25 0:30 / /sys/fs/cgroup rw,nosuid,nodev,noexec,relatime shared:11 - cgroup2 cgroup2 rw
37 25 0:31 / /sys/fs/pstore rw,nosuid,nodev,noexec,relatime shared:12 - pstore pstore rw
38 25 0:32 / /sys/fs/bpf rw,nosuid,nodev,noexec,relatime shared:13 - bpf bpf rw,mode=700
39 26 0:33 / /proc/sys/fs/binfmt_misc rw,relatime shared:15 - autofs systemd-1 rw,fd=32,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=7268
40 27 0:20 / /dev/mqueue rw,nosuid,nodev,noexec,relatime shared:16 - mqueue mqueue rw
41 27 0:34 / /dev/hugepages rw,nosuid,nodev,relatime shared:17 - hugetlbfs hugetlbfs rw,pagesize=2M
42 25 0:7 / /sys/kernel/debug rw,nosuid,nodev,noexec,relatime shared:18 - debugfs debugfs rw
43 25 0:12 / /sys/kernel/tracing rw,nosuid,nodev,noexec,relatime shared:19 - tracefs tracefs rw
45 25 0:35 / /sys/fs/fuse/connections rw,nosuid,nodev,noexec,relatime shared:30 - fusectl fusectl rw
47 25 0:21 / /sys/kernel/config rw,nosuid,nodev,noexec,relatime shared:46 - configfs configfs rw
44 31 252:16 / /srv/html rw,relatime shared:20 - ext4 /dev/mapper/zeus-srvhtml rw
48 31 252:8 / /home rw,relatime shared:22 - ext4 /dev/mapper/zeus-home rw
51 31 252:9 / /srv/backup rw,relatime shared:24 - ext4 /dev/mapper/zeus-srvbackup rw
53 31 252:13 / /tmp rw,relatime shared:26 - ext4 /dev/mapper/zeus-tmp rw
55 51 252:0 / /srv/backup/backupcontainers rw,relatime shared:28 - ext4 /dev/mapper/dados-srvbackupbackupcontainers rw
57 31 252:10 / /var rw,relatime shared:31 - ext4 /dev/mapper/zeus-var rw
59 57 252:11 / /var/lib rw,relatime shared:33 - ext4 /dev/mapper/zeus-varlib rw
61 57 252:12 / /var/log rw,relatime shared:35 - ext4 /dev/mapper/zeus-varlog rw
63 59 252:14 / /var/lib/aide rw,relatime shared:37 - ext4 /dev/mapper/zeus-varlibaide rw
65 31 252:2 / /srv/containers/painel rw,relatime shared:39 - ext4 /dev/mapper/containers-painel rw
67 59 252:17 / /var/lib/ntopng rw,relatime shared:41 - ext4 /dev/mapper/zeus-varlibntopng rw
69 31 252:1 / /srv/containers/base rw,relatime shared:43 - ext4 /dev/mapper/containers-base rw
71 61 252:15 / /var/log/apache2 rw,relatime shared:45 - ext4 /dev/mapper/zeus-varlogapache2 rw
73 31 8:2 / /boot rw,relatime shared:58 - ext4 /dev/sda2 rw
75 73 8:1 / /boot/efi rw,relatime shared:74 - vfat /dev/sda1 rw,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro
77 31 252:4 / /srv/containers/hercules rw,relatime shared:76 - ext4 /dev/mapper/containers-hercules rw
79 31 252:3 / /srv/containers/jetsoft rw,relatime shared:78 - ext4 /dev/mapper/containers-jetsoft rw
81 31 252:5 / /srv/containers/hercules1 rw,relatime shared:80 - ext4 /dev/mapper/containers-hercules1 rw
83 39 0:36 / /proc/sys/fs/binfmt_misc rw,nosuid,nodev,noexec,relatime shared:82 - binfmt_misc binfmt_misc rw
89 59 0:42 / /var/lib/incus-lxcfs rw,nosuid,nodev,relatime shared:246 - fuse.lxcfs lxcfs rw,user_id=0,group_id=0,allow_other
87 59 0:41 / /var/lib/incus/devices rw,relatime shared:136 - tmpfs tmpfs rw,size=51200k,mode=711,inode64
139 59 0:55 / /var/lib/incus/shmounts rw,relatime shared:205 - tmpfs tmpfs rw,size=100k,mode=711,inode64
261 59 0:56 / /var/lib/incus/guestapi rw,relatime shared:219 - tmpfs tmpfs rw,size=100k,mode=755,inode64
275 59 252:2 / /var/lib/incus/storage-pools/painel rw,relatime shared:39 - ext4 /dev/mapper/containers-painel rw
289 59 252:4 / /var/lib/incus/storage-pools/hercules rw,relatime shared:76 - ext4 /dev/mapper/containers-hercules rw
303 59 252:3 / /var/lib/incus/storage-pools/jetsoft rw,relatime shared:78 - ext4 /dev/mapper/containers-jetsoft rw
362 59 252:1 / /var/lib/incus/storage-pools/base rw,relatime shared:43 - ext4 /dev/mapper/containers-base rw
879 59 252:5 / /var/lib/incus/storage-pools/hercules1 rw,relatime shared:80 - ext4 /dev/mapper/containers-hercules1 rw
895 59 252:0 / /var/lib/incus/storage-pools/backupcontainers rw,relatime shared:28 - ext4 /dev/mapper/dados-srvbackupbackupcontainers rw
141 29 0:83 / /run/user/1000 rw,nosuid,nodev,relatime shared:285 - tmpfs tmpfs rw,size=1619884k,nr_inodes=404971,mode=700,uid=1000,gid=1000,inode64

Just a reminder that I am using bin.linux.incus-migrate.x86_64 on Debian Stretch.

Sorry, I need the output on the source server, not the target.

Hi Graber,

Here it is, from the source Samba server (Debian Stretch):

root@hercules:~# cat /proc/self/mountinfo
16 22 0:16 / /sys rw,nosuid,nodev,noexec,relatime shared:8 - sysfs sysfs rw
17 22 0:4 / /proc rw,nosuid,nodev,noexec,relatime shared:13 - proc proc rw
18 22 0:6 / /dev rw,nosuid,relatime shared:3 - devtmpfs udev rw,size=1009028k,nr_inodes=252257,mode=755
19 18 0:17 / /dev/pts rw,nosuid,noexec,relatime shared:4 - devpts devpts rw,gid=5,mode=620,ptmxmode=000
20 22 0:18 / /run rw,nosuid,noexec,relatime shared:6 - tmpfs tmpfs rw,size=205216k,mode=755
22 0 253:0 / / rw,relatime shared:1 - ext4 /dev/mapper/zeus-barra rw,errors=remount-ro,data=ordered
23 22 253:2 / /usr rw,relatime shared:2 - ext4 /dev/mapper/zeus-usr rw,data=ordered
24 16 0:15 / /sys/kernel/security rw,nosuid,nodev,noexec,relatime shared:9 - securityfs securityfs rw
25 18 0:20 / /dev/shm rw,nosuid,nodev shared:5 - tmpfs tmpfs rw
26 20 0:21 / /run/lock rw,nosuid,nodev,noexec,relatime shared:7 - tmpfs tmpfs rw,size=5120k
27 16 0:22 / /sys/fs/cgroup ro,nosuid,nodev,noexec shared:10 - tmpfs tmpfs ro,mode=755
28 27 0:23 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:11 - cgroup cgroup rw,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd
29 16 0:24 / /sys/fs/pstore rw,nosuid,nodev,noexec,relatime shared:12 - pstore pstore rw
30 27 0:25 / /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:14 - cgroup cgroup rw,net_cls,net_prio
31 27 0:26 / /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:15 - cgroup cgroup rw,memory
32 27 0:27 / /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:16 - cgroup cgroup rw,freezer
33 27 0:28 / /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:17 - cgroup cgroup rw,devices
34 27 0:29 / /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:18 - cgroup cgroup rw,perf_event
35 27 0:30 / /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:19 - cgroup cgroup rw,cpu,cpuacct
36 27 0:31 / /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:20 - cgroup cgroup rw,cpuset
37 27 0:32 / /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:21 - cgroup cgroup rw,blkio
38 27 0:33 / /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:22 - cgroup cgroup rw,pids
39 17 0:34 / /proc/sys/fs/binfmt_misc rw,relatime shared:23 - autofs systemd-1 rw,fd=33,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=10937
40 16 0:7 / /sys/kernel/debug rw,relatime shared:24 - debugfs debugfs rw
41 18 0:14 / /dev/mqueue rw,relatime shared:25 - mqueue mqueue rw
42 18 0:35 / /dev/hugepages rw,relatime shared:26 - hugetlbfs hugetlbfs rw
43 20 0:36 / /run/rpc_pipefs rw,relatime shared:27 - rpc_pipefs sunrpc rw
76 22 253:6 / /home rw,relatime shared:28 - ext4 /dev/mapper/zeus-home rw,data=ordered
74 22 253:3 / /var rw,relatime shared:29 - ext4 /dev/mapper/zeus-var rw,data=ordered
73 22 253:4 / /srv/backup rw,relatime shared:30 - ext4 /dev/mapper/zeus-srvbackup rw,data=ordered
75 22 253:1 / /tmp rw,relatime shared:31 - ext4 /dev/mapper/zeus-tmp rw,data=ordered
82 74 253:7 / /var/www rw,relatime shared:32 - ext4 /dev/mapper/zeus-varwww rw,data=ordered
81 74 253:12 / /var/log/smb_audit rw,relatime shared:33 - ext4 /dev/mapper/zeus-varlogsmb_audit rw,data=ordered
85 74 253:5 / /var/lib rw,relatime shared:34 - ext4 /dev/mapper/zeus-varlib rw,data=ordered
84 74 253:11 / /var/log/samba rw,relatime shared:35 - ext4 /dev/mapper/zeus-varlogsamba rw,data=ordered
83 74 253:15 / /var/log/apache2 rw,relatime shared:36 - ext4 /dev/mapper/zeus-varlogapache2 rw,data=ordered
91 85 253:8 / /var/lib/mysql rw,relatime shared:37 - ext4 /dev/mapper/zeus-varlibmysql rw,data=ordered
90 85 253:13 / /var/lib/postgresql rw,relatime shared:38 - ext4 /dev/mapper/zeus-varlibpostgresql rw,data=ordered
92 85 253:14 / /var/lib/aide rw,relatime shared:39 - ext4 /dev/mapper/zeus-varlibaide rw,data=ordered
97 22 253:10 / /srv/samba rw,relatime shared:40 - ext4 /dev/mapper/zeus-srvsamba rw,quota,usrquota,grpquota,data=ordered
242 39 0:40 / /proc/sys/fs/binfmt_misc rw,relatime shared:181 - binfmt_misc binfmt_misc rw
247 20 0:41 / /run/user/1000 rw,nosuid,nodev,relatime shared:185 - tmpfs tmpfs rw,size=205212k,mode=700,uid=1000,gid=1000
root@hercules:~#

regards

Right, so that system seems to at least be using:

  • /
  • /usr
  • /home
  • /var
  • /srv/backup
  • /var/www
  • /var/log/smb_audit
  • /var/lib
  • /var/log/samba
  • /var/log/apache2
  • /var/lib/mysql
  • /var/lib/postgresql
  • /var/lib/aide
  • /srv/samba

All are separate mounts, so all must be provided as such when running incus-migrate or you’re going to end up with that data missing on the target.
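For future reference, the list of candidate mounts can be generated rather than read off the raw output by hand. This is only a sketch, not part of incus-migrate itself; it assumes the mountinfo layout documented in proc(5), where field 5 is the mount point and the filesystem type follows the lone "-" separator, and it simply prints every ext4 mount point:

```shell
# Print the mount point of every ext4 entry in /proc/self/mountinfo.
# Per proc(5): $5 is the mount point; optional fields start at $7 and
# are terminated by a single "-", with the filesystem type right after it.
awk '{
  fstype = ""
  for (i = 7; i <= NF; i++)
    if ($i == "-") { fstype = $(i + 1); break }
  if (fstype == "ext4") print $5
}' /proc/self/mountinfo
```

On the source system above this would print /, /usr, /home, /var and so on; everything except / then needs to be given to incus-migrate as an additional filesystem mount, or consciously skipped. Where util-linux is available, `findmnt -rn -t ext4 -o TARGET` gives the same list.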

Thanks, Graber. I think you solved my problem. For future reference, see below (I chose not to migrate the logs… 80)

Please enter the number of your choice: 1
Name of the new instance: hercules1
Please provide the path to a root filesystem: /
Do you want to add additional filesystem mounts? [default=no]: yes
Please provide a path the filesystem mount path [empty value to continue]: /usr
Please provide a path the filesystem mount path [empty value to continue]: /srv/backup
Please provide a path the filesystem mount path [empty value to continue]: /tmp
Please provide a path the filesystem mount path [empty value to continue]: /home
Please provide a path the filesystem mount path [empty value to continue]: /var
Please provide a path the filesystem mount path [empty value to continue]: /var/www
Please provide a path the filesystem mount path [empty value to continue]: /var/lib
Please provide a path the filesystem mount path [empty value to continue]: /var/lib/postgresql
Please provide a path the filesystem mount path [empty value to continue]: /var/lib/mysql
Please provide a path the filesystem mount path [empty value to continue]: /srv/samba
Please provide a path the filesystem mount path [empty value to continue]:

Instance to be created:
Name: hercules1
Project: default
Type: container
Source: /
Mounts:

  • /usr
  • /srv/backup
  • /tmp
  • /home
  • /var
  • /var/www
  • /var/lib
  • /var/lib/postgresql
  • /var/lib/mysql
  • /srv/samba

Now another problem appears, but I'll look into it after this one. It's a Samba setup with special permissions… 80)

Transferring instance: hercules1: 3.16GB (6.63MB/s)Error: Failed creating instance on target: Rsync receive failed: /var/lib/incus/storage-pools/hercules1/containers/hercules1/: [exit status 23] (rsync: [receiver] rsync_xal_set: lsetxattr("/var/lib/incus/storage-pools/hercules1/containers/hercules1/rootfs/var/lib/samba/sysvol","security.NTACL") failed: Operation not permitted (1)
rsync: [receiver] rsync_xal_set: lsetxattr("/var/lib/incus/storage-pools/hercules1/containers/hercules1/rootfs/var/lib/samba/sysvol/fragmentadoras.com.br","security.NTACL") failed: Operation not permitted (1)
rsync: [receiver] rsync_xal_set: lsetxattr("/var/lib/incus/storage-pools/hercules1/containers/hercules1/rootfs/var/lib/samba/sysvol/fragmentadoras.com.br/Policies","security.NTACL") failed: Operation not permitted (1)
rsync: [receiver] rsync_xal_set: lsetxattr("/var/lib/incus/storage-pools/hercules1/containers/hercules1/rootfs/var/lib/samba/sysvol/fragmentadoras.com.br/Policies/{31B2F340-016D-11D2-945F-00C04FB984F9}","security.NTACL") failed: Operation not permitted (1)
rsync: [receiver] rsync_xal_set: lsetxattr("/var/lib/incus/storage-pools/hercules1/containers/hercules1/rootfs/var/lib/samba/sysvol/fragmentadoras.com.br/Policies/{31B2F340-016D-11D2-945F-00C04FB984F9}/.GPT.INI.EdNwfO","security.NTACL") failed: Operation not permitted (1)

Thanks for all your help, Graber.

Hmm, at the end it gives these errors:

s/storage-pools/hercules1/containers/hercules1/rootfs/var/lib/samba/sysvol/fragmentadoras.com.br/Policies/{6AC1786C-016F-11D2-945F-00C04FB984F9}\",\"security.NTACL\") failed: Operation not permitted (1)\nrsync: [receiver] rsync_xal_set: lsetxattr(\"/var/lib/incus/storage-pools/hercules1/containers/hercules1/rootfs/var/lib/samba/sysvol/fragmentadoras.com.br/Policies/{6AC1786C-016F-11D2-945F-00C04FB984F9}/.GPT.INI.dhIHnP\",\"security.NTACL\") failed: Operation not permitted (1)\nrsync: [receiver] rsync_xal_set: lsetxattr(\"/var/lib/incus/storage-pools/hercules1/containers/hercules1/rootfs/var/lib/samba/sysvol/fragmentadoras.com.br/Policies/{6AC1786C-016F-11D2-945F-00C04FB984F9}/MACHINE\",\"security.NTACL\") failed: Operation not permitted (1)\nrsync: [receiver] rsync_xal_set: lsetxattr(\"/var/lib/incus/storage-pools/hercules1/containers/hercules1/rootfs/var/lib/samba/sysvol/fragmentadoras.com.br/Policies/{6AC1786C-016F-11D2-945F-00C04FB984F9}/USER\",\"security.NTACL\") failed: Operation not permitted (1)\nrsync: [receiver] rsync_xal_set: lsetxattr(\"/var/lib/incus/storage-pools/hercules1/containers/hercules1/rootfs/var/lib/samba/sysvol/fragmentadoras.com.br/scripts\",\"security.NTACL\") failed: Operation not permitted (1)\nrsync: [receiver] rsync_xal_set: lsetxattr(\"/var/lib/incus/storage-pools/hercules1/containers/hercules1/rootfs/var/lib/samba/sysvol/fragmentadoras.com.br/scripts/.login.bat.uzxM8V\",\"security.NTACL\") failed: Operation not permitted (1)\n)" Location=none MayCancel=false Metadata="map[control:276d258384c081fddd0f053c7b6b8fe7c1677f79dc2c6fd481fa34f5f1aa3e8f fs:0ec7f5bde7e85e01bf88308167fae36d8d478848e0ceba2974576f8aa0a79c07 fs_progress:hercules1: 201.02GB (21.53MB/s)]" Resources="map[instances:[/1.0/instances/hercules1]]" Status=Failure StatusCode=Failure UpdatedAt="2025-12-04 12:03:03.757393123 -0300 -03"

Hmm, I tried to search for information about this. I know it is related to xattrs and rsync, but on the target I configured the container with security.privileged: "yes" (I know that is not the best for security…)

And besides not creating the container hercules1, it deletes all the files that had been transferred from source to target 8-(((

If it at least left the copied files in place and created the container, I could try something else…

Is there any workaround for this?

regards

security.NTACL is likely an xattr that can't be directly set from userspace (by rsync).
At the time of the transfer, it doesn't make any difference whether the container is going to be privileged or not; the transfer happens as root on the receiving end anyway.

You may be able to somehow strip the xattrs on the source side prior to performing the migration.
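For reference, one way that could look is sketched below. This is only a sketch, not something incus-migrate does for you: it assumes the getfattr/setfattr tools from the attr package are installed, and that the offending attribute really is security.NTACL under /var/lib/samba, as the rsync errors suggest. Note that getfattr strips the leading "/" from absolute paths in its "# file:" headers, hence the "/$f" when removing:

```shell
# On the source, as root: list every file carrying the security.NTACL
# xattr, then strip that attribute from each of them.
# -R recurse, -h don't follow symlinks, -m match attribute names.
getfattr -Rhm security.NTACL /var/lib/samba 2>/dev/null |
  sed -n 's|^# file: ||p' |
  while IFS= read -r f; do
    setfattr -x security.NTACL "/$f"
  done
```

Re-running the first getfattr command afterwards should print nothing, at which point incus-migrate can be retried. If the source is a Samba AD DC, the sysvol ACLs can presumably be regenerated inside the new container afterwards (e.g. with samba-tool ntacl sysvolreset), but verify that against the Samba documentation for your version.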

Hi Graber.

There were only 4 files with security.NTACL. After removing them I was able to finish the migration…

Thanks very much for your time

sincerely
