Error doing an lxc copy

Hi
I’m trying to do an lxc copy between two hosts. At the end of the copy (I can see the 18G copied into the /containers/migration.996079115 folder on the target), I get the error message below. Suspecting a network issue at first, I retried twice with the same result.

Both hosts are using btrfs. The target disk is nearly empty. I’m not sure I understand why the message mentions ./boot.

What does that message mean? What can I do to fix it?
Thank you!

lxc copy tesct backupserver: --debug

Error: Failed instance creation:

  • https://10.0.0.2:8443: Error transferring instance data: Unable to connect to: 10.0.0.2:8443
  • https://192.168.1.15:8443: Error transferring instance data: migration pre-dump failed
    (00.029285) Warn (compel/arch/x86/src/lib/infect.c:281): Will restore 2789 with interrupted system call
    (00.031322) Warn (compel/arch/x86/src/lib/infect.c:281): Will restore 2790 with interrupted system call
    (00.149455) Warn (compel/arch/x86/src/lib/infect.c:281): Will restore 3230 with interrupted system call
    (00.220372) Warn (compel/arch/x86/src/lib/infect.c:281): Will restore 3637 with interrupted system call
    (00.408956) Error (criu/mount.c:1077): mnt: The file system 0x2d 0x2d (0x47) btrfs ./boot is inaccessible
    (00.408958) Error (criu/fsnotify.c:212): fsnotify: Can’t open mount for s_dev 2d, continue
    (00.408962) Error (criu/mount.c:1077): mnt: The file system 0x2d 0x2d (0x47) btrfs ./bin is inaccessible
    (00.408964) Error (criu/fsnotify.c:212): fsnotify: Can’t open mount for s_dev 2d, continue
    (00.408971) Error (criu/mount.c:1077): mnt: The file system 0x2d 0x2d (0x47) btrfs ./ is inaccessible
    (00.408972) Error (criu/fsnotify.c:212): fsnotify: Can’t open mount for s_dev 2d, continue
    (00.408984) Error (criu/mount.c:1077): mnt: The file system 0x2d 0x2d (0x47) btrfs ./boot is inaccessible
    (00.408985) Error (criu/fsnotify.c:212): fsnotify: Can’t open mount for s_dev 2d, continue
    (00.408988) Error (criu/mount.c:1077): mnt: The file system 0x2d 0x2d (0x47) btrfs ./bin is inaccessible
    (00.408990) Error (criu/fsnotify.c:212): fsnotify: Can’t open mount for s_dev 2d, continue
    (00.408997) Error (criu/mount.c:1077): mnt: The file system 0x2d 0x2d (0x47) btrfs ./ is inaccessible
    (00.408999) Error (criu/fsnotify.c:212): fsnotify: Can’t open mount for s_dev 2d, continue
    (00.409012) Error (criu/mount.c:1077): mnt: The file system 0x2d 0x2d (0x47) btrfs ./ is inaccessible
    (00.409014) Error (criu/fsnotify.c:212): fsnotify: Can’t open mount for s_dev 2d, continue
    (00.409015) Warn (criu/fsnotify.c:288): fsnotify: Handle 0x2d:0x7be0f cannot be opened
    (00.418990) Error (criu/irmap.c:86): irmap: Can’t stat /var/lib/polkit-1/localauthority: No such file or directory
    (00.419071) Error (criu/irmap.c:86): irmap: Can’t stat /no-such-path: No such file or directory
    (00.419072) Error (criu/fsnotify.c:291): fsnotify: Can’t dump that handle
    (00.419073) Error (criu/irmap.c:360): irmap: Failed to resolve 2d:7be0f
    (00.419088) Error (criu/cr-dump.c:1567): Pre-dumping FAILED.

You’re attempting live-migration which is currently extremely unreliable due to limitations in CRIU and the glue code we have for it in liblxc.

We have some work planned for the next 6 months to make this much more reliable.

For now, if you absolutely must use the feature, you’re going to have to remove all network devices from the container prior to migration (yeah, that’s very annoying) and add them back on the target. Additionally, any source container that’s running systemd is likely to fail migrating.

If you don’t care about live migration, just pass --stateless to that copy, which should then work just fine.
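For illustration, a stateless copy might look like the following. This is only a sketch: the container name and remote name are placeholders, and the command assumes the remote has already been added with lxc remote add.

```shell
# Stateless copy: transfers the container's filesystem (and snapshots)
# but skips the running (live) state, so CRIU is never involved.
# "mycontainer" and "backupserver" are placeholder names.
lxc copy mycontainer backupserver: --stateless
```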

Hi Stephane,
I’m a little lost with “lxc copy”, snapshots, and “copy --refresh”.

The CT to backup has regular snapshots created (let’s say snapshots S0, S1, … Sn)

  1. Which command should I issue to remotely back up S0…Sn, without the live state, and if possible without stopping the CT? Is that “lxc copy CT remoteserver: --stateless”, or should I copy snapshot by snapshot?

  2. In the future, when I have more snapshots (S0…Sm with m>n), what command should I use?

Thank you

--refresh --stateless
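Spelled out, assuming the container is called CT and the remote is backupserver (names taken from the question above), the workflow would look something like this sketch:

```shell
# Initial backup: copies the container and its existing snapshots,
# without the live state (no CRIU, no need to stop the container).
lxc copy CT backupserver: --stateless

# Subsequent backups: --refresh updates the existing remote copy,
# transferring only new or changed data, so snapshots added since
# the last run are sent incrementally.
lxc copy CT backupserver: --refresh --stateless
```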

Thank you!