LXD 4.0.4: moving a container with snapshots fails


LXD version: 4.0.4
Storage: ZFS

1. Create a container.
2. Create a snapshot.
3. lxc move instance --target ***
Error: Migration operation failure: Copy instance operation failed: Failed instance creation: Error transferring instance data: Create instance: Invalid devices: Failed detecting root disk device: No root device could be found
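For context on that final error: LXD refuses to create the instance on the target because it cannot find a root disk among the instance's expanded devices. The detection is, roughly, a scan for a device of type "disk" with path "/" (a simplified Python sketch of that behaviour, not the actual Go implementation):

```python
def find_root_disk(devices):
    """Mimic LXD's root disk detection: look for a device of
    type "disk" whose path is "/" among the expanded devices."""
    for name, dev in devices.items():
        if dev.get("type") == "disk" and dev.get("path") == "/":
            return name, dev
    raise LookupError("No root device could be found")

# The source instance's local devices (from the config dump below)
# do contain a root disk, so the detection only fails on the target:
devices = {
    "root": {"path": "/", "pool": "zfspool", "size": "1GB", "type": "disk"},
}
print(find_root_disk(devices)[0])  # root
```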

Please can you show the output of lxc config show <instance> as well as lxc profile show <profile> from the source and the target?

lxc config show ins-9a354e59064c40d0a49ba8845a8707ed
architecture: aarch64
config:
  image.architecture: arm64
  image.description: Centos 8-Stream arm64 (20220316_07:08)
  image.os: Centos
  image.release: 8-Stream
  image.serial: "20220316_07:08"
  image.type: squashfs
  limits.cpu: "2"
  limits.memory: 2048MB
  security.privileged: "true"
  user.meta-data: "#cloud-config\nlocal-hostname: "
  user.name: test
  user.user-data: |-
    #cloud-config
    chpasswd:
    list: |
                    root:P@ssw0rd
            expire: false
            ssh_pwauth: true
  volatile.base_image: b56c78ff2d6232260af95b519e29277930415206b91042f74ed4b9d4cc09c59c
  volatile.idmap.base: "0"
  volatile.idmap.current: '[]'
  volatile.idmap.next: '[]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
devices:
  'Bus 003 Device 004: ID 1d6b:0104':
    productid: "0104"
    type: usb
    vendorid: 1d6b
  root:
    path: /
    pool: zfspool
    size: 1GB
    type: disk
ephemeral: false
profiles: []
stateful: false
description: ""
lxc profile show default
config: {}
description: Default LXD profile for project 02b16d224a5e4710a3fbc0bcce8a27dc
devices: {}
name: default
used_by: []

I tried adding a root disk to the default profile with lxc profile device add default root disk path=/ pool=zfspool, but it had no effect.
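That is expected to change nothing here: LXD builds an instance's expanded devices by applying its profiles in listed order and then overlaying the instance's own local devices, and this instance has profiles: [] plus its own root disk. A simplified sketch of that merge (Python, mirroring the documented override order, not LXD's actual code):

```python
def expand_devices(profiles, local_devices):
    """Sketch of how LXD computes expanded devices: profile
    devices are applied in listed order, then the instance's
    local devices override same-named profile devices."""
    expanded = {}
    for profile_devices in profiles:
        expanded.update(profile_devices)
    expanded.update(local_devices)
    return expanded

# This instance lists no profiles and defines its own root disk,
# so adding a root device to the "default" profile changes nothing:
local = {"root": {"path": "/", "pool": "zfspool", "type": "disk"}}
print(expand_devices([], local) == local)  # True
```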

Can you show output of lxc config show <instance>/<snapshot>?

Also does it work if there is no snapshot on the instance?

Of course:

lxc config show ins-ed974686be4a4b2984db083fbeab92c5/zzz
architecture: aarch64
config:
  image.architecture: arm64
  image.description: Centos 8-Stream arm64 (20220316_07:08)
  image.os: Centos
  image.release: 8-Stream
  image.serial: "20220316_07:08"
  image.type: squashfs
  limits.cpu: "1"
  limits.memory: 1024MB
  security.privileged: "true"
  user.meta-data: "#cloud-config\nlocal-hostname: "
  user.name: ttt
  user.user-data: |-
    #cloud-config
    chpasswd:
    list: |
                    root:P@ssw0rd
            expire: false
            ssh_pwauth: true
  volatile.base_image: b56c78ff2d6232260af95b519e29277930415206b91042f74ed4b9d4cc09c59c
  volatile.idmap.base: "0"
  volatile.idmap.current: '[]'
  volatile.idmap.next: '[]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.tap4110b5fd-b0.host_name: tin4110b5fd-b0
  volatile.tap4110b5fd-b0.last_state.created: "false"
  volatile.tap4110b5fd-b0.last_state.hwaddr: 0e:c8:19:80:c9:e7
  volatile.tap4110b5fd-b0.last_state.mtu: "1500"
  volatile.tap4110b5fd-b0.name: eth0
devices:
  root:
    path: /
    pool: zfspool
    size: 1GB
    type: disk
ephemeral: false
profiles: []
expires_at: 0001-01-01T00:00:00Z

Without snapshots the move succeeds; with snapshots it fails in most cases.

ps -ef |grep ins-fe39e9ac24c84010967dc1b460c10c7b
root 2132812 2131748 0 18:30 pts/15 00:00:00 grep ins-fe39e9ac24c84010967dc1b460c10c7b
root 2787137 94630 0 Mar17 ? 00:00:00 zfs send -c -L zfspool/containers/02b16d224a5e4710a3fbc0bcce8a27dc_ins-fe39e9ac24c84010967dc1b460c10c7b@snapshot-rtrt

This zfs send process never exits, and the snapshot cannot be deleted.
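One plausible explanation for the lingering process (an assumption, not confirmed from the LXD source): zfs send writes into a pipe towards the target, and once the target aborts with the root-disk error nothing drains that pipe, so the sender blocks forever on a full pipe buffer; a snapshot with an active send is busy and cannot be destroyed. The blocking behaviour itself is easy to demonstrate with a stand-in producer:

```python
import subprocess
import sys
import time

# A Python one-liner stands in for the zfs send producer: it writes
# endlessly to stdout. With no reader draining the pipe, the OS pipe
# buffer fills, the producer blocks on write, and the process stays
# alive indefinitely - matching the stuck process in the ps output.
proc = subprocess.Popen(
    [sys.executable, "-c", "while True: print('y', flush=True)"],
    stdout=subprocess.PIPE,
)
time.sleep(0.5)              # give the pipe buffer time to fill
assert proc.poll() is None   # producer is still running, stuck on write
proc.kill()
proc.wait()
```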

Thanks, it shouldn’t be needed as your container has its own root disk (shown by lxc config show without the --expanded flag).

Can you try refreshing to the latest 4.0 LTS release, LXD 4.0.9?

snap refresh lxd --channel=4.0/stable

Which should get you:

4.0/stable: 4.0.9 2022-02-25 (22526) 71MB -

That will rule out if the issue has already been fixed in a later release.

Thanks, I’ll try it.