Error when moving container(s) to other project

I am getting a strange error when I try to move containers to another project. I have three containers, and all of them give the same error when I try to move them, stopped, to the other project:

Failed creating instance record: Unknown configuration key: volatile.apply_nvram

I have no idea what that means, as none of these containers has this config key defined anywhere. Here is the config of one of them:

architecture: x86_64
config:
  image.architecture: amd64
  image.description: Alpine edge amd64 (20260302_13:00)
  image.os: Alpine
  image.release: edge
  image.requirements.secureboot: 'false'
  image.serial: '20260302_13:00'
  image.type: squashfs
  image.variant: default
  volatile.base_image: 0d9ff68f8505bf3c0da75fbd6ed9f9d58e571ed105bcbdc8ef182b6c240a3245
  volatile.cloud-init.instance-id: fc7729d7-6942-4d89-96cb-ed0253439823
  volatile.eth0.hwaddr: 10:66:6a:1b:2c:a0
  volatile.eth0.name: eth0
  volatile.idmap.base: '0'
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: STOPPED
  volatile.uuid: 4ebb4888-704d-48db-8634-5fbf940bacf2
  volatile.uuid.generation: 4ebb4888-704d-48db-8634-5fbf940bacf2
devices: {}
ephemeral: false
profiles:
  - default
  - prod
stateful: false
description: ''
created_at: '2026-03-03T22:02:45.524594486Z'
name: openerp
status: Stopped
status_code: 102
last_used_at: '2026-03-04T11:15:53.678688843Z'
location: incus2
type: container
project: default

And here are the two profiles that are applied:

name: default
description: Basis config
devices:
  root:
    path: /
    pool: default
    type: disk
config:
  security.secureboot: 'false'
project: default

name: prod
description: Server Instances (VLAN 50)
devices:
  eth0:
    nictype: bridged
    parent: vmbr0
    type: nic
    vlan: '50'
config: {}
project: default

Looks like I’m experiencing a similar error when trying to copy one of my containers.

╭─root at arch-laptop in /var/lib/incus/
╰─○ incus list -c nsum4Nt
+------------+---------+-----------+--------------+----------------------+-----------+-----------------+
|    NAME    |  STATE  | CPU USAGE | MEMORY USAGE |         IPV4         | PROCESSES |      TYPE       |
+------------+---------+-----------+--------------+----------------------+-----------+-----------------+
| d13-cc     | RUNNING | 1s        | 96.65MiB     | 10.218.57.76 (eth0)  | 13        | CONTAINER       |
+------------+---------+-----------+--------------+----------------------+-----------+-----------------+
| d13-oc     | RUNNING | 7s        | 669.73MiB    | 10.218.57.212 (eth0) | 59        | CONTAINER       |
+------------+---------+-----------+--------------+----------------------+-----------+-----------------+
| rk9-vm     | STOPPED |           |              |                      |           | VIRTUAL-MACHINE |
+------------+---------+-----------+--------------+----------------------+-----------+-----------------+
╭─root at arch-laptop in /var/lib/incus/
╰─○ incus copy d13-oc d13-oc-wc
Error: Failed creating instance record: Unknown configuration key: volatile.apply_nvram
╭─root at arch-laptop in /var/lib/incus/
╰─○ incus config show d13-oc
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Debian trixie amd64 (20260205_05:24)
  image.os: Debian
  image.release: trixie
  image.serial: "20260205_05:24"
  image.type: squashfs
  image.variant: cloud
  volatile.base_image: 2a5984a45bb5cc348488481b090a9eda49bdcc4d1668bedc49ee06f5e3f08ab2
  volatile.cloud-init.instance-id: 5ab5cf65-f417-409d-8742-fc23720dff61
  volatile.eth0.host_name: vethafdfd885
  volatile.eth0.hwaddr: 10:66:6a:f4:4a:9e
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.uuid: cf53f7ff-82e5-4127-9223-0af3d080368f
  volatile.uuid.generation: cf53f7ff-82e5-4127-9223-0af3d080368f
devices: {}
ephemeral: false
profiles:
- default
stateful: false
description: ""

Update:

After unsetting a config key in the default profile, the issue is resolved.

╭─root at arch-laptop in /var/lib/incus/
╰─○ incus profile show default
config:
  security.secureboot: "false"
description: Default Incus profile
devices:
  eth0:
    name: eth0
    network: br-incus0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
- /1.0/instances/arch-proxy
- /1.0/instances/rk9-vm
- /1.0/instances/d13-cc
- /1.0/instances/d13-oc
project: default
╭─root at arch-laptop in /var/lib/incus/
╰─○ incus profile unset default security.secureboot
╭─root at arch-laptop in /var/lib/incus/
╰─○ incus profile show default
config: {}
description: Default Incus profile
devices:
  eth0:
    name: eth0
    network: br-incus0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
- /1.0/instances/arch-proxy
- /1.0/instances/rk9-vm
- /1.0/instances/d13-cc
- /1.0/instances/d13-oc
- /1.0/instances/d13-oc-wc
project: default

Yes, that also works for my issue. So the workflow would be to remove security.secureboot, copy or move the container(s), and then restore the secureboot setting.
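For reference, that workaround can be sketched as a few CLI commands. This is just a summary of the steps discussed above; the profile, instance, and project names are taken from this thread (the target project name is a placeholder), so adjust them for your setup:

```shell
# Temporarily drop the VM-only key from the profile
incus profile unset default security.secureboot

# The copy (or the move to another project) should now succeed
incus copy d13-oc d13-oc-wc
# or, for the project move:
# incus move openerp --target-project my-other-project

# Restore the original setting afterwards
incus profile set default security.secureboot=false
```

Note that the last step puts the key back exactly as it was before (set to 'false'), so the VMs using the profile keep their Secure Boot behavior.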

To me that still looks like a bug. Should we report it somewhere on a bug tracker?

Yep, that might be a bug. A VM-related config key (if that is really what it is) is not supposed to affect copying containers, IMHO.
I did a little research when the issue happened, but Google turned up nothing beyond this post. I figured there must be something different in the config causing this weird behavior, since it doesn’t occur on my several other hosts - so it had to be either in the container config or the profile - and bingo.