Recover Incus containers

I’m trying to recover from a system crash and reconnect the Incus LVM storage pool. I’ve installed Zabbly Incus. I’m trying to follow the recovery tutorial (How to recover or reconnect an Incus storage pool – Mi blog lah!) but I’m having trouble translating from ZFS to LVM. The VG name is lxd-pool. When I run “incus admin recover” I get the following:

# incus admin recover
This server currently has the following storage pools:
Would you like to recover another storage pool? (yes/no) [default=no]: yes
Name of the storage pool: lxd-pool
Name of the storage backend (btrfs, dir, lvm, truenas): lvm
Source of the storage pool (block device, volume group, dataset, path, … as applicable): /dev/lxd-pool
Additional storage pool configuration property (KEY=VALUE, empty when done):
Would you like to recover another storage pool? (yes/no) [default=no]:
The recovery process will be scanning the following storage pools:
NEW: "lxd-pool" (backend="lvm", source="/dev/lxd-pool")
Would you like to continue with scanning for lost volumes? (yes/no) [default=yes]: no
Scanning for unknown volumes…
Error: Failed validation request: Failed mounting pool "lxd-pool": Cannot mount pool as "lvm.vg_name" is not specified

One problem is that I don’t know what to specify for the source of the storage pool; /dev/lxd-pool doesn’t exist. If I specify /dev/nvme0n1p3, where the LVM PV is located, I get the same error message.

That should be the VG name, though the error you’re getting makes me think that we may be missing some LVM-specific logic in there.
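Given that the error complains about “lvm.vg_name”, one thing worth trying is to pass the key explicitly at the configuration prompt. This is only a sketch, assuming that prompt accepts the LVM driver keys (and that the thin pool key applies here, since your LV name is LXDThinPool):

```
# incus admin recover
Name of the storage pool: lxd-pool
Name of the storage backend (btrfs, dir, lvm, truenas): lvm
Source of the storage pool (block device, volume group, dataset, path, … as applicable): lxd-pool
Additional storage pool configuration property (KEY=VALUE, empty when done): lvm.vg_name=lxd-pool
Additional storage pool configuration property (KEY=VALUE, empty when done): lvm.thinpool_name=LXDThinPool
Additional storage pool configuration property (KEY=VALUE, empty when done):
```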

I’ve tried Source = lxd-pool and I get the same error.

I noticed that the LV name is LXDThinPool, not lxd-pool. Using that doesn’t make any difference.

Is there a way to mount a container’s LV?

Can you file an issue on the Incus GitHub repository (lxc/incus)?

As for mounting a container’s LV, you should be able to do it with lvchange -ay -ky VG/LV and then mount the entry from under /dev/VG/LV.

Created “Incus admin recover fails for LVM storage” (Issue #2823 · lxc/incus · GitHub).

The “lvchange” command doesn’t appear to work:

# lvchange -ay -ky lxd-pool/containers_fedora37

  Logical volume lxd-pool/containers_fedora37 changed.
  WARNING: Combining activation change with other commands is not advised.

# ls /dev/lxd-pool

ls: cannot access '/dev/lxd-pool': No such file or directory

lvdisplay doesn’t show any change in the LV.

(Splitting the lvchange into two commands to address the warning doesn’t make any difference.)

Try lvchange -ay --ignoreactivationskip VG/LV
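For reference, the full sequence to inspect the container’s filesystem would look something like this, as a root session (VG/LV names taken from your output above; the mount point is just an example):

```
# lvchange -ay --ignoreactivationskip lxd-pool/containers_fedora37
# mkdir -p /mnt/fedora37
# mount -o ro /dev/lxd-pool/containers_fedora37 /mnt/fedora37
# ... copy out what you need ...
# umount /mnt/fedora37
# lvchange -an lxd-pool/containers_fedora37
```

Mounting read-only avoids touching the recovered data while you copy it out.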

Thanks. That worked.

Can I tar the directory and use it with “incus import”?

Possibly. I think you’ll need to manually add an index.yaml though, as that’s something that’s present in backups but not in the instance folder.

I’ll give it a try. What’s in index.yaml? Can you give me an example?

stgraber@castiana:~$ incus create images:alpine/edge a1
Creating a1
stgraber@castiana:~$ incus export a1
Backup exported successfully!         
stgraber@castiana:~$ mkdir a1
stgraber@castiana:~$ cd a1
stgraber@castiana:~/a1$ tar zxf ../a1.tar.gz 
stgraber@castiana:~/a1$ cat backup/index.yaml 
name: a1
backend: zfs
pool: default
optimized: false
optimized_header: false
type: container
config:
  container:
    architecture: x86_64
    config:
      image.architecture: amd64
      image.description: Alpine edge amd64 (20260115_13:00)
      image.os: Alpine
      image.release: edge
      image.requirements.secureboot: "false"
      image.serial: "20260115_13:00"
      image.type: squashfs
      image.variant: default
      volatile.apply_template: create
      volatile.base_image: 08f4fdb05bbc429d0dc6a5c4f4b14b190f328349b4ccd8adb1138cd437e6ab1e
      volatile.cloud-init.instance-id: da6d2777-bdb4-4a2f-805f-4516fff65768
      volatile.eth0.hwaddr: 10:66:6a:0e:52:19
      volatile.idmap.base: "0"
      volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
      volatile.last_state.idmap: '[]'
      volatile.uuid: a6dc12b5-36a7-44de-b8ea-6f3d1cde6bea
      volatile.uuid.generation: a6dc12b5-36a7-44de-b8ea-6f3d1cde6bea
    devices: {}
    ephemeral: false
    profiles:
    - default
    stateful: false
    description: ""
    created_at: 2026-01-15T20:48:25.742689264Z
    expanded_config:
      image.architecture: amd64
      image.description: Alpine edge amd64 (20260115_13:00)
      image.os: Alpine
      image.release: edge
      image.requirements.secureboot: "false"
      image.serial: "20260115_13:00"
      image.type: squashfs
      image.variant: default
      volatile.apply_template: create
      volatile.base_image: 08f4fdb05bbc429d0dc6a5c4f4b14b190f328349b4ccd8adb1138cd437e6ab1e
      volatile.cloud-init.instance-id: da6d2777-bdb4-4a2f-805f-4516fff65768
      volatile.eth0.hwaddr: 10:66:6a:0e:52:19
      volatile.idmap.base: "0"
      volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
      volatile.last_state.idmap: '[]'
      volatile.uuid: a6dc12b5-36a7-44de-b8ea-6f3d1cde6bea
      volatile.uuid.generation: a6dc12b5-36a7-44de-b8ea-6f3d1cde6bea
    expanded_devices:
      eth0:
        name: eth0
        network: incusbr0
        type: nic
      root:
        path: /
        pool: default
        type: disk
    name: a1
    status: Stopped
    status_code: 102
    last_used_at: 1970-01-01T00:00:00Z
    location: none
    type: container
    project: default
  pool:
    config:
      source: castiana/incus
      volatile.initial_source: castiana/incus
      zfs.pool_name: castiana/incus
    description: ""
    name: default
    driver: zfs
    used_by: []
    status: Created
    locations:
    - none
  profiles:
  - config: {}
    description: Default Incus profile
    devices:
      eth0:
        name: eth0
        network: incusbr0
        type: nic
      root:
        path: /
        pool: default
        type: disk
    name: default
    used_by: []
    project: default
  volume:
    config: {}
    description: ""
    name: a1
    type: container
    used_by: []
    location: none
    content_type: filesystem
    project: default
    created_at: 2026-01-15T20:48:25.742689264Z
stgraber@castiana:~/a1$
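So one rough approach for the unrecoverable LVM pool would be to rebuild that layout by hand from the mounted LV: a backup/ directory containing index.yaml next to a container/ directory with the instance files, then tar it up for “incus import”. This is only a sketch; the names (fedora37, lxd-pool) come from this thread, and the index.yaml here is a minimal stub that would need to be fleshed out along the lines of the full example above:

```shell
# Sketch: assemble an import tarball from a recovered rootfs.
# Layout mirrors what "incus export" produces: backup/index.yaml plus
# backup/container/ holding the instance files. The field values below
# are guesses based on this thread; adjust them to match the full
# index.yaml example.
mkdir -p backup/container/rootfs

cat > backup/index.yaml <<'EOF'
name: fedora37
backend: lvm
pool: lxd-pool
optimized: false
optimized_header: false
type: container
EOF

# Copy the mounted LV contents into place (path is an example):
# cp -a /mnt/fedora37/rootfs/. backup/container/rootfs/

tar -czf fedora37.tar.gz backup
```

Then “incus import fedora37.tar.gz” would be the thing to try; if it rejects the stub index.yaml, compare it field by field against the exported example.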