ZFS pool unavailable after lxd-to-incus migration

Hi, I’ve been unable to start an Incus container after running the lxd-to-incus migration because its storage pool is unavailable. The ZFS pool was migrated from LXD, but it doesn’t show up in zpool list. I then ran incus admin init to see whether a new “default” pool would be created and picked up by zpool, and it is, but cPool still isn’t.

# incus start jammyTest
Error: Storage pool "cPool" unavailable on this server
# incus storage list
+---------+--------+----------------------------------+-------------+---------+-------------+
|  NAME   | DRIVER |              SOURCE              | DESCRIPTION | USED BY |    STATE    |
+---------+--------+----------------------------------+-------------+---------+-------------+
| cPool   | zfs    | /var/lib/incus/disks/cPool.img   |             | 6       | UNAVAILABLE |
+---------+--------+----------------------------------+-------------+---------+-------------+
| default | zfs    | /var/lib/incus/disks/default.img |             | 0       | CREATED     |
+---------+--------+----------------------------------+-------------+---------+-------------+
# zpool list
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
default  7.50G   633K  7.50G        -         -     0%     0%  1.00x    ONLINE  -
# 

When trying to import “cPool” I get an error stating the pool does not exist:

# zpool import -dFmn /dev/disk/by-partuuid/device-address/var/lib/incus/disks/cPool.img
cannot import '/dev/disk/by-partuuid/device-address/var/lib/incus/disks/cPool.img': no such pool available

Anyone able to assist?
Thanks

Can you show the output of incus storage show cPool as well as ls -lh /var/lib/incus/disks/?

Assuming cPool.img does in fact exist in /var/lib/incus/disks/, try manually running:

zpool import -f -d /var/lib/incus/disks/ cPool
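
If the pool name is in doubt, the same command without a pool name should just list whatever pools ZFS can find in that directory, without importing anything:

# zpool import -d /var/lib/incus/disks/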

Here is the output:

# incus storage show cPool
config:
  size: 20GiB
  source: /var/lib/incus/disks/cPool.img
  zfs.pool_name: cPool
description: ""
name: cPool
driver: zfs
used_by:
- /1.0/instances/jammyTest
- /1.0/instances/nexus
- /1.0/instances/pbx3
- /1.0/instances/pve
- /1.0/profiles/ISOvm
- /1.0/profiles/default
status: Unavailable
locations:
- none

# ls -lh /var/lib/incus/disks/ 
total 2.0G
-rw------- 1 root root  20G Feb 12 02:12 cPool.img
-rw------- 1 root root 8.0G Feb 11 16:39 default.img
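
Side note: the 2.0G total against 20G + 8.0G of apparent file sizes just means the image files are sparse. If you want to confirm that, comparing allocated and apparent usage with du should show the difference:

# du -h /var/lib/incus/disks/
# du -h --apparent-size /var/lib/incus/disks/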

Now the import attempt:

# zpool import -f -d /var/lib/incus/disks/ cPool
cannot import 'cPool': I/O error
        Recovery is possible, but will result in some data loss.
        Returning the pool to its state as of Thu Oct 12 13:35:53 2023
        should correct the problem.  Approximately 5 minutes of data
        must be discarded, irreversibly.  After rewind, at least
        one persistent user-data error will remain.  Recovery can be attempted
        by executing 'zpool import -F cPool'.  A scrub of the pool
        is strongly recommended after recovery.
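
For reference, combined with -F the -n flag only checks whether the rewind would succeed and does not modify the pool, so a cautious dry run here would look something like:

# zpool import -nF -d /var/lib/incus/disks/ cPool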

# zpool import -Ff -d /var/lib/incus/disks/ cPool
# zpool list
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
cPool    19.5G  1.77G  17.7G        -         -     3%     9%  1.00x    ONLINE  -
default  7.50G   633K  7.50G        -         -     0%     0%  1.00x    ONLINE  -

# incus start jammyTest
# incus list
+-----------+---------+------+-------------+-----------------+-----------+
|   NAME    |  STATE  | IPV4 |    IPV6     |      TYPE       | SNAPSHOTS |
+-----------+---------+------+-------------+-----------------+-----------+
| jammyTest | RUNNING |      | 4c39 (eth0) | VIRTUAL-MACHINE | 0         |
+-----------+---------+------+-------------+-----------------+-----------+
| nexus     | STOPPED |      |             | VIRTUAL-MACHINE | 0         |
+-----------+---------+------+-------------+-----------------+-----------+
| pbx3      | RUNNING |      | 1f71 (eth0) | VIRTUAL-MACHINE | 0         |
+-----------+---------+------+-------------+-----------------+-----------+
| pve       | STOPPED |      |             | VIRTUAL-MACHINE | 0         |
+-----------+---------+------+-------------+-----------------+-----------+
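
Since the recovery message strongly recommends it, a scrub is worth running now that the pool is imported; once it finishes, zpool status -v should show whether the remaining persistent user-data error affects any file that matters:

# zpool scrub cPool
# zpool status -v cPool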

The issue is fixed!
Thanks a great deal!