I have a VM into which I’ve passed several HDDs (actually, I’m passing through the PCI device that controls them). In the VM, I’ve created a ZFS pool using those drives.
The problem: when IncusOS boots, it attempts to import that pool. The import fails (no keys), and IncusOS hangs with an error to that effect. In this particular case the HDDs are in an external chassis that can be powered off during boot, which allows IncusOS to boot. I can then power the chassis back on, start the appropriate instance, and all is well.
How can I mask those drives, the PCI device controlling them, or the ZFS pool during IncusOS boot?
Edit: I’ve also just noticed that, after all of the above, running the command:
incus admin os system show resources
results in this error (and no resource list):
Error: Failed to retrieve storage information: Failed to find "/dev/disk/by-id/ata-ST6000NM0115-1YZ110_ZAD4YQJ4": lstat /dev/sda: no such file or directory
Seeing the actual log messages would be very helpful here.
If you’re seeing WARN Refusing to import unencrypted ZFS pool 'mypool', that’s simply a warning that IncusOS is refusing to import an existing unencrypted ZFS pool. Other than the data on that pool not being available, the system will work as normal.
On the other hand, if you’re seeing some sort of ERROR about loading an encryption key or otherwise importing a ZFS pool, then something’s going wrong. IncusOS will only attempt to decrypt ZFS pools for which it currently has a raw encryption key stored on the local system drive.
The error you’re getting when attempting to get a list of the system resources indicates that the symlink under /dev/disk/by-id/ has become broken. I’d guess a combination of externally powering the drive controller off/on and/or udev not properly triggering is responsible for that.
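A rough sketch of how I’d check that on a regular Linux box (I’m not sure how much of this IncusOS exposes to you, so treat it as the generic approach rather than anything IncusOS-specific; the by-id path is the one from your error message):

```
# Does the by-id symlink still point at an existing device node?
ls -l /dev/disk/by-id/ata-ST6000NM0115-1YZ110_ZAD4YQJ4

# Ask udev to re-process block devices and wait for it to finish,
# which should recreate any symlinks that went stale after the
# controller was powered off and back on.
udevadm trigger --subsystem-match=block --action=add
udevadm settle
```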
[2025/11/17 14:55:49 MST] incus-osd: 2025-11-17 21:55:49 INFO Bringing up the local storage
[2025/11/17 14:55:50 MST] incus-osd: 2025-11-17 21:55:50 ERROR Failed to run: zpool import -a: exit status 1 (cannot import 'znas': pool was previously in use from another system.
[2025/11/17 14:55:50 MST] incus-osd: Last accessed by nas (hostid=2d2b92fe) at Mon Nov 17 17:36:49 2025
[2025/11/17 14:55:50 MST] incus-osd: The pool can be imported, use 'zpool import -f' to import the pool.)
[2025/11/17 14:55:51 MST] systemd: incus-osd.service: Main process exited, code=exited, status=1/FAILURE
I can post the entire log if needed. (This is from incus admin os debug log; I’m not aware of any other options available to me right now, since I have no remote logging server.)
It is an error importing the encrypted pool (from the VM “nas”).
When this happens, IncusOS restarts, hits the same error again, restarts again, and so on, until the external chassis is powered off; then boot continues “normally”.
Yeah, so you need to export the zpool first if you want to move it between systems. (Run zpool export znas, then starting IncusOS should work.) I’m pretty sure that “sharing” a zpool between IncusOS and another system across power cycles is outside of our planned usage.
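For reference, a minimal sketch of that hand-off (plain ZFS commands, nothing IncusOS-specific; znas is the pool name from your log, and where exactly you run the export depends on which system currently has the pool imported):

```
# On whichever system last imported the pool (here, the "nas" VM):
zpool export znas    # cleanly releases the pool and clears the "in use by another host" state

# The next system to boot can then import it without needing -f:
zpool import znas
```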
It gives us the best view of all the ZFS pools that are present. At the moment the storage API will return details about all zpools, even if not managed, which can be useful. We could adjust things so we only try to import a pool if we have its encryption key, but the issue that @gringo encountered would still happen when attempting to import an existing encrypted pool via the API, since the zpool import would be pushed down to that point in the code before encountering the error. (I don’t want to add -f to force import pools, since that potentially could be dangerous.)
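For anyone following along, the distinction in plain ZFS terms (generic commands, not IncusOS-specific, shown only to illustrate why forcing imports automatically isn’t attractive):

```
# With no pool name: scan attached devices and list importable pools,
# without actually importing anything.
zpool import

# Normal import: refuses if the pool still looks in use by another host
# (the hostid check seen in the log above).
zpool import znas

# Forced import: overrides that safety check. Dangerous if the other
# system really does still have the pool imported.
zpool import -f znas
```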
I’m not sharing it between systems. The ZNAS pool is only used inside a VM.
The problem is that IncusOS is trying to import it when it shouldn’t be (or, at least, when I don’t want it to :-).
The underlying problem is that the HDDs used for the pool are attached to the host and passed into the VM via PCI passthrough. So, when IncusOS boots, it sees the disks and tries to import the pool.
If we could mask specific pools, disks by ID, or PCI devices from IncusOS, that would solve, or at least work around, the issue.