Unable to import ISO (zfs, zvol issue)

I have Incus installed and have been able to run containers and VMs successfully. I also have the web UI installed and running. My backend is ZFS, and Incus has been given a dataset for its use.

When I try to import an ISO, I get an error:

$ incus storage volume import default archlinux-2024.08.03-x86_64.iso archlinux-2024.08.03-x86_64 --type=iso
Error: Failed creating custom volume from ISO: Failed creating volume: Failed to activate volume: Failed to locate zvol for "zos/incus/custom/default_archlinux-2024.08.03-x86_64.iso": context deadline exceeded

I can also see this in the journal:

(udev-worker)[5748]: zd32: Failed to remove/update device symlink '/dev/disk/by-diskseq/16', ignoring: Inappropriate ioctl for device

If I try with sudo, then it tries to write the ISO as a regular file under /dev:

$ sudo incus storage volume import default archlinux-2024.08.03-x86_64.iso archlinux-2024.08.03-x86_64 --type=iso
Error: Failed creating custom volume from ISO: Failed creating volume: write /dev/zvol/zos/incus/custom/default_archlinux-2024.08.03-x86_64.iso: no space left on device

and the journal shows:

zd32: Failed to remove/update device symlink '/dev/zvol/zos/incus/custom/default_archlinux-2024.08.03-x86_64.iso', ignoring: Inappropriate ioctl for device

My pool is called zos and Incus has been given the zos/incus dataset. My Incus version is 6.4 and my ZFS is zfs-2.2.4-1, running on Arch Linux.

I’m not sure what the problem is here; everything else system-wide appears to be working as it should, and the instances I have running are also fine as far as I can tell. I’m not sure whether this is an issue with Incus or ZFS.

Any pointers to help troubleshoot this?

There’s a message in there that says “No space left on device”.

Here’s me trying this out, importing an ISO as a custom storage volume when Incus is using ZFS from a dataset.

$ incus launch images:ubuntu/24.04/cloud incusZFSdataset --vm
Launching incusZFSdataset
$ incus ubuntu incusZFSdataset
Error: VM agent isn't currently running
$ incus ubuntu incusZFSdataset
sudo: unknown user ubuntu
sudo: error initializing audit plugin sudoers_audit
$ incus ubuntu incusZFSdataset
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@incusZFSdataset:~$ sudo apt install -y incus zfsutils-linux wget 
...
ubuntu@incusZFSdataset:~$ sudo fallocate -l 5G /VOLUME
ubuntu@incusZFSdataset:~$ sudo losetup /dev/loop2 /VOLUME 
ubuntu@incusZFSdataset:~$ sudo zpool create zfsdisk /dev/loop2
ubuntu@incusZFSdataset:~$ zfs list
NAME      USED  AVAIL  REFER  MOUNTPOINT
zfsdisk   102K  4.36G    24K  /zfsdisk
ubuntu@incusZFSdataset:~$ sudo incus admin init
Would you like to use clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (dir, zfs) [default=zfs]: 
Create a new ZFS pool? (yes/no) [default=yes]: no
Name of the existing ZFS pool or dataset: zfsdisk/incus
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=incusbr0]: 
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
Would you like the server to be available over the network? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: 
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]: 

ubuntu@incusZFSdataset:~$ zfs list
NAME                                     USED  AVAIL  REFER  MOUNTPOINT
zfsdisk                                  584K  4.36G    24K  /zfsdisk
zfsdisk/incus                            288K  4.36G    24K  legacy
zfsdisk/incus/buckets                     24K  4.36G    24K  legacy
zfsdisk/incus/containers                  24K  4.36G    24K  legacy
zfsdisk/incus/custom                      24K  4.36G    24K  legacy
zfsdisk/incus/deleted                    144K  4.36G    24K  legacy
zfsdisk/incus/deleted/buckets             24K  4.36G    24K  legacy
zfsdisk/incus/deleted/containers          24K  4.36G    24K  legacy
zfsdisk/incus/deleted/custom              24K  4.36G    24K  legacy
zfsdisk/incus/deleted/images              24K  4.36G    24K  legacy
zfsdisk/incus/deleted/virtual-machines    24K  4.36G    24K  legacy
zfsdisk/incus/images                      24K  4.36G    24K  legacy
zfsdisk/incus/virtual-machines            24K  4.36G    24K  legacy
ubuntu@incusZFSdataset:~$ wget https://cdimage.ubuntu.com/ubuntu-mini-iso/noble/daily-live/current/noble-mini-iso-amd64.iso
...
ubuntu@incusZFSdataset:~$ sudo incus storage list
+---------+--------+---------------+-------------+---------+---------+
|  NAME   | DRIVER |    SOURCE     | DESCRIPTION | USED BY |  STATE  |
+---------+--------+---------------+-------------+---------+---------+
| default | zfs    | zfsdisk/incus |             | 1       | CREATED |
+---------+--------+---------------+-------------+---------+---------+
ubuntu@incusZFSdataset:~$ sudo incus storage volume import default noble-mini-iso-amd64.iso mini-iso --type iso
ubuntu@incusZFSdataset:~$ sudo incus storage volume list default
+--------+----------+-------------+--------------+---------+
|  TYPE  |   NAME   | DESCRIPTION | CONTENT-TYPE | USED BY |
+--------+----------+-------------+--------------+---------+
| custom | mini-iso |             | iso          | 0       |
+--------+----------+-------------+--------------+---------+
ubuntu@incusZFSdataset:~$ zfs list
NAME                                        USED  AVAIL  REFER  MOUNTPOINT
zfsdisk                                    82.1M  4.28G    24K  /zfsdisk
zfsdisk/incus                              81.8M  4.28G    24K  legacy
zfsdisk/incus/buckets                        24K  4.28G    24K  legacy
zfsdisk/incus/containers                     24K  4.28G    24K  legacy
zfsdisk/incus/custom                       81.5M  4.28G    24K  legacy
zfsdisk/incus/custom/default_mini-iso.iso  81.5M  4.28G  81.5M  -
zfsdisk/incus/deleted                       144K  4.28G    24K  legacy
zfsdisk/incus/deleted/buckets                24K  4.28G    24K  legacy
zfsdisk/incus/deleted/containers             24K  4.28G    24K  legacy
zfsdisk/incus/deleted/custom                 24K  4.28G    24K  legacy
zfsdisk/incus/deleted/images                 24K  4.28G    24K  legacy
zfsdisk/incus/deleted/virtual-machines       24K  4.28G    24K  legacy
zfsdisk/incus/images                         24K  4.28G    24K  legacy
zfsdisk/incus/virtual-machines               24K  4.28G    24K  legacy
ubuntu@incusZFSdataset:~$ logout
$ 

I’m going to retry something similar.

Just wanted to say that the “no space left on device” is due to Incus trying to write the ISO content into /dev, which is on a tmpfs and not big enough for it. I think it is failing to set up a device (zvol?) but still using its device path. I can see a massive regular file at /dev/zd32 that takes /dev to 100%. That’s what it looks like to me, anyway.
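In case anyone else wants to check for this state, a quick way to spot it is to look for regular files under /dev, where normally only device nodes, symlinks, and directories live. A small sketch, assuming GNU findutils/coreutils:

# Stray regular files under /dev indicate the failed zvol setup described above.
# /dev/fd/* is excluded as a precaution, since those entries mirror open file descriptors.
find /dev -type f -not -path "/dev/fd/*" -exec ls -lh {} +
df -h /dev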

I’ll do some more testing and report back…

I have reproduced this problem on a clean install on a new physical machine. Sorry for the delay, I was away and also needed to grab a spare machine to test on. FYI I am using Arch Linux with ZFS:

$ uname -a
Linux testlap 6.9.7-arch1-1 #1 SMP PREEMPT_DYNAMIC Fri, 28 Jun 2024 04:32:50 +0000 x86_64 GNU/Linux
$ incus version
Client version: 6.4
Server version: 6.4
$ zfs version
zfs-2.2.4-1
zfs-kmod-2.2.4-1

I followed similar steps to @simos and got the same error. The reason I see this error is that my /dev is a small devtmpfs, and Incus tries to write the ISO as a regular file onto /dev, where it does not fit.

$ findmnt /dev
TARGET SOURCE   FSTYPE   OPTIONS
/dev   devtmpfs devtmpfs rw,nosuid,size=4096k,nr_inodes=4080244,mode=755,inode64
$ df -h /dev
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        4.0M     0  4.0M   0% /dev

I set up Incus exactly as @simos illustrated:

$ incus admin init
Would you like to use clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (zfs, dir) [default=zfs]:
Create a new ZFS pool? (yes/no) [default=yes]: no
Name of the existing ZFS pool or dataset: zos/incus
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=incusbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like the server to be available over the network? (yes/no) [default=no]: yes
Address to bind to (not including port) [default=all]:
Port to bind to [default=8443]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:

$ zfs list
NAME                                 USED  AVAIL  REFER  MOUNTPOINT
zos                                 3.09G   919G    24K  none
zos/arch                            3.09G   919G  3.08G  none
zos/incus                            288K   919G    24K  legacy
zos/incus/buckets                     24K   919G    24K  legacy
zos/incus/containers                  24K   919G    24K  legacy
zos/incus/custom                      24K   919G    24K  legacy
zos/incus/deleted                    144K   919G    24K  legacy
zos/incus/deleted/buckets             24K   919G    24K  legacy
zos/incus/deleted/containers          24K   919G    24K  legacy
zos/incus/deleted/custom              24K   919G    24K  legacy
zos/incus/deleted/images              24K   919G    24K  legacy
zos/incus/deleted/virtual-machines    24K   919G    24K  legacy
zos/incus/images                      24K   919G    24K  legacy
zos/incus/virtual-machines            24K   919G    24K  legacy

$ incus storage list
+---------+--------+-------------+---------+---------+
|  NAME   | DRIVER | DESCRIPTION | USED BY |  STATE  |
+---------+--------+-------------+---------+---------+
| default | zfs    |             | 1       | CREATED |
+---------+--------+-------------+---------+---------+

I then try to import an ISO (note that this isn’t a tiny ISO; it’s 1.2G):

$ incus storage volume import default archlinux-2024.08.03-x86_64.iso archlinux-2024.08.03-x86_64 --type iso
Error: Failed creating custom volume from ISO: Failed creating volume: write /dev/zvol/zos/incus/custom/default_archlinux-2024.08.03-x86_64.iso: no space left on device

I believe this happens because, for some reason, Incus is trying to write the ISO to /dev as a regular file and it doesn’t fit:

$ df -h /dev
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        4.0M  4.0M     0 100% /dev

$ ls -l /dev/zvol/zos/incus/custom/default_archlinux-2024.08.03-x86_64.iso
lrwxrwxrwx 1 root root 15 Sep  1 09:28 /dev/zvol/zos/incus/custom/default_archlinux-2024.08.03-x86_64.iso -> ../../../../zd0

$ ls -l /dev/zd0
-rw------- 1 root root 4194304 Sep  1 09:28 /dev/zd0

$ file /dev/zd0
/dev/zd0: regular file, no read permission
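For comparison, on a healthy import the zd node should be a block device rather than a regular file; stat makes the difference explicit (a quick check, assuming GNU coreutils):

# "block special file" is expected for a zvol node; "regular file" confirms the bug
stat -c '%n: %F' /dev/zd0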

@simos, could you check whether your imported ISO is also a regular file on /dev?

I don’t think I have any strange setup; this is a straight install from the Arch Linux ISO (updated to add ZFS), and the only thing added beyond the base system is Incus.

I’ll keep this machine available to do more troubleshooting.

I’ve now seen this work and then not work. I’ve tried lots of things and, every time I think I’ve hit on something, another test disproves it. So I can’t definitively say what makes it work vs not work.

When it works, the ISO is stored as a ZFS volume (zvol) and no space is consumed on /dev (as one would expect).
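A quick way to tell which state an import ended up in is to list ZFS volumes; a successful import shows up as a zvol under the pool's custom dataset. A sketch, assuming the pool layout from the earlier posts:

# a successful import appears as a zvol, e.g. zos/incus/custom/default_<name>.iso
zfs list -t volume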

When it does happen, find /dev -type f will list a regular file and df /dev shows 100% full. I can “fix” it by cleaning that file up, but the fix only works for me once or twice before things break again.

uname -r         ==> 6.9.7-arch1-1
incus --version  ==> 6.4
zfs --version    ==> zfs-2.2.4-1

Is there a reproducible test case that can be extracted from this?

i.e. write a script that imports the ISO into a storage volume, then deletes it, and imports again until it fails? Or something like that?

OK, here is a test script that I made to soak-test this.

#!/bin/bash
# Soak-test ISO imports: repeatedly import an ISO as a custom volume, then delete it.
# Note: bash (not plain sh) is needed for the ((...)) arithmetic below.
tries="${1:-5}"
iso=archlinux-2024.08.03-x86_64.iso
test -f "$iso" || { echo "can't find $iso"; exit 1; }
good=0
bad=0
for n in $(seq 1 "$tries")
do
  echo "------------------------------------------------------------------ $n of $tries ($good/$bad)"
  df -h /dev
  vol="${iso%.*}-$n"
  echo "$vol"
  if ! incus storage volume import default "$iso" "$vol" --type iso
  then
    ((bad+=1))
    echo "Failed"
    df -h /dev

    # Clean up: remove each stray regular file under /dev along with any
    # symlinks that resolve to it, so the next iteration starts clean.
    for f in $(find /dev -type f)
    do
      echo "Fixing file $f"
      sudo find -L /dev -samefile "$f" -not -path "/dev/fd/*" -delete 2>/dev/null
    done
    df -h /dev

  else
    echo "OK"
    ((good+=1))
    incus storage volume list default | grep "$vol"
    incus storage volume delete default "$vol"
  fi

done

echo "------------------------------------------------------------------"
echo "Total imports good:$good bad:$bad"

Substitute a suitably sized ISO of your choice. Run with ./test.sh 50 for 50 iterations, etc.

Some testing I did yesterday on a server and a laptop, both with the same OS/Incus versions as shown in my earlier post (new OS install, ZFS, one pool, root on ZFS, Incus on ZFS):

Machine   Result   Runs
laptop    good     46 49 33 47 45 46 48
laptop    bad       4  1 17  3  5  4  2
server    good      7 23 13 26 21
server    bad      13 27 37 24 29

I also tested a VM (QEMU+KVM, same OS/Incus versions) but it never failed in 500 tests.

Thanks for the script.
I tried with the default values on the host (I also tried in a VM but, as you mentioned you did not get a failure there, I did not keep it running).
I could not get a bad state.

Machine   Result   Run #1   Run #2
Host #1   good     20       30
Host #1   bad      0        0

OK, thanks for trying it. I don’t really know what else to try. I can work around the problem (and I don’t really need to import ISOs; there are other ways for the odd occasion I might want one).

incus config device add myvm archiso disk source=/path/to/my.iso boot.priority=2
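For completeness, a rough sketch of the rest of that workaround, assuming a VM named myvm as above:

# boot the VM from the attached ISO, then detach the device when done
incus start myvm
incus config device remove myvm archiso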

Is it worth adding an issue about this on GitHub, do you think?

Yes, it’s worth adding it as an issue on GitHub, including the detailed work you did on this. It looks like a race condition, and those are difficult to identify.
I think it’s either a general race condition that may also affect others, or a case where some system configuration manages to produce this issue.

I suggest having this command running in a separate terminal:

incus monitor --pretty
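To keep a record across runs of the test script, the monitor output can also be captured to a file (plain shell, nothing Incus-specific):

# capture the event stream while the soak test runs in another terminal
incus monitor --pretty 2>&1 | tee incus-monitor.log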

If someone else can try out the script on their host, it would be immensely useful.

GitHub issue, with some additional logging.
