Removing snapshot fails, unlinkat: read-only file system

I’m evaluating incus for the first time, so I’m sorry in advance for any beginner mistakes.
Everything is done on freshly installed machines in a test environment, there is nothing else installed except incus.
I’m incredibly impressed with and thankful for this software overall. Everything is so simple, and it Just Works!

Steps performed

  • Fresh install of Ubuntu 24.04 server, bare metal on workstation-class desktops.
  • Fresh install of incus from the Zabbly stable apt repo.
  • Initialized incus: no clustering, local storage pool of type dir on the root file system.
  • Created a Windows VM.
  • Took one snapshot called clean-install, while the instance was stopped.
  • Installed some stuff in the VM and rebooted a few times.
  • Took another snapshot called clean-base-config-drivers, while the instance was stopped.
  • May or may not have clicked the stop button while creating the second snapshot. I’m really not sure whether I did this, but I definitely did not abort the first one.
  • Enabled clustering on outpost-001.
  • Added a couple of cluster members, outpost-002, outpost-003.

Unexpected outcome

  • Unable to delete the clean-base-config-drivers snapshot. It fails with “read-only file system”.

Expected outcome

  • Able to delete the clean-base-config-drivers snapshot without error.

Observations

  • The snapshot is left in a mounted state (see the check after this list). I’m not sure if this is expected, or if it’s an inconsistent state.
  • There are no errors in dmesg.
  • Everything else seems to work fine.
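
A quick way to double-check that mount state is to point findmnt at the snapshot path; it only prints a line when something is actually mounted there:

outpost-001$ findmnt /var/lib/incus/storage-pools/local/virtual-machines-snapshots/win11test/clean-base-config-drivers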

Questions

  • What is going on with this snapshot and this error?
  • What are the expected file system permissions for machines and snapshots?
  • Is it safe to override the permissions and delete the instances and snapshots manually from the filesystem?
  • Is it possible to find a log of all incus operations?

Command outputs

outpost-001$ sudo incus snapshot delete win11test clean-base-config-drivers
Error: Failed to remove '/var/lib/incus/storage-pools/local/virtual-machines-snapshots/win11test/clean-base-config-drivers': unlinkat /var/lib/incus/storage-pools/local/virtual-machines-snapshots/win11test/clean-base-config-drivers/config: read-only file system
outpost-001$ incus storage ls
+-------+--------+-------------+---------+---------+
| NAME  | DRIVER | DESCRIPTION | USED BY |  STATE  |
+-------+--------+-------------+---------+---------+
| local | dir    |             | 2       | CREATED |
+-------+--------+-------------+---------+---------+
outpost-001$ incus storage show local
config: {}
description: ""
name: local
driver: dir
used_by:
- /1.0/instances/win11test
- /1.0/profiles/default
status: Created
locations:
- outpost-001
- outpost-002
- outpost-003
outpost-001$ incus ls
+-----------+---------+------+------+-----------------+-----------+-------------+
|   NAME    |  STATE  | IPV4 | IPV6 |      TYPE       | SNAPSHOTS |  LOCATION   |
+-----------+---------+------+------+-----------------+-----------+-------------+
| win11test | STOPPED |      |      | VIRTUAL-MACHINE | 2         | outpost-001 |
+-----------+---------+------+------+-----------------+-----------+-------------+
outpost-001$ incus snapshot ls win11test
+---------------------------+----------------------+------------+----------+
|           NAME            |       TAKEN AT       | EXPIRES AT | STATEFUL |
+---------------------------+----------------------+------------+----------+
| clean-install             | 2025/10/21 09:26 UTC |            | NO       |
+---------------------------+----------------------+------------+----------+
| clean-base-config-drivers | 2025/10/21 16:47 UTC |            | NO       |
+---------------------------+----------------------+------------+----------+

Here is lsblk on the first server, and this looks weird to me.
I don’t think I have ever seen lsblk show two mountpoints for one device like that.
I’m not sure exactly what it means, or why this even happens when using the dir storage driver. See the findmnt check after the output below.

outpost-001$ lsblk
NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
loop0                       7:0    0  50.8M  1 loop /snap/snapd/25202
loop1                       7:1    0  73.9M  1 loop /snap/core22/2133
loop2                       7:2    0  10.4M  1 loop /snap/distrobuilder/2114
loop3                       7:3    0  73.9M  1 loop /snap/core22/2139
sda                         8:0    0 232.9G  0 disk
├─sda1                      8:1    0     1G  0 part /boot/efi
├─sda2                      8:2    0     2G  0 part /boot
└─sda3                      8:3    0 229.8G  0 part
  └─ubuntu--vg-ubuntu--lv 252:0    0   100G  0 lvm  /var/lib/incus/storage-pools/local/virtual-machines-snapshots/win11test/clean-base-config-drivers
                                                    /
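
findmnt can also list mounts by their source device; it prints one line per mountpoint of the device, which is what lsblk is summarizing above. In this case the second entry is the extra read-only mount of the snapshot path, which also shows up at the end of the mount output further down:

outpost-001$ findmnt -S /dev/mapper/ubuntu--vg-ubuntu--lv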

Here is lsblk on another server. This output is expected and looks normal to me.

outpost-003$ lsblk
NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                         8:0    0 465.8G  0 disk
├─sda1                      8:1    0     1G  0 part /boot/efi
├─sda2                      8:2    0     2G  0 part /boot
└─sda3                      8:3    0 462.7G  0 part
  └─ubuntu--vg-ubuntu--lv 252:0    0   100G  0 lvm  /

Here is the output of ls, showing that root is indeed lacking the necessary permissions.

outpost-001$ sudo ls -la /var/lib/incus/storage-pools/local
total 40
drwx--x--x 10 root root 4096 Oct 20 16:39 .
drwx--x--x  3 root root 4096 Oct 20 16:39 ..
drwx--x--x  2 root root 4096 Oct 20 16:39 buckets
drwx--x--x  2 root root 4096 Oct 21 14:48 containers
drwx--x--x  2 root root 4096 Oct 20 16:39 containers-snapshots
drwx--x--x  2 root root 4096 Oct 20 16:39 custom
drwx--x--x  2 root root 4096 Oct 20 16:39 custom-snapshots
drwx--x--x  2 root root 4096 Oct 20 16:39 images
drwx--x--x  3 root root 4096 Oct 20 16:55 virtual-machines
drwx--x--x  3 root root 4096 Oct 21 09:26 virtual-machines-snapshots
outpost-001$ sudo ls -la /var/lib/incus/storage-pools/local/virtual-machines
total 12
drwx--x--x  3 root  root 4096 Oct 20 16:55 .
drwx--x--x 10 root  root 4096 Oct 20 16:39 ..
d--x------  4 incus root 4096 Oct 21 14:49 win11test
outpost-001$ sudo ls -la /var/lib/incus/storage-pools/local/virtual-machines-snapshots
total 12
drwx--x--x  3 root root 4096 Oct 21 09:26 .
drwx--x--x 10 root root 4096 Oct 20 16:39 ..
drwx------  4 root root 4096 Oct 21 16:47 win11test
outpost-001$ sudo ls -la /var/lib/incus/storage-pools/local/virtual-machines-snapshots/win11test
total 16
drwx------ 4 root  root 4096 Oct 21 16:47 .
drwx--x--x 3 root  root 4096 Oct 21 09:26 ..
d--x------ 4 incus root 4096 Oct 21 16:47 clean-base-config-drivers
d--x------ 4 incus root 4096 Oct 21 09:26 clean-install
outpost-001$ sudo ls -la /var/lib/incus/storage-pools/local/virtual-machines-snapshots/win11test/clean-base-config-drivers
total 24248892
d--x------ 4 incus root        4096 Oct 21 16:47 .
drwx------ 4 root  root        4096 Oct 21 16:47 ..
-rw------- 1 incus root      540672 Oct 21 14:51 OVMF_VARS.4MB.ms.fd
-rw-r--r-- 1 root  root         704 Oct 20 17:00 agent-client.crt
-rw------- 1 root  root         288 Oct 20 17:00 agent-client.key
-rw-r--r-- 1 root  root         741 Oct 20 17:00 agent.crt
-rw------- 1 root  root         288 Oct 20 17:00 agent.key
-r-------- 1 root  root        4767 Oct 21 14:02 backup.yaml
dr-x------ 6 incus root        4096 Oct 21 14:02 config
lrwxrwxrwx 1 root  root          19 Oct 20 17:00 qemu.nvram -> OVMF_VARS.4MB.ms.fd
-rw------- 1 root  root 68719476736 Oct 21 16:51 root.img
drwx------ 2 root  root        4096 Oct 21 14:51 tpm.vtpm
outpost-001$ sudo ls -la /var/lib/incus/storage-pools/local/virtual-machines-snapshots/win11test/clean-install
total 17662520
d--x------ 4 incus root        4096 Oct 21 09:26 .
drwx------ 4 root  root        4096 Oct 21 16:47 ..
-rw------- 1 incus root      540672 Oct 21 09:26 OVMF_VARS.4MB.ms.fd
-rw-r--r-- 1 root  root         704 Oct 20 17:00 agent-client.crt
-rw------- 1 root  root         288 Oct 20 17:00 agent-client.key
-rw-r--r-- 1 root  root         741 Oct 20 17:00 agent.crt
-rw------- 1 root  root         288 Oct 20 17:00 agent.key
-r-------- 1 root  root        2538 Oct 21 09:24 backup.yaml
dr-x------ 6 incus root        4096 Oct 21 09:24 config
lrwxrwxrwx 1 root  root          19 Oct 20 17:00 qemu.nvram -> OVMF_VARS.4MB.ms.fd
-rw------- 1 root  root 68719476736 Oct 21 09:29 root.img
drwx------ 2 root  root        4096 Oct 21 09:26 tpm.vtpm

Output of df and mount

outpost-001$ df -h
Filesystem                         Size  Used Avail Use% Mounted on
tmpfs                              2.0G  1.4M  2.0G   1% /run
efivarfs                           384K  100K  280K  27% /sys/firmware/efi/efivars
/dev/mapper/ubuntu--vg-ubuntu--lv   98G   81G   13G  87% /
tmpfs                              9.8G     0  9.8G   0% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
/dev/sda2                          2.0G  101M  1.7G   6% /boot
/dev/sda1                          1.1G  6.2M  1.1G   1% /boot/efi
tmpfs                              9.8G     0  9.8G   0% /run/qemu
tmpfs                              100K     0  100K   0% /var/lib/incus/shmounts
tmpfs                              100K     0  100K   0% /var/lib/incus/guestapi
tmpfs                              2.0G   16K  2.0G   1% /run/user/1001
outpost-001$ mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=10164364k,nr_inodes=2541091,mode=755,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=2040800k,mode=755,inode64)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
/dev/mapper/ubuntu--vg-ubuntu--lv on / type ext4 (rw,relatime)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k,inode64)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=32,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=2645)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,nosuid,nodev,relatime,pagesize=2M)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
/dev/sda2 on /boot type ext4 (rw,relatime)
/dev/sda1 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)
/var/lib/snapd/snaps/snapd_25202.snap on /snap/snapd/25202 type squashfs (ro,nodev,relatime,errors=continue,threads=single,x-gdu.hide,x-gvfs-hide)
/var/lib/snapd/snaps/core22_2133.snap on /snap/core22/2133 type squashfs (ro,nodev,relatime,errors=continue,threads=single,x-gdu.hide,x-gvfs-hide)
/var/lib/snapd/snaps/distrobuilder_2114.snap on /snap/distrobuilder/2114 type squashfs (ro,nodev,relatime,errors=continue,threads=single,x-gdu.hide,x-gvfs-hide)
tmpfs on /run/qemu type tmpfs (rw,nosuid,nodev,relatime,mode=755,inode64)
lxcfs on /var/lib/incus-lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
tmpfs on /var/lib/incus/shmounts type tmpfs (rw,relatime,size=100k,mode=711,inode64)
tmpfs on /var/lib/incus/guestapi type tmpfs (rw,relatime,size=100k,mode=755,inode64)
/var/lib/snapd/snaps/core22_2139.snap on /snap/core22/2139 type squashfs (ro,nodev,relatime,errors=continue,threads=single,x-gdu.hide,x-gvfs-hide)
tmpfs on /run/user/1001 type tmpfs (rw,nosuid,nodev,relatime,size=2040796k,nr_inodes=510199,mode=700,uid=1001,gid=1001,inode64)
/dev/mapper/ubuntu--vg-ubuntu--lv on /var/lib/incus/storage-pools/local/virtual-machines-snapshots/win11test/clean-base-config-drivers type ext4 (ro,relatime)

Incus log and info

outpost-001$ sudo cat /var/log/incus/incusd.log
time="2025-10-20T16:40:52Z" level=warning msg="Rejecting request from untrusted client" ip="172.16.170.147:52148"
time="2025-10-20T16:58:04Z" level=warning msg="The backing filesystem doesn't support quotas, skipping set quota" driver=dir path=/var/lib/incus/storage-pools/local/virtual-machines/win11test pool=local size=69243764736 volID=2
time="2025-10-21T09:19:20Z" level=error msg="Failed writing error for HTTP response" err="http2: stream closed" url="/1.0/instances/{name}" writeErr="http2: stream closed"
time="2025-10-21T09:19:20Z" level=error msg="Failed writing error for HTTP response" err="http2: stream closed" url="/1.0/instances/{name}" writeErr="http2: stream closed"
time="2025-10-21T09:19:20Z" level=error msg="Failed writing error for HTTP response" err="http2: stream closed" url="/1.0/instances/{name}" writeErr="http2: stream closed"
time="2025-10-21T09:19:20Z" level=error msg="Failed writing error for HTTP response" err="http2: stream closed" url="/1.0/instances/{name}" writeErr="http2: stream closed"
time="2025-10-21T09:19:20Z" level=error msg="Failed writing error for HTTP response" err="http2: stream closed" url="/1.0/instances/{name}" writeErr="http2: stream closed"
time="2025-10-21T09:19:20Z" level=error msg="Failed writing error for HTTP response" err="http2: stream closed" url="/1.0/instances/{name}" writeErr="http2: stream closed"
time="2025-10-21T09:19:20Z" level=error msg="Failed writing error for HTTP response" err="http2: stream closed" url="/1.0/instances/{name}" writeErr="http2: stream closed"
time="2025-10-21T09:19:20Z" level=error msg="Failed writing error for HTTP response" err="http2: stream closed" url="/1.0/instances/{name}" writeErr="http2: stream closed"
time="2025-10-21T09:19:20Z" level=error msg="Failed writing error for HTTP response" err="http2: stream closed" url="/1.0/instances/{name}" writeErr="http2: stream closed"
time="2025-10-21T09:19:20Z" level=error msg="Failed writing error for HTTP response" err="http2: stream closed" url="/1.0/instances/{name}" writeErr="http2: stream closed"
time="2025-10-21T09:19:20Z" level=error msg="Failed writing error for HTTP response" err="http2: stream closed" url="/1.0/instances/{name}" writeErr="http2: stream closed"
time="2025-10-21T14:02:53Z" level=error msg="Failed to stop device" device=gpu1 err="Failed probing device \"0000:05:00.0\" via \"/sys/bus/pci/drivers_probe\": write /sys/bus/pci/drivers_probe: invalid argument" instance=win11test instanceType=virtual-machine project=default
time="2025-10-21T14:51:58Z" level=warning msg="Could not get VM state from agent" err="dial unix /run/incus/win11test/qemu.monitor: connect: connection refused" instance=win11test instanceType=virtual-machine project=default
time="2025-10-21T14:51:58Z" level=error msg="Failed to stop device" device=gpu1 err="Failed probing device \"0000:05:00.1\" via \"/sys/bus/pci/drivers_probe\": write /sys/bus/pci/drivers_probe: invalid argument" instance=win11test instanceType=virtual-machine project=default
time="2025-10-22T09:29:32Z" level=warning msg="Rejecting request from untrusted client" ip="172.16.170.147:37666"
outpost-001$ incus info
config:
  cluster.https_address: 172.16.170.147:8443
  core.https_address: 172.16.170.147:8443
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- network_sriov
- console
- restrict_dev_incus
- migration_pre_copy
- infiniband
- dev_incus_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- dev_incus_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- backup_compression
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- snapshot_schedule_aliases
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- container_nic_routed_host_table
- container_nic_ipvlan_host_table
- container_nic_ipvlan_mode
- resources_system
- images_push_relay
- network_dns_search
- container_nic_routed_limits
- instance_nic_bridged_vlan
- network_state_bond_bridge
- usedby_consistency
- custom_block_volumes
- clustering_failure_domains
- resources_gpu_mdev
- console_vga_type
- projects_limits_disk
- network_type_macvlan
- network_type_sriov
- container_syscall_intercept_bpf_devices
- network_type_ovn
- projects_networks
- projects_networks_restricted_uplinks
- custom_volume_backup
- backup_override_name
- storage_rsync_compression
- network_type_physical
- network_ovn_external_subnets
- network_ovn_nat
- network_ovn_external_routes_remove
- tpm_device_type
- storage_zfs_clone_copy_rebase
- gpu_mdev
- resources_pci_iommu
- resources_network_usb
- resources_disk_address
- network_physical_ovn_ingress_mode
- network_ovn_dhcp
- network_physical_routes_anycast
- projects_limits_instances
- network_state_vlan
- instance_nic_bridged_port_isolation
- instance_bulk_state_change
- network_gvrp
- instance_pool_move
- gpu_sriov
- pci_device_type
- storage_volume_state
- network_acl
- migration_stateful
- disk_state_quota
- storage_ceph_features
- projects_compression
- projects_images_remote_cache_expiry
- certificate_project
- network_ovn_acl
- projects_images_auto_update
- projects_restricted_cluster_target
- images_default_architecture
- network_ovn_acl_defaults
- gpu_mig
- project_usage
- network_bridge_acl
- warnings
- projects_restricted_backups_and_snapshots
- clustering_join_token
- clustering_description
- server_trusted_proxy
- clustering_update_cert
- storage_api_project
- server_instance_driver_operational
- server_supported_storage_drivers
- event_lifecycle_requestor_address
- resources_gpu_usb
- clustering_evacuation
- network_ovn_nat_address
- network_bgp
- network_forward
- custom_volume_refresh
- network_counters_errors_dropped
- metrics
- image_source_project
- clustering_config
- network_peer
- linux_sysctl
- network_dns
- ovn_nic_acceleration
- certificate_self_renewal
- instance_project_move
- storage_volume_project_move
- cloud_init
- network_dns_nat
- database_leader
- instance_all_projects
- clustering_groups
- ceph_rbd_du
- instance_get_full
- qemu_metrics
- gpu_mig_uuid
- event_project
- clustering_evacuation_live
- instance_allow_inconsistent_copy
- network_state_ovn
- storage_volume_api_filtering
- image_restrictions
- storage_zfs_export
- network_dns_records
- storage_zfs_reserve_space
- network_acl_log
- storage_zfs_blocksize
- metrics_cpu_seconds
- instance_snapshot_never
- certificate_token
- instance_nic_routed_neighbor_probe
- event_hub
- agent_nic_config
- projects_restricted_intercept
- metrics_authentication
- images_target_project
- images_all_projects
- cluster_migration_inconsistent_copy
- cluster_ovn_chassis
- container_syscall_intercept_sched_setscheduler
- storage_lvm_thinpool_metadata_size
- storage_volume_state_total
- instance_file_head
- instances_nic_host_name
- image_copy_profile
- container_syscall_intercept_sysinfo
- clustering_evacuation_mode
- resources_pci_vpd
- qemu_raw_conf
- storage_cephfs_fscache
- network_load_balancer
- vsock_api
- instance_ready_state
- network_bgp_holdtime
- storage_volumes_all_projects
- metrics_memory_oom_total
- storage_buckets
- storage_buckets_create_credentials
- metrics_cpu_effective_total
- projects_networks_restricted_access
- storage_buckets_local
- loki
- acme
- internal_metrics
- cluster_join_token_expiry
- remote_token_expiry
- init_preseed
- storage_volumes_created_at
- cpu_hotplug
- projects_networks_zones
- network_txqueuelen
- cluster_member_state
- instances_placement_scriptlet
- storage_pool_source_wipe
- zfs_block_mode
- instance_generation_id
- disk_io_cache
- amd_sev
- storage_pool_loop_resize
- migration_vm_live
- ovn_nic_nesting
- oidc
- network_ovn_l3only
- ovn_nic_acceleration_vdpa
- cluster_healing
- instances_state_total
- auth_user
- security_csm
- instances_rebuild
- numa_cpu_placement
- custom_volume_iso
- network_allocations
- zfs_delegate
- storage_api_remote_volume_snapshot_copy
- operations_get_query_all_projects
- metadata_configuration
- syslog_socket
- event_lifecycle_name_and_project
- instances_nic_limits_priority
- disk_initial_volume_configuration
- operation_wait
- image_restriction_privileged
- cluster_internal_custom_volume_copy
- disk_io_bus
- storage_cephfs_create_missing
- instance_move_config
- ovn_ssl_config
- certificate_description
- disk_io_bus_virtio_blk
- loki_config_instance
- instance_create_start
- clustering_evacuation_stop_options
- boot_host_shutdown_action
- agent_config_drive
- network_state_ovn_lr
- image_template_permissions
- storage_bucket_backup
- storage_lvm_cluster
- shared_custom_block_volumes
- auth_tls_jwt
- oidc_claim
- device_usb_serial
- numa_cpu_balanced
- image_restriction_nesting
- network_integrations
- instance_memory_swap_bytes
- network_bridge_external_create
- network_zones_all_projects
- storage_zfs_vdev
- container_migration_stateful
- profiles_all_projects
- instances_scriptlet_get_instances
- instances_scriptlet_get_cluster_members
- instances_scriptlet_get_project
- network_acl_stateless
- instance_state_started_at
- networks_all_projects
- network_acls_all_projects
- storage_buckets_all_projects
- resources_load
- instance_access
- project_access
- projects_force_delete
- resources_cpu_flags
- disk_io_bus_cache_filesystem
- instance_oci
- clustering_groups_config
- instances_lxcfs_per_instance
- clustering_groups_vm_cpu_definition
- disk_volume_subpath
- projects_limits_disk_pool
- network_ovn_isolated
- qemu_raw_qmp
- network_load_balancer_health_check
- oidc_scopes
- network_integrations_peer_name
- qemu_scriptlet
- instance_auto_restart
- storage_lvm_metadatasize
- ovn_nic_promiscuous
- ovn_nic_ip_address_none
- instances_state_os_info
- network_load_balancer_state
- instance_nic_macvlan_mode
- storage_lvm_cluster_create
- network_ovn_external_interfaces
- instances_scriptlet_get_instances_count
- cluster_rebalance
- custom_volume_refresh_exclude_older_snapshots
- storage_initial_owner
- storage_live_migration
- instance_console_screenshot
- image_import_alias
- authorization_scriptlet
- console_force
- network_ovn_state_addresses
- network_bridge_acl_devices
- instance_debug_memory
- init_preseed_storage_volumes
- init_preseed_profile_project
- instance_nic_routed_host_address
- instance_smbios11
- api_filtering_extended
- acme_dns01
- security_iommu
- network_ipv4_dhcp_routes
- network_state_ovn_ls
- network_dns_nameservers
- acme_http01_port
- network_ovn_ipv4_dhcp_expiry
- instance_state_cpu_time
- network_io_bus
- disk_io_bus_usb
- storage_driver_linstor
- instance_oci_entrypoint
- network_address_set
- server_logging
- network_forward_snat
- memory_hotplug
- instance_nic_routed_host_tables
- instance_publish_split
- init_preseed_certificates
- custom_volume_sftp
- network_ovn_external_nic_address
- network_physical_gateway_hwaddr
- backup_s3_upload
- snapshot_manual_expiry
- resources_cpu_address_sizes
- disk_attached
- limits_memory_hotplug
- disk_wwn
- server_logging_webhook
- storage_driver_truenas
- container_disk_tmpfs
- instance_limits_oom
- backup_override_config
- network_ovn_tunnels
- init_preseed_cluster_groups
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
auth_user_name: outpost
auth_user_method: unix
environment:
  addresses:
  - 172.16.170.147:8443
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    ............................................
    -----END CERTIFICATE-----
  certificate_fingerprint: ......................................
  driver: lxc | qemu
  driver_version: 6.0.5 | 10.1.1
  firewall: nftables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    idmapped_mounts: "true"
    netnsid_getifaddrs: "true"
    seccomp_listener: "true"
    seccomp_listener_continue: "true"
    uevent_injection: "true"
    unpriv_binfmt: "true"
    unpriv_fscaps: "true"
  kernel_version: 6.8.0-85-generic
  lxc_features:
    cgroup2: "true"
    core_scheduling: "true"
    devpts_fd: "true"
    idmapped_mounts_v2: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "true"
  os_name: Ubuntu
  os_version: "24.04"
  project: default
  server: incus
  server_clustered: true
  server_event_mode: full-mesh
  server_name: outpost-001
  server_pid: 24791
  server_version: "6.17"
  storage: dir
  storage_version: "1"
  storage_supported_drivers:
  - name: truenas
    version: 0.7.3
    remote: true
  - name: btrfs
    version: 6.6.3
    remote: false
  - name: dir
    version: "1"
    remote: false
  - name: lvm
    version: 2.03.16(2) (2022-05-18) / 1.02.185 (2022-05-18) / 4.48.0
    remote: false

That error happening on a dir storage pool typically means something is pretty wrong with the server. You may want to look at dmesg for any storage-related errors that would have caused the filesystem to get remounted read-only.

Thanks for your reply!

I have looked at dmesg and I cannot see any errors or anything else that stands out.
Nothing else on the system is read-only, and everything else on the server works fine; this is the only thing that’s having a problem.

It’s always possible that I have missed something, but I do get the feeling that incus has left something in an inconsistent state, especially since:

  • The second snapshot is in this mounted state, and the first one isn’t.
  • The second snapshot cannot be deleted, but the first one could be deleted.

Just saw that one now ^ (the read-only mount at the end of the mount output above)

Can you try umount /var/lib/incus/storage-pools/local/virtual-machines-snapshots/win11test/clean-base-config-drivers (the read-only /dev/mapper/ubuntu--vg-ubuntu--lv mount shown above) to see if that succeeds, and if that then allows for the snapshot to be correctly deleted?

Phew, that was easy, thank you so much!

I ran the following, and it worked fine.

$ sudo umount /var/lib/incus/storage-pools/local/virtual-machines-snapshots/win11test/clean-base-config-drivers
$ sudo incus snapshot delete win11test clean-base-config-drivers

A suggestion when testing is to use a loopback filesystem instead of dir. dir has some performance issues compared to a loopback filesystem, and a loopback filesystem is a mid-way option (suitable for testing) between dir and a proper dedicated partition/disk for Incus.
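
For completeness, a loop-backed pool can also be created directly with incus storage create, skipping the wizard; the pool name and size here are just examples:

$ incus storage create testpool zfs size=20GiB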

When you run sudo incus admin init, you are prompted to choose the storage backend. However, if there are no other packages installed to support a storage backend, you end up with dir. If you want to use zfs, you would need to first install zfsutils-linux and then run incus admin init.

Let’s have a look on an Ubuntu 24.04 LTS VM. First, we run incus admin init without any supporting package for a storage backend. You are only prompted for the name of the storage pool; there is no prompt for the storage backend, and the wizard takes you directly to the next section about networking. You get the dir storage backend.

$ incus launch --vm images:ubuntu/24.04/cloud incusserver
Launching incusserver
$ incus shell incusserver
root@incusserver:~# apt install incus
...
root@incusserver:~# incus admin init
Would you like to use clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: ^C

Now we install the ZFS utility package (Ubuntu kernels already ship ZFS support; only the utility package is missing). Still, we are not prompted for a backend, and the wizard takes us to the next section about networking. Perhaps a reboot is needed? Normally a reboot is not needed, but we will do it anyway.

root@incusserver:~# sudo apt install zfsutils-linux
...
root@incusserver:~# incus admin init
Would you like to use clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=incusbr0]: ^C

After the reboot we are now prompted for something other than dir. Since we are testing, we do not use an existing empty block device and instead go for a loop device (a file) that is pre-allocated to the size we specify.

root@incusserver:~# logout
$ incus restart incusserver
$ incus shell incusserver
root@incusserver:~# incus admin init
Would you like to use clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (dir, zfs) [default=zfs]: 
Create a new ZFS pool? (yes/no) [default=yes]: 
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]: 
Size in GiB of the new loop device (1GiB minimum) [default=5GiB]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
...
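
As a sanity check that the loop file really is pre-allocated, you can compare its apparent size with its actual disk usage; the path below assumes the standard Incus layout:

root@incusserver:~# ls -lh /var/lib/incus/disks/
root@incusserver:~# du -h /var/lib/incus/disks/*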

If you have an installation already with dir and want to switch to zfs, you can do so using incus storage: you add another storage pool on zfs and configure Incus to use that one. Then, once nothing references the old dir storage pool any more, you may even remove it. A rough sketch follows.
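
The sketch below uses a hypothetical pool name (local2) and size, and assumes the instance is stopped; if I remember correctly, incus move accepts --storage for moving an instance between pools:

$ incus storage create local2 zfs size=50GiB
$ incus profile device set default root pool=local2
$ incus move win11test --storage local2
$ incus storage delete local   # only once its used_by list is empty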

Then I tried the same process again, on a new Ubuntu 24.04 LTS VM but with Incus 6.17 installed from the Zabbly repository (whereas the default on Ubuntu 24.04 LTS is Incus 6.0.x).

The wizard is a bit different. Since there is now an additional option for truenas, you are prompted for either dir or truenas. It’s more verbose this way.

root@incusserver:~# incus admin init
Would you like to use clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (dir, truenas) [default=dir]: ^C

I installed zfsutils-linux and immediately tried incus admin init again. I was not prompted for zfs; I needed to reboot to get that option.

root@incusserver:~# incus admin init
Would you like to use clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (dir, zfs, truenas) [default=zfs]: 
Create a new ZFS pool? (yes/no) [default=yes]: 
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]: 
Size in GiB of the new loop device (1GiB minimum) [default=5GiB]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: ^C
root@incusserver:~# 

IIRC, available storage drivers are only scanned at incusd startup. Restarting the incus service would be sufficient.
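
For example, on a systemd-based install (unit name taken from the standard packaging):

$ sudo systemctl restart incus
$ incus admin init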
