Hi all, I’ve been trying to run Fedora instances with the root disk volume configured with zfs.delegate, but I’m hard stuck on this error when starting the instance after a restart:
Error: Failed to run: zfs zone /proc/35523/ns/user shuttle/incus/containers/workstations_fedora42: exit status 1 (cannot add 'shuttle/incus/containers/workstations_fedora42' to namespace: dataset already exists)
Try `incus info --show-log fedora42` for more info
The output of incus info --show-log fedora42 is:
lxc workstations_fedora42 20250527190616.740 ERROR conf - ../src/lxc/conf.c:turn_into_dependent_mounts:3455 - No such file or directory - Failed to recursively turn old root mount tree into dependent mount. Continuing...
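If I’m reading the error right, the dataset’s zoned flag from the previous run was never cleared, so the fresh zfs zone call into the new user namespace fails with “dataset already exists”. For what it’s worth, this is how I’d expect to inspect (and, assuming the zoned property can still be set by hand on Linux, clear) the stale state from the host; these commands are my assumption, not something taken from the Incus docs:

# On the host: check whether the dataset is still marked as delegated
zfs get zoned shuttle/incus/containers/workstations_fedora42
# Assumption: manually clear the stale flag before starting the instance again
zfs set zoned=off shuttle/incus/containers/workstations_fedora42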
The following is the relevant configuration for the instance (let me know if more diagnostics are needed or wanted):
Profile
config:
  limits.cpu: "2"
  limits.memory: 4GiB
  snapshots.expiry: 1m
  snapshots.pattern: '{{ creation_date|date:''2006-01-02_15-04-05'' }}'
  snapshots.schedule: '@hourly'
description: Default profile for Workstations
devices:
  eth0:
    name: eth0
    network: br-workstations
    type: nic
  root:
    path: /
    pool: shuttle
    type: disk
name: default
used_by:
- /1.0/instances/fedora42?project=workstations
- /1.0/instances/big-gibbon?project=workstations
- /1.0/instances/debian13?project=workstations
Storage pool configuration
config:
  source: shuttle/incus
  volatile.initial_source: shuttle/incus
  volume.snapshots.expiry: 1m
  volume.snapshots.pattern: '{{ creation_date|date:''2006-01-02_15-04-05'' }}'
  volume.snapshots.schedule: '@hourly'
  volume.zfs.delegate: "true"
  zfs.pool_name: shuttle/incus
description: ""
name: shuttle
driver: zfs
used_by:
- /1.0/images/3481efd7cb3c85fd2215557165093d43f31cceca0cf497b630ba7b24af44b522?project=workstations
- /1.0/images/eb7ec38152a187619b1004f15f922a44cbbc11e46c6443bd119a6bd74854877f?project=workstations
- /1.0/images/fb0d71e5d06d57fa6962629023d9b285a42e2a7e9aecab918030939dd8645957?project=workstations
- /1.0/images/fd74e4363232c197b36eba71adb9260bda10c88eec3793e1a809b7682b6c06bd?project=workstations
- /1.0/instances/big-gibbon?project=workstations
- /1.0/instances/debian13?project=workstations
- /1.0/instances/fedora42?project=workstations
- /1.0/profiles/default?project=services
- /1.0/profiles/default?project=workstations
- /1.0/storage-pools/shuttle/volumes/image/d78cf18c8976d3348a7953698182e3b31f31d5c6c93e61a6e6e6af66b568b734
status: Created
locations:
- none
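For context, delegation was enabled pool-wide through volume.zfs.delegate rather than per instance; the setting above was applied with roughly the following command (reconstructed from memory, so treat it as approximate):

incus storage set shuttle volume.zfs.delegate=true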
Instance configuration
architecture: x86_64
config:
  cloud-init.user-data: |
    #cloud-config
    package_update: true
    package_upgrade: true
    packages:
      - nano-default-editor
      - bash-color-prompt
      - ncurses
    users:
      - *omitted*
    write_files:
      - path: /etc/dnf/dnf.conf
        content: |
          [main]
          fastestmirror=True
          max_parallel_downloads=20
          timeout=60
        permissions: 0o644
        owner: root:root
    runcmd:
      - [restorecon, -R, -v, /]
      - [semanage, fcontext, -a, -t, bin_t, /var/run/incus_agent/incus-agent]
  image.architecture: amd64
  image.description: Fedora 42 amd64 (20250526_13:40)
  image.os: Fedora
  image.release: "42"
  image.serial: "20250526_13:40"
  image.type: squashfs
  image.variant: cloud
  volatile.base_image: d78cf18c8976d3348a7953698182e3b31f31d5c6c93e61a6e6e6af66b568b734
  volatile.cloud-init.instance-id: 2a954139-11cb-4d4b-9f26-baa4dd6b39cb
  volatile.eth0.hwaddr: 10:66:6a:da:8c:1e
  volatile.idmap.base: "0"
  volatile.idmap.current: '[]'
  volatile.last_state.power: STOPPED
  volatile.last_state.ready: "false"
  volatile.uuid: 5d7847df-f589-4e38-947c-41cb9b76d97d
  volatile.uuid.generation: 5d7847df-f589-4e38-947c-41cb9b76d97d
devices: {}
ephemeral: false
profiles:
- default
stateful: false
description: ""
Instance root disk
config:
  snapshots.expiry: 1m
  snapshots.pattern: '{{ creation_date|date:''2006-01-02_15-04-05'' }}'
  snapshots.schedule: '@hourly'
  zfs.delegate: "true"
description: ""
name: fedora42
type: container
used_by:
- /1.0/instances/fedora42?project=workstations
location: none
content_type: filesystem
project: workstations
created_at: 2025-05-26T07:04:13.764834189Z
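In case it helps with reproducing, the volume configuration above can be pulled with something along these lines (assuming the usual container/<name> volume addressing):

incus storage volume show shuttle container/fedora42 --project workstations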
incus info
config: {}
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- network_sriov
- console
- restrict_dev_incus
- migration_pre_copy
- infiniband
- dev_incus_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- dev_incus_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- backup_compression
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- snapshot_schedule_aliases
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- container_nic_routed_host_table
- container_nic_ipvlan_host_table
- container_nic_ipvlan_mode
- resources_system
- images_push_relay
- network_dns_search
- container_nic_routed_limits
- instance_nic_bridged_vlan
- network_state_bond_bridge
- usedby_consistency
- custom_block_volumes
- clustering_failure_domains
- resources_gpu_mdev
- console_vga_type
- projects_limits_disk
- network_type_macvlan
- network_type_sriov
- container_syscall_intercept_bpf_devices
- network_type_ovn
- projects_networks
- projects_networks_restricted_uplinks
- custom_volume_backup
- backup_override_name
- storage_rsync_compression
- network_type_physical
- network_ovn_external_subnets
- network_ovn_nat
- network_ovn_external_routes_remove
- tpm_device_type
- storage_zfs_clone_copy_rebase
- gpu_mdev
- resources_pci_iommu
- resources_network_usb
- resources_disk_address
- network_physical_ovn_ingress_mode
- network_ovn_dhcp
- network_physical_routes_anycast
- projects_limits_instances
- network_state_vlan
- instance_nic_bridged_port_isolation
- instance_bulk_state_change
- network_gvrp
- instance_pool_move
- gpu_sriov
- pci_device_type
- storage_volume_state
- network_acl
- migration_stateful
- disk_state_quota
- storage_ceph_features
- projects_compression
- projects_images_remote_cache_expiry
- certificate_project
- network_ovn_acl
- projects_images_auto_update
- projects_restricted_cluster_target
- images_default_architecture
- network_ovn_acl_defaults
- gpu_mig
- project_usage
- network_bridge_acl
- warnings
- projects_restricted_backups_and_snapshots
- clustering_join_token
- clustering_description
- server_trusted_proxy
- clustering_update_cert
- storage_api_project
- server_instance_driver_operational
- server_supported_storage_drivers
- event_lifecycle_requestor_address
- resources_gpu_usb
- clustering_evacuation
- network_ovn_nat_address
- network_bgp
- network_forward
- custom_volume_refresh
- network_counters_errors_dropped
- metrics
- image_source_project
- clustering_config
- network_peer
- linux_sysctl
- network_dns
- ovn_nic_acceleration
- certificate_self_renewal
- instance_project_move
- storage_volume_project_move
- cloud_init
- network_dns_nat
- database_leader
- instance_all_projects
- clustering_groups
- ceph_rbd_du
- instance_get_full
- qemu_metrics
- gpu_mig_uuid
- event_project
- clustering_evacuation_live
- instance_allow_inconsistent_copy
- network_state_ovn
- storage_volume_api_filtering
- image_restrictions
- storage_zfs_export
- network_dns_records
- storage_zfs_reserve_space
- network_acl_log
- storage_zfs_blocksize
- metrics_cpu_seconds
- instance_snapshot_never
- certificate_token
- instance_nic_routed_neighbor_probe
- event_hub
- agent_nic_config
- projects_restricted_intercept
- metrics_authentication
- images_target_project
- images_all_projects
- cluster_migration_inconsistent_copy
- cluster_ovn_chassis
- container_syscall_intercept_sched_setscheduler
- storage_lvm_thinpool_metadata_size
- storage_volume_state_total
- instance_file_head
- instances_nic_host_name
- image_copy_profile
- container_syscall_intercept_sysinfo
- clustering_evacuation_mode
- resources_pci_vpd
- qemu_raw_conf
- storage_cephfs_fscache
- network_load_balancer
- vsock_api
- instance_ready_state
- network_bgp_holdtime
- storage_volumes_all_projects
- metrics_memory_oom_total
- storage_buckets
- storage_buckets_create_credentials
- metrics_cpu_effective_total
- projects_networks_restricted_access
- storage_buckets_local
- loki
- acme
- internal_metrics
- cluster_join_token_expiry
- remote_token_expiry
- init_preseed
- storage_volumes_created_at
- cpu_hotplug
- projects_networks_zones
- network_txqueuelen
- cluster_member_state
- instances_placement_scriptlet
- storage_pool_source_wipe
- zfs_block_mode
- instance_generation_id
- disk_io_cache
- amd_sev
- storage_pool_loop_resize
- migration_vm_live
- ovn_nic_nesting
- oidc
- network_ovn_l3only
- ovn_nic_acceleration_vdpa
- cluster_healing
- instances_state_total
- auth_user
- security_csm
- instances_rebuild
- numa_cpu_placement
- custom_volume_iso
- network_allocations
- zfs_delegate
- storage_api_remote_volume_snapshot_copy
- operations_get_query_all_projects
- metadata_configuration
- syslog_socket
- event_lifecycle_name_and_project
- instances_nic_limits_priority
- disk_initial_volume_configuration
- operation_wait
- image_restriction_privileged
- cluster_internal_custom_volume_copy
- disk_io_bus
- storage_cephfs_create_missing
- instance_move_config
- ovn_ssl_config
- certificate_description
- disk_io_bus_virtio_blk
- loki_config_instance
- instance_create_start
- clustering_evacuation_stop_options
- boot_host_shutdown_action
- agent_config_drive
- network_state_ovn_lr
- image_template_permissions
- storage_bucket_backup
- storage_lvm_cluster
- shared_custom_block_volumes
- auth_tls_jwt
- oidc_claim
- device_usb_serial
- numa_cpu_balanced
- image_restriction_nesting
- network_integrations
- instance_memory_swap_bytes
- network_bridge_external_create
- network_zones_all_projects
- storage_zfs_vdev
- container_migration_stateful
- profiles_all_projects
- instances_scriptlet_get_instances
- instances_scriptlet_get_cluster_members
- instances_scriptlet_get_project
- network_acl_stateless
- instance_state_started_at
- networks_all_projects
- network_acls_all_projects
- storage_buckets_all_projects
- resources_load
- instance_access
- project_access
- projects_force_delete
- resources_cpu_flags
- disk_io_bus_cache_filesystem
- instance_oci
- clustering_groups_config
- instances_lxcfs_per_instance
- clustering_groups_vm_cpu_definition
- disk_volume_subpath
- projects_limits_disk_pool
- network_ovn_isolated
- qemu_raw_qmp
- network_load_balancer_health_check
- oidc_scopes
- network_integrations_peer_name
- qemu_scriptlet
- instance_auto_restart
- storage_lvm_metadatasize
- ovn_nic_promiscuous
- ovn_nic_ip_address_none
- instances_state_os_info
- network_load_balancer_state
- instance_nic_macvlan_mode
- storage_lvm_cluster_create
- network_ovn_external_interfaces
- instances_scriptlet_get_instances_count
- cluster_rebalance
- custom_volume_refresh_exclude_older_snapshots
- storage_initial_owner
- storage_live_migration
- instance_console_screenshot
- image_import_alias
- authorization_scriptlet
- console_force
- network_ovn_state_addresses
- network_bridge_acl_devices
- instance_debug_memory
- init_preseed_storage_volumes
- init_preseed_profile_project
- instance_nic_routed_host_address
- instance_smbios11
- api_filtering_extended
- acme_dns01
- security_iommu
- network_ipv4_dhcp_routes
- network_state_ovn_ls
- network_dns_nameservers
- acme_http01_port
- network_ovn_ipv4_dhcp_expiry
- instance_state_cpu_time
- network_io_bus
- disk_io_bus_usb
- storage_driver_linstor
- instance_oci_entrypoint
- network_address_set
- server_logging
- network_forward_snat
- memory_hotplug
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
auth_user_name: khu
auth_user_method: unix
environment:
  addresses: []
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    MIICEDCCAZWgAwIBAgIQL3Smv/ToiWRFdqKTjWGxNzAKBggqhkjOPQQDAzA3MRkw
    FwYDVQQKExBMaW51eCBDb250YWluZXJzMRowGAYDVQQDDBFyb290QGtodS1BRzE0
    MjAyMjAeFw0yNTA1MjYwNjU4NDBaFw0zNTA1MjQwNjU4NDBaMDcxGTAXBgNVBAoT
    EExpbnV4IENvbnRhaW5lcnMxGjAYBgNVBAMMEXJvb3RAa2h1LUFHMTQyMDIyMHYw
    EAYHKoZIzj0CAQYFK4EEACIDYgAE6DpjcWT7rDcEfKJCCZGh7XEFXbCXR/OccrPC
    VsaOVFPa/giQVH3O7EpxJo6uyB2HPbDN2MKkky8i/+ttBVdn3ysm86T1BUoanWpH
    NC0MxZkhkHCw/G4fprfiZNHH7xMEo2YwZDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l
    BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAvBgNVHREEKDAmggxraHUtQUcx
    NDIwMjKHBH8AAAGHEAAAAAAAAAAAAAAAAAAAAAEwCgYIKoZIzj0EAwMDaQAwZgIx
    AJW+XVL3jBr1E2M/XnUxjcS+iYdcargWqK+RNdfpQ1RuRF1x0oJ0NGxnoVKbPw5z
    qQIxAOFE/K3Yqjs4oihFG8pY/d/kWDJE3apKOfFeMX1o8qs4dbnRycXOABMPwqdQ
    u95zBg==
    -----END CERTIFICATE-----
  certificate_fingerprint: a39e14428caed0884d44d65e08b1ccf8e8e78157f311fb283cf73d64e7b89050
  driver: lxc | qemu
  driver_version: 6.0.3 | 9.2.3
  firewall: nftables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    idmapped_mounts: "true"
    netnsid_getifaddrs: "true"
    seccomp_listener: "true"
    seccomp_listener_continue: "true"
    uevent_injection: "true"
    unpriv_binfmt: "true"
    unpriv_fscaps: "true"
  kernel_version: 6.14.6-300.fc42.x86_64
  lxc_features:
    cgroup2: "true"
    core_scheduling: "true"
    devpts_fd: "true"
    idmapped_mounts_v2: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "true"
  os_name: Fedora Linux
  os_version: "42"
  project: workstations
  server: incus
  server_clustered: false
  server_event_mode: full-mesh
  server_name: khu-AG142022
  server_pid: 10301
  server_version: "6.12"
  storage: zfs
  storage_version: 2.3.2-1
  storage_supported_drivers:
  - name: dir
    version: "1"
    remote: false
  - name: lvm
    version: 2.03.30(2) (2025-01-14) / 1.02.204 (2025-01-14) / 4.49.0
    remote: false
  - name: zfs
    version: 2.3.2-1
    remote: false
  - name: btrfs
    version: "6.14"
    remote: false
  - name: lvmcluster
    version: 2.03.30(2) (2025-01-14) / 1.02.204 (2025-01-14) / 4.49.0
    remote: true
The Fedora instance was created from the image on the LXC image server (specifically fedora/42/cloud), and I followed the instructions outlined here: Fedora — OpenZFS documentation.
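For reference, the launch command was roughly the following (exact flags reconstructed from memory):

incus launch images:fedora/42/cloud fedora42 --project workstations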
As to why I did a module build: there wasn’t a separate package for just the ZFS utilities. The instance didn’t crash at this point, and I was able to create, list, and delete datasets (I did not explore further). However, running zfs list inside the instance showed all the datasets on the host, and zfs get zoned did not report the root disk as zoned to the user namespace of the instance.
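Concretely, the checks looked like this (the dataset argument is from memory, so treat it as illustrative):

# Inside the Fedora instance: expected only the delegated dataset,
# but every dataset on the host was listed
zfs list
# On the host: expected zoned=on for the delegated root disk,
# but it was not reported as zoned
zfs get zoned shuttle/incus/containers/workstations_fedora42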
To make sure this wasn’t a case of PEBKAC/layer 8 error, I spun up a Debian 13 instance using the Zabbly OpenZFS repository, ran apt install openzfs-zfsutils --no-install-recommends, and ZFS correctly reported the zoned datasets.
I did the same with a Debian 12 instance using the OpenZFS instructions listed here: Debian — OpenZFS documentation, and it also correctly reported the zoned datasets.
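In both Debian instances the verification was just the same two checks (a sketch, not an exact transcript):

zfs list         # only the delegated dataset was visible
zfs get zoned    # the root disk dataset showed up as zoned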
Given the behaviour described above, I’m curious whether this is an upstream OpenZFS bug affecting RHEL-family systems.
Has anyone else encountered this issue?
EDIT: this may be unrelated, but another behaviour I’ve noticed when starting the Fedora instance is that the entire host system freezes/hangs for a few seconds: keyboard input is delayed, and PipeWire (and I think WirePlumber) crashes and restarts. This behaviour does not show up with the Debian instances.