lxc launch ubuntu:22.04 --vm fails when LXD uses Ceph storage

I am using 3 VMs with Ubuntu 22.04.2 LTS (Jammy Jellyfish).
The Ceph installation was done exactly as described here. Ceph is used as the default storage when doing lxd init.
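The storage part of my `lxd init` answers is roughly equivalent to the following preseed (a minimal sketch only; the OSD pool name lxd matches the SOURCE column shown further down, the rest are assumed defaults):

# Rough preseed equivalent of the storage answers given to `lxd init` (sketch, values assumed)
cat <<'EOF' | lxd init --preseed
storage_pools:
- name: default
  driver: ceph
  config:
    ceph.osd.pool_name: lxd
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
EOF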

Here are the versions of LXD and MicroCeph:

root@serv1:~# snap list
Name       Version        Rev    Tracking       Publisher   Notes
core20     20230126       1822   latest/stable  canonical✓  base
core22     20230210       522    latest/stable  canonical✓  base
lxd        git-c5795a8    24561  latest/edge    canonical✓  -
microceph  0+git.6208776  220    latest/stable  canonical✓  -
snapd      2.58.2         18357  latest/stable  canonical✓  snapd

lxc info:

root@serv1:~# lxc info
config:
  images.auto_update_interval: "0"
api_extensions:

  • storage_zfs_remove_snapshots
  • container_host_shutdown_timeout
  • container_stop_priority
  • container_syscall_filtering
  • auth_pki
  • container_last_used_at
  • etag
  • patch
  • usb_devices
  • https_allowed_credentials
  • image_compression_algorithm
  • directory_manipulation
  • container_cpu_time
  • storage_zfs_use_refquota
  • storage_lvm_mount_options
  • network
  • profile_usedby
  • container_push
  • container_exec_recording
  • certificate_update
  • container_exec_signal_handling
  • gpu_devices
  • container_image_properties
  • migration_progress
  • id_map
  • network_firewall_filtering
  • network_routes
  • storage
  • file_delete
  • file_append
  • network_dhcp_expiry
  • storage_lvm_vg_rename
  • storage_lvm_thinpool_rename
  • network_vlan
  • image_create_aliases
  • container_stateless_copy
  • container_only_migration
  • storage_zfs_clone_copy
  • unix_device_rename
  • storage_lvm_use_thinpool
  • storage_rsync_bwlimit
  • network_vxlan_interface
  • storage_btrfs_mount_options
  • entity_description
  • image_force_refresh
  • storage_lvm_lv_resizing
  • id_map_base
  • file_symlinks
  • container_push_target
  • network_vlan_physical
  • storage_images_delete
  • container_edit_metadata
  • container_snapshot_stateful_migration
  • storage_driver_ceph
  • storage_ceph_user_name
  • resource_limits
  • storage_volatile_initial_source
  • storage_ceph_force_osd_reuse
  • storage_block_filesystem_btrfs
  • resources
  • kernel_limits
  • storage_api_volume_rename
  • macaroon_authentication
  • network_sriov
  • console
  • restrict_devlxd
  • migration_pre_copy
  • infiniband
  • maas_network
  • devlxd_events
  • proxy
  • network_dhcp_gateway
  • file_get_symlink
  • network_leases
  • unix_device_hotplug
  • storage_api_local_volume_handling
  • operation_description
  • clustering
  • event_lifecycle
  • storage_api_remote_volume_handling
  • nvidia_runtime
  • container_mount_propagation
  • container_backup
  • devlxd_images
  • container_local_cross_pool_handling
  • proxy_unix
  • proxy_udp
  • clustering_join
  • proxy_tcp_udp_multi_port_handling
  • network_state
  • proxy_unix_dac_properties
  • container_protection_delete
  • unix_priv_drop
  • pprof_http
  • proxy_haproxy_protocol
  • network_hwaddr
  • proxy_nat
  • network_nat_order
  • container_full
  • candid_authentication
  • backup_compression
  • candid_config
  • nvidia_runtime_config
  • storage_api_volume_snapshots
  • storage_unmapped
  • projects
  • candid_config_key
  • network_vxlan_ttl
  • container_incremental_copy
  • usb_optional_vendorid
  • snapshot_scheduling
  • snapshot_schedule_aliases
  • container_copy_project
  • clustering_server_address
  • clustering_image_replication
  • container_protection_shift
  • snapshot_expiry
  • container_backup_override_pool
  • snapshot_expiry_creation
  • network_leases_location
  • resources_cpu_socket
  • resources_gpu
  • resources_numa
  • kernel_features
  • id_map_current
  • event_location
  • storage_api_remote_volume_snapshots
  • network_nat_address
  • container_nic_routes
  • rbac
  • cluster_internal_copy
  • seccomp_notify
  • lxc_features
  • container_nic_ipvlan
  • network_vlan_sriov
  • storage_cephfs
  • container_nic_ipfilter
  • resources_v2
  • container_exec_user_group_cwd
  • container_syscall_intercept
  • container_disk_shift
  • storage_shifted
  • resources_infiniband
  • daemon_storage
  • instances
  • image_types
  • resources_disk_sata
  • clustering_roles
  • images_expiry
  • resources_network_firmware
  • backup_compression_algorithm
  • ceph_data_pool_name
  • container_syscall_intercept_mount
  • compression_squashfs
  • container_raw_mount
  • container_nic_routed
  • container_syscall_intercept_mount_fuse
  • container_disk_ceph
  • virtual-machines
  • image_profiles
  • clustering_architecture
  • resources_disk_id
  • storage_lvm_stripes
  • vm_boot_priority
  • unix_hotplug_devices
  • api_filtering
  • instance_nic_network
  • clustering_sizing
  • firewall_driver
  • projects_limits
  • container_syscall_intercept_hugetlbfs
  • limits_hugepages
  • container_nic_routed_gateway
  • projects_restrictions
  • custom_volume_snapshot_expiry
  • volume_snapshot_scheduling
  • trust_ca_certificates
  • snapshot_disk_usage
  • clustering_edit_roles
  • container_nic_routed_host_address
  • container_nic_ipvlan_gateway
  • resources_usb_pci
  • resources_cpu_threads_numa
  • resources_cpu_core_die
  • api_os
  • container_nic_routed_host_table
  • container_nic_ipvlan_host_table
  • container_nic_ipvlan_mode
  • resources_system
  • images_push_relay
  • network_dns_search
  • container_nic_routed_limits
  • instance_nic_bridged_vlan
  • network_state_bond_bridge
  • usedby_consistency
  • custom_block_volumes
  • clustering_failure_domains
  • resources_gpu_mdev
  • console_vga_type
  • projects_limits_disk
  • network_type_macvlan
  • network_type_sriov
  • container_syscall_intercept_bpf_devices
  • network_type_ovn
  • projects_networks
  • projects_networks_restricted_uplinks
  • custom_volume_backup
  • backup_override_name
  • storage_rsync_compression
  • network_type_physical
  • network_ovn_external_subnets
  • network_ovn_nat
  • network_ovn_external_routes_remove
  • tpm_device_type
  • storage_zfs_clone_copy_rebase
  • gpu_mdev
  • resources_pci_iommu
  • resources_network_usb
  • resources_disk_address
  • network_physical_ovn_ingress_mode
  • network_ovn_dhcp
  • network_physical_routes_anycast
  • projects_limits_instances
  • network_state_vlan
  • instance_nic_bridged_port_isolation
  • instance_bulk_state_change
  • network_gvrp
  • instance_pool_move
  • gpu_sriov
  • pci_device_type
  • storage_volume_state
  • network_acl
  • migration_stateful
  • disk_state_quota
  • storage_ceph_features
  • projects_compression
  • projects_images_remote_cache_expiry
  • certificate_project
  • network_ovn_acl
  • projects_images_auto_update
  • projects_restricted_cluster_target
  • images_default_architecture
  • network_ovn_acl_defaults
  • gpu_mig
  • project_usage
  • network_bridge_acl
  • warnings
  • projects_restricted_backups_and_snapshots
  • clustering_join_token
  • clustering_description
  • server_trusted_proxy
  • clustering_update_cert
  • storage_api_project
  • server_instance_driver_operational
  • server_supported_storage_drivers
  • event_lifecycle_requestor_address
  • resources_gpu_usb
  • clustering_evacuation
  • network_ovn_nat_address
  • network_bgp
  • network_forward
  • custom_volume_refresh
  • network_counters_errors_dropped
  • metrics
  • image_source_project
  • clustering_config
  • network_peer
  • linux_sysctl
  • network_dns
  • ovn_nic_acceleration
  • certificate_self_renewal
  • instance_project_move
  • storage_volume_project_move
  • cloud_init
  • network_dns_nat
  • database_leader
  • instance_all_projects
  • clustering_groups
  • ceph_rbd_du
  • instance_get_full
  • qemu_metrics
  • gpu_mig_uuid
  • event_project
  • clustering_evacuation_live
  • instance_allow_inconsistent_copy
  • network_state_ovn
  • storage_volume_api_filtering
  • image_restrictions
  • storage_zfs_export
  • network_dns_records
  • storage_zfs_reserve_space
  • network_acl_log
  • storage_zfs_blocksize
  • metrics_cpu_seconds
  • instance_snapshot_never
  • certificate_token
  • instance_nic_routed_neighbor_probe
  • event_hub
  • agent_nic_config
  • projects_restricted_intercept
  • metrics_authentication
  • images_target_project
  • cluster_migration_inconsistent_copy
  • cluster_ovn_chassis
  • container_syscall_intercept_sched_setscheduler
  • storage_lvm_thinpool_metadata_size
  • storage_volume_state_total
  • instance_file_head
  • instances_nic_host_name
  • image_copy_profile
  • container_syscall_intercept_sysinfo
  • clustering_evacuation_mode
  • resources_pci_vpd
  • qemu_raw_conf
  • storage_cephfs_fscache
  • network_load_balancer
  • vsock_api
  • instance_ready_state
  • network_bgp_holdtime
  • storage_volumes_all_projects
  • metrics_memory_oom_total
  • storage_buckets
  • storage_buckets_create_credentials
  • metrics_cpu_effective_total
  • projects_networks_restricted_access
  • storage_buckets_local
  • loki
  • acme
  • internal_metrics
  • cluster_join_token_expiry
  • remote_token_expiry
  • init_preseed
  • storage_volumes_created_at
  • cpu_hotplug
  • projects_networks_zones
  • network_txqueuelen
  • cluster_member_state
  • instances_placement_scriptlet
  • storage_pool_source_wipe
  • zfs_block_mode
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
environment:
  addresses: []
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    MIICAjCCAYegAwIBAgIRAJch9jNAyrrvf0Cmz5SQm+kwCgYIKoZIzj0EAwMwMzEc
    MBoGA1UEChMTbGludXhjb250YWluZXJzLm9yZzETMBEGA1UEAwwKcm9vdEBzZXJ2
    MTAeFw0yMzAzMDMxNzA3NDRaFw0zMzAyMjgxNzA3NDRaMDMxHDAaBgNVBAoTE2xp
    bnV4Y29udGFpbmVycy5vcmcxEzARBgNVBAMMCnJvb3RAc2VydjEwdjAQBgcqhkjO
    PQIBBgUrgQQAIgNiAAS82IAZCitu1VDF8Y/v3uehbW6f1c4WFs6RiJ7Q/pPc37qm
    uOQJahziafQl6vbktbweD0+s6goPyNlyPFe4Zsq/rZXwlA4V9HqKA3ObWuFpBtPh
    iTWYP6mlf/odkX8wQ3GjXzBdMA4GA1UdDwEB/wQEAwIFoDATBgNVHSUEDDAKBggr
    BgEFBQcDATAMBgNVHRMBAf8EAjAAMCgGA1UdEQQhMB+CBXNlcnYxhwR/AAABhxAA
    AAAAAAAAAAAAAAAAAAABMAoGCCqGSM49BAMDA2kAMGYCMQDBWsTB2pQGHkf0aI3C
    fVScJMXl33TLyMbGUsHBRjS9TcX+SxET6eq1s+UCkN8Z/XECMQDHYcE6fV+2pe4m
    ddzStPY6hma/Ojrm7CUJTpibc2uSoQ2h0Wxj32ACV9skQzvf9tA=
    -----END CERTIFICATE-----
  certificate_fingerprint: eea788ce0739ded015620c2b822c701900c720870e291f5fb8b6fc4c84239434
  driver: lxc | qemu
  driver_version: 5.0.0 | 7.1.0
  firewall: nftables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    idmapped_mounts: "true"
    netnsid_getifaddrs: "true"
    seccomp_listener: "true"
    seccomp_listener_continue: "true"
    shiftfs: "false"
    uevent_injection: "true"
    unpriv_fscaps: "true"
  kernel_version: 5.15.0-1028-kvm
  lxc_features:
    cgroup2: "true"
    core_scheduling: "true"
    devpts_fd: "true"
    idmapped_mounts_v2: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "true"
  os_name: Ubuntu
  os_version: "22.04"
  project: default
  server: lxd
  server_clustered: false
  server_event_mode: full-mesh
  server_name: serv1
  server_pid: 4383
  server_version: "5.11"
  storage: ceph
  storage_version: 17.2.0
  storage_supported_drivers:
  - name: btrfs
    version: 5.16.2
    remote: false
  - name: ceph
    version: 17.2.0
    remote: true
  - name: cephfs
    version: 17.2.0
    remote: true
  - name: cephobject
    version: 17.2.0
    remote: true
  - name: dir
    version: "1"
    remote: false
  - name: lvm
    version: 2.03.11(2) (2021-01-08) / 1.02.175 (2021-01-08) / 4.45.0
    remote: false
  - name: zfs
    version: 2.1.4-0ubuntu0.1
    remote: false

lxc storage list:

+---------+--------+--------+-------------+---------+---------+
|  NAME   | DRIVER | SOURCE | DESCRIPTION | USED BY |  STATE  |
+---------+--------+--------+-------------+---------+---------+
| default | ceph   | lxd    |             | 1       | CREATED |
+---------+--------+--------+-------------+---------+---------+

Here is the problem itself:

root@serv1:~# lxc launch ubuntu:22.04 --vm
Creating the instance
Instance name is: adapting-mantis             
Starting adapting-mantis
Error: Failed setting up device via monitor: Failed adding block device for disk device "root": Failed adding block device: error reading conf file /etc/ceph/ceph.conf: No such file or directory
Try `lxc info --show-log local:adapting-mantis` for more info
root@serv1:~# lxc info --show-log local:adapting-mantis
Name: adapting-mantis
Status: STOPPED
Type: virtual-machine
Architecture: x86_64
Created: 2023/03/03 17:12 UTC

Log:

warning: tap: open vhost char device failed: No such file or directory
warning: tap: open vhost char device failed: No such file or directory
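For reference, /etc/ceph/ceph.conf really does not exist on the host, because MicroCeph keeps its configuration inside the snap. A quick check, plus a possible workaround (a sketch only, not an official fix; the symlink targets are assumptions based on where the snap stores its files):

# Where does ceph.conf actually live? MicroCeph keeps it inside its snap.
ls -l /etc/ceph/ceph.conf
ls -l /var/snap/microceph/current/conf/

# Possible workaround (assumption, not an official fix): expose the MicroCeph
# config under /etc/ceph so tools expecting /etc/ceph/ceph.conf can find it.
mkdir -p /etc/ceph
ln -s /var/snap/microceph/current/conf/ceph.conf /etc/ceph/ceph.conf
ln -s /var/snap/microceph/current/conf/ceph.keyring /etc/ceph/ceph.keyring   # keyring filename may differ; check the listing above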

I ran into the same problem when doing the configuration exactly as described here, on both ubuntu:22.04 and ubuntu:20.04.

I think using the -kvm kernel flavor is why vhost isn't usable. Please see Bug #1980122 "CONFIG_VHOST_NET is missing from the `linux-kvm` f..." (Bugs : linux-meta-kvm package : Ubuntu).
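A quick way to check whether the running kernel is the culprit (a sketch; the config file path follows the usual Ubuntu naming):

# Is vhost-net enabled in the running kernel's config?
grep VHOST /boot/config-"$(uname -r)"

# Is the character device QEMU wants to open actually there?
ls -l /dev/vhost-net
lsmod | grep vhost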

Hi Viktor,

I am facing exactly the same issue right now (I also have the same setup and snap versions), although in my case the containers seem to work. Have you come up with any solution yet?

@stgraber is it possible that using the edge version of microceph is causing the issue?

Regards

Mateusz

With all kernels except the kvm flavor, I ran into the following error while booting the VM:

[    1.051813] VFS: Cannot open root device "PARTUUID=4f2db25b-897b-4eff-9cc2-0aa17f8e4b4b" or unknown-block(0,0): error -6

Also, the error

error reading conf file /etc/ceph/ceph.conf: No such file or directory

seems unrelated to the vhost errors.

I was able to boot with the generic kernel, and the error in lxc info changed to:

warning: tap: open vhost char device failed: Permission denied

Then I looked through dmesg and found the following record related to the topic:

[  228.176709] audit: type=1400 audit(1678041471.655:71): apparmor="DENIED" operation="open" class="file" profile="lxd-adapting-mantis_</var/snap/lxd/common/lxd>" name="/var/snap/microceph/220/conf/ceph.conf" pid=2388 comm="qemu-system-x86" requested_mask="r" denied_mask="r" fsuid=999 ouid=0
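For anyone who wants to confirm the same denial on their own host, a quick check (the instance name is from my setup, and the profiles path is where the LXD snap keeps its generated AppArmor profiles; the exact file name is an assumption based on the profile= field above):

# Look for AppArmor denials against MicroCeph's config
dmesg | grep -E 'apparmor="DENIED".*microceph.*ceph\.conf'

# Inspect the per-instance profile generated by LXD for this VM
grep -n ceph /var/snap/lxd/common/lxd/security/apparmor/profiles/lxd-adapting-mantis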

I will prepare a pull request to fix the AppArmor rules for the LXD instance profile.

@stgraber the fact that LXD's QEMU is trying to read MicroCeph's ceph.conf might be worth looking into.

Yes, I have already created a pull request to fix it.

It seems to resolve the issue, thanks @Viktor-Yakovchuk.
