Migrated a container: "Failed to mount API filesystems"

Hello,

Required information

Source server:

  • Distribution: Ubuntu
  • Distribution version: 20.04.1 LTS
  • The output of “lxc info” or if that fails:
root@srv-3:~ # lxc info
config:
  core.https_address: '[::]:8443'
  core.trust_password: true
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- macaroon_authentication
- network_sriov
- console
- restrict_devlxd
- migration_pre_copy
- infiniband
- maas_network
- devlxd_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- devlxd_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- candid_authentication
- backup_compression
- candid_config
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- candid_config_key
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- snapshot_schedule_aliases
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- rbac
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- resources_system
- usedby_consistency
- resources_gpu_mdev
- console_vga_type
- projects_limits_disk
- storage_rsync_compression
- gpu_mdev
- resources_pci_iommu
- resources_network_usb
- resources_disk_address
- network_state_vlan
- gpu_sriov
- migration_stateful
- disk_state_quota
- storage_ceph_features
- gpu_mig
- clustering_join_token
- clustering_description
- server_trusted_proxy
- clustering_update_cert
- storage_api_project
- server_instance_driver_operational
- server_supported_storage_drivers
- event_lifecycle_requestor_address
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
environment:
  addresses:
  - 10.30.1.3:8443
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    MIICCjCCAZCgAwIBAgIRAMA8VkaeDAaD7e/k7xc+4+cwCgYIKoZIzj0EAwMwNjEc
    MBoGA1UEChMTbGludXhjb250YWluZXJzLm9yZzEWMBQGA1UEAwwNcm9vdEBhY3Nz
    cnYtMzAeFw0yMDA5MTAxNjU0MjRaFw0zMDA5MDgxNjU0MjRaMDYxHDAaBgNVBAoT
    E2xpbnV4Y29udGFpbmVycy5vcmcxFjAUBgNVBAMMDXJvb3RAYWNzc3J2LTMwdjAQ
    BgcqhkjOPQIBBgUrgQQAIgNiAAQbuAjS//JSA+F7BPv8DTKMhzL2bJAxvckPoWwD
    xN19fRCHKImYPZrJ58P7j/DLpmTH8iDF2sq+TCQpgQgWuBdjNG3eITH9UBtPbfLz
    sS1J9sm5727bHkw9dmLmAF/IpdejYjBgMA4GA1UdDwEB/wQEAwIFoDATBgNVHSUE
    DDAKBggrBgEFBQcDATAMBgNVHRMBAf8EAjAAMCsGA1UdEQQkMCKCCGFjc3Nydi0z
    hwR/AAABhxAAAAAAAAAAAAAAAAAAAAABMAoGCCqGSM49BAMDA2gAMGUCMF9DO9iG
    TzatMfg6G1c93euJu2hpiawgrHMzdNLii8L8paoLE5DHDZnWVS0dacD3jAIxALqA
    qajpEQHfLPA1CrAZwFtYBMmMVQGLUTZMd4pmJOL4uVg058KXq9AFFBLHAATEvw==
    -----END CERTIFICATE-----
  certificate_fingerprint: 7e0c9b06931dc92f123f35309ab88a6661ddd2d2d387f2a60393e5426a41965a
  driver: lxc | qemu
  driver_version: 4.0.10 | 5.2.0
  firewall: nftables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    netnsid_getifaddrs: "true"
    seccomp_listener: "true"
    seccomp_listener_continue: "true"
    shiftfs: "false"
    uevent_injection: "true"
    unpriv_fscaps: "true"
  kernel_version: 5.4.0-47-generic
  lxc_features:
    cgroup2: "true"
    devpts_fd: "true"
    idmapped_mounts_v2: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "true"
  os_name: Ubuntu
  os_version: "20.04"
  project: default
  server: lxd
  server_clustered: false
  server_name: srv-3
  server_pid: 8813
  server_version: 4.0.7
  storage: zfs
  storage_version: 0.8.3-1ubuntu12.2
  storage_supported_drivers:
  - name: btrfs
    version: 4.15.1
    remote: false
  - name: cephfs
    version: 12.2.13
    remote: true
  - name: dir
    version: "1"
    remote: false
  - name: lvm
    version: 2.02.176(2) (2017-11-03) / 1.02.145 (2017-11-03) / 4.41.0
    remote: false
  - name: zfs
    version: 0.8.3-1ubuntu12.2
    remote: false
  - name: ceph
    version: 12.2.13
    remote: true

Destination server:

  • Distribution: Debian
  • Distribution version: 11 (bullseye)
  • The output of “lxc info” or if that fails:
root@srv-13:~ # lxc info
WARNING: cgroup v2 is not fully supported yet, proceeding with partial confinement
config:
  core.https_address: '[::]:8443'
  core.trust_password: true
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- macaroon_authentication
- network_sriov
- console
- restrict_devlxd
- migration_pre_copy
- infiniband
- maas_network
- devlxd_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- devlxd_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- candid_authentication
- backup_compression
- candid_config
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- candid_config_key
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- snapshot_schedule_aliases
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- rbac
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- resources_system
- usedby_consistency
- resources_gpu_mdev
- console_vga_type
- projects_limits_disk
- storage_rsync_compression
- gpu_mdev
- resources_pci_iommu
- resources_network_usb
- resources_disk_address
- network_state_vlan
- gpu_sriov
- migration_stateful
- disk_state_quota
- storage_ceph_features
- gpu_mig
- clustering_join_token
- clustering_description
- server_trusted_proxy
- clustering_update_cert
- storage_api_project
- server_instance_driver_operational
- server_supported_storage_drivers
- event_lifecycle_requestor_address
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
environment:
  addresses:
  - 10.30.1.26:8443
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    MIICDTCCAZOgAwIBAgIRAMmFBGqrZhu2F6wJNeMplH8wCgYIKoZIzj0EAwMwNzEc
    MBoGA1UEChMTbGludXhjb250YWluZXJzLm9yZzEXMBUGA1UEAwwOcm9vdEBhY3Nz
    cnYtMTMwHhcNMjEwOTA3MTUwMDMwWhcNMzEwOTA1MTUwMDMwWjA3MRwwGgYDVQQK
    ExNsaW51eGNvbnRhaW5lcnMub3JnMRcwFQYDVQQDDA5yb290QGFjc3Nydi0xMzB2
    MBAGByqGSM49AgEGBSuBBAAiA2IABK5J7WGmN3Tu0imNgCz5RPfjFkIAIAQrHATo
    xFYwsA2vYnWQhpCglk2fqJ5VWovMwN0rftzSVyES3BrZ5T6UbfQNOjtuVEarIz6I
    lbu9BC4pug4vQGq0kALgq7hg+nQ8FKNjMGEwDgYDVR0PAQH/BAQDAgWgMBMGA1Ud
    JQQMMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwLAYDVR0RBCUwI4IJYWNzc3J2
    LTEzhwR/AAABhxAAAAAAAAAAAAAAAAAAAAABMAoGCCqGSM49BAMDA2gAMGUCMQDf
    UR384BSFs/dFdZWjZhd85DdxYweWJbzTp1c612+Bcfph3rygaVqKJFJbPqVsdLQC
    MCeO1ih333DmiBirs3OgB3EYSr/1RaIFwU2Ebk2f7JNmZpYqqsiDXkAGHau1C7UN
    qA==
    -----END CERTIFICATE-----
  certificate_fingerprint: d5f5e6a6472ea7c6f9d24ac97eb9e381665b48d2795eaee892b75268869f68b0
  driver: qemu | lxc
  driver_version: 5.2.0 | 4.0.10
  firewall: nftables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    netnsid_getifaddrs: "true"
    seccomp_listener: "true"
    seccomp_listener_continue: "true"
    shiftfs: "false"
    uevent_injection: "true"
    unpriv_fscaps: "true"
  kernel_version: 5.10.0-8-amd64
  lxc_features:
    cgroup2: "true"
    devpts_fd: "true"
    idmapped_mounts_v2: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "true"
  os_name: Debian GNU/Linux
  os_version: "11"
  project: default
  server: lxd
  server_clustered: false
  server_name: srv-13
  server_pid: 1543
  server_version: 4.0.7
  storage: zfs
  storage_version: 2.0.3-9
  storage_supported_drivers:
  - name: lvm
    version: 2.02.176(2) (2017-11-03) / 1.02.145 (2017-11-03) / 4.43.0
    remote: false
  - name: zfs
    version: 2.0.3-9
    remote: false
  - name: ceph
    version: 12.2.13
    remote: true
  - name: btrfs
    version: 4.15.1
    remote: false
  - name: cephfs
    version: 12.2.13
    remote: true
  - name: dir
    version: "1"
    remote: false

Issue description

I tried to move a container from my original server to a new destination server, either with an lxc copy or with a zfs send of the dataset. Every time, the migrated container starts with a mount error and a systemd failure, and almost no services are running inside it.

Steps to reproduce

  1. Take a snapshot of the container
  2. Migrate the container to the other server, either with an lxc copy or with a zfs send of the dataset (see the sketch after this list)
  3. Start the container on the destination
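
A rough sketch of the two paths I tried (the remote name, ZFS pool, and dataset layout are from my setup and may differ on yours):

# Path 1: snapshot, then copy the snapshot through the LXD API
lxc remote add srv-13 10.30.1.26
lxc snapshot containername migrate
lxc copy containername/migrate srv-13:containername

# Path 2: raw zfs send/receive of the container dataset
zfs snapshot -r default/containers/containername@migrate
zfs send -R default/containers/containername@migrate | \
  ssh root@10.30.1.26 zfs receive default/containers/containername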

Information to attach

  • [ ] Container configuration (lxc config show NAME --expanded)
root@srv-13:/var/snap/lxd/common/lxd/logs/containername # lxc config show containername --expanded
WARNING: cgroup v2 is not fully supported yet, proceeding with partial confinement
architecture: x86_64
config:
  limits.memory: 4GB
  security.privileged: "true"
  volatile.base_image: 87a1f1c305615024124b238c0fcbed99c11193cab4bbc5c340ca2500c99b1b1f
  volatile.eth0.hwaddr: 00:16:3e:da:7f:51
  volatile.idmap.base: "0"
  volatile.idmap.current: '[]'
  volatile.idmap.next: '[]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: STOPPED
  volatile.uuid: 52719aa7-7b7c-427c-9dd1-635ffea1953c
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: default
    type: disk
  sdb:
    path: /srv/samba/nas
    source: /srv/data/containername-int-fr/nas
    type: disk
  sdc:
    path: /srv/samba/users
    source: /srv/data/containername-int-fr/users
    type: disk
ephemeral: false
profiles:
- internet
- mem-4GB
stateful: false
description: ""
  • [ ] Main daemon log (at /var/log/lxd/lxd.log or /var/snap/lxd/common/lxd/logs/lxd.log)
t=2021-09-14T22:44:32+0200 lvl=info msg="Starting container" action=start created=2021-09-14T22:27:42+0200 ephemeral=false instance=containername instanceType=container project=default stateful=false used=2021-09-14T22:38:33+0200
t=2021-09-14T22:44:32+0200 lvl=info msg="Started container" action=start created=2021-09-14T22:27:42+0200 ephemeral=false instance=containername instanceType=container project=default stateful=false used=2021-09-14T22:38:33+0200
t=2021-09-14T22:44:52+0200 lvl=warn msg="Detected poll(POLLNVAL) event."
t=2021-09-14T22:44:54+0200 lvl=info msg="Stopping container" action=stop created=2021-09-14T22:27:42+0200 ephemeral=false instance=containername instanceType=container project=default stateful=false used=2021-09-14T22:44:32+0200
t=2021-09-14T22:44:55+0200 lvl=info msg="Stopped container" action=stop created=2021-09-14T22:27:42+0200 ephemeral=false instance=containername instanceType=container project=default stateful=false used=2021-09-14T22:44:32+0200

  • [ ] Container logs:

console.log:

Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted
[!!!!!!] Failed to mount API filesystems, freezing.
Freezing execution.

lxc.log:

lxc containername 20210914204432.731 ERROR    conf - conf.c:turn_into_dependent_mounts:3724 - No such file or directory - Failed to recursively turn old root mount tree into dependent mount. Continuing...
  • [ ] systemctl status networking inside the container:
Failed to connect to bus: No such file or directory

Your host system is running cgroup2, and your container apparently only supports cgroup1, so systemd fails immediately and the container dies.
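
You can confirm this from either host: on a pure cgroup2 host the cgroup filesystem type shows up as cgroup2fs, while a hybrid cgroup1 host reports tmpfs. I'd expect something like:

root@srv-13:~ # stat -fc %T /sys/fs/cgroup/
cgroup2fs
root@srv-3:~ # stat -fc %T /sys/fs/cgroup/
tmpfs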

What OS is in the container?

If it’s critical that this keeps working for you, it may be easiest to switch your host system back to cgroup1.
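
On a systemd-booted host like Debian 11, switching back means disabling the unified hierarchy on the kernel command line and rebooting, roughly:

# /etc/default/grub on the destination host, then run update-grub and reboot
GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=0"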

My container runs Ubuntu 16.04. I was planning to upgrade it to the latest LTS, but I don't know whether that would make any difference to the cgroup issue.
Isn't it possible to make my container cgroup2-compatible instead?

Upgrading the container from 16.04 to 18.04 has a very good chance of fixing this issue, as I'd expect systemd in 18.04 to have cgroup2 support.
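
The upgrade itself would need to run on the source server, where the container still boots; something like this (assuming update-manager-core is installed in the container, so do-release-upgrade is available):

root@srv-3:~ # lxc exec containername -- do-release-upgrade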

You could test this by creating a few test containers on the target system:

  • lxc launch images:ubuntu/16.04 u16
  • lxc launch images:ubuntu/18.04 u18
  • lxc launch images:ubuntu/20.04 u20

I'd expect u16 to fail the same way as your existing container, with the other two working properly. If that's the case, then upgrading that container from 16.04 to 18.04 should do the trick.
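
To check whether a test container hit the same freeze, you can look at its console log and ask systemd for its overall state:

lxc console u16 --show-log
lxc exec u18 -- systemctl is-system-running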

Thanks for your feedback. Indeed, after upgrading to 18.04 I was able to migrate the containers to the new server.

Excellent!