Error: Failed to retrieve PID of executing child process

Hi,
I installed LXD on openSUSE Leap 15.3 and created a container, but I cannot enter it.
After executing the command lxc exec container bash I get the message:
Error: Failed to retrieve PID of executing child process
I followed the guide on this page: https://en.opensuse.org/LXD

My environment:
lxc info
config: {}
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- macaroon_authentication
- network_sriov
- console
- restrict_devlxd
- migration_pre_copy
- infiniband
- maas_network
- devlxd_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- devlxd_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- candid_authentication
- backup_compression
- candid_config
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- candid_config_key
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- snapshot_schedule_aliases
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- rbac
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- container_nic_routed_host_table
- container_nic_ipvlan_host_table
- container_nic_ipvlan_mode
- resources_system
- images_push_relay
- network_dns_search
- container_nic_routed_limits
- instance_nic_bridged_vlan
- network_state_bond_bridge
- usedby_consistency
- custom_block_volumes
- clustering_failure_domains
- resources_gpu_mdev
- console_vga_type
- projects_limits_disk
- network_type_macvlan
- network_type_sriov
- container_syscall_intercept_bpf_devices
- network_type_ovn
- projects_networks
- projects_networks_restricted_uplinks
- custom_volume_backup
- backup_override_name
- storage_rsync_compression
- network_type_physical
- network_ovn_external_subnets
- network_ovn_nat
- network_ovn_external_routes_remove
- tpm_device_type
- storage_zfs_clone_copy_rebase
- gpu_mdev
- resources_pci_iommu
- resources_network_usb
- resources_disk_address
- network_physical_ovn_ingress_mode
- network_ovn_dhcp
- network_physical_routes_anycast
- projects_limits_instances
- network_state_vlan
- instance_nic_bridged_port_isolation
- instance_bulk_state_change
- network_gvrp
- instance_pool_move
- gpu_sriov
- pci_device_type
- storage_volume_state
- network_acl
- migration_stateful
- disk_state_quota
- storage_ceph_features
- projects_compression
- projects_images_remote_cache_expiry
- certificate_project
- network_ovn_acl
- projects_images_auto_update
- projects_restricted_cluster_target
- images_default_architecture
- network_ovn_acl_defaults
- gpu_mig
- project_usage
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
environment:
  addresses: []
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    MIICGTCCAZ6gAwIBAgIQAUvg1ReNlZoZSS4PEE6BpTAKBggqhkjOPQQDAzA7MRww
    GgYDVQQKExNsaW51eGNvbnRhaW5lcnMub3JnMRswGQYDVQQDDBJyb290QGhwLnNh
    dGthcy5sYWIwHhcNMjEwNTE0MTI0NjMxWhcNMzEwNTEyMTI0NjMxWjA7MRwwGgYD
    VQQKExNsaW51eGNvbnRhaW5lcnMub3JnMRswGQYDVQQDDBJyb290QGhwLnNhdGth
    cy5sYWIwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAATrmiTmA9jWtSDGc3loncrVPfvz
    NxTknsY/ptTFGRP22v2pbx7KYdKLb7I06HrvP6XoxTf6uLU7XlHq9YJ9mYBwDy71
    wvTbzsNmFpIV2bgYv77Bn5jwResYfCHEB/FIkNOjZzBlMA4GA1UdDwEB/wQEAwIF
    oDATBgNVHSUEDDAKBggrBgEFBQcDATAMBgNVHRMBAf8EAjAAMDAGA1UdEQQpMCeC
    DWhwLnNhdGthcy5sYWKHBH8AAAGHEAAAAAAAAAAAAAAAAAAAAAEwCgYIKoZIzj0E
    AwMDaQAwZgIxAL1KWtjKCht9mzMasG+sbcBDIiW1Sq1KwQ+18S6Bz82mrZ8r2Jcp
    SH+uVDTGO4aEYgIxAO3wrNee7rCiuUTOBWOaemLBDIhwwbx67hSdEuL22JhYy+EL
    eSYRWRIUBHyp3HzoWA==
    -----END CERTIFICATE-----
  certificate_fingerprint: 9c20b1988ad18a897f21bc353759287b5c44728a0ced2af0335d3730609213bd
  driver: lxc | qemu
  driver_version: 4.0.5 | 5.2.0
  firewall: nftables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    netnsid_getifaddrs: "true"
    seccomp_listener: "true"
    seccomp_listener_continue: "false"
    shiftfs: "false"
    uevent_injection: "true"
    unpriv_fscaps: "true"
  kernel_version: 5.3.18-57-default
  lxc_features:
    cgroup2: "true"
    devpts_fd: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "true"
  os_name: openSUSE Leap
  os_version: "15.3"
  project: default
  server: lxd
  server_clustered: false
  server_name: hp.satkas.lab
  server_pid: 1719
  server_version: "4.13"
  storage: dir
  storage_version: "1"

I also installed LXD from the snap and still cannot enter the instance (lxc exec Ubuntu bash). The same problem. Where should I look for the cause?

Any ideas @brauner? Thanks

I would need to see the trace log of such a container right after an lxc exec attempt. So you need to switch the daemon into debug and verbose mode by passing --debug --verbose to LXD at startup.
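For the snap package, the same effect can be had without touching the service file; a sketch (daemon.debug is a documented snap option for LXD, but the exact option names are worth verifying against your version):

```shell
# Enable debug logging on the snap-packaged LXD daemon, then reload it.
sudo snap set lxd daemon.debug=true
sudo systemctl reload snap.lxd.daemon

# In one terminal, stream the daemon's log messages...
lxc monitor --type=logging --pretty
# ...and in another, reproduce the failure:
# lxc exec Ubuntu bash
```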

The LXC version 4.0.5 is also outdated, and I think we had a bug in there preventing us from attaching, so if you could upgrade, that would probably fix the issue for you. 4.0.9 most likely will fix it.
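The liblxc in use is what shows up as driver_version in the lxc info output above (4.0.5 here). A quick shell check against the suggested 4.0.9, using a hypothetical version_ge helper built on GNU sort -V:

```shell
# version_ge A B: succeeds when version A >= version B (relies on GNU sort -V).
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

installed="4.0.5"   # driver_version reported by lxc info above
if version_ge "$installed" "4.0.9"; then
    echo "liblxc is new enough"
else
    echo "liblxc needs an upgrade"   # prints this for 4.0.5
fi
```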

I entered via
lxc console Ubuntu
Beforehand I edited the file
sudo vi /var/snap/lxd/common/lxd/storage-pools/default/containers/Ubuntu/rootfs/etc/shadow
and removed the !
I provide logs of the entire process below.
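For context on that shadow edit: a leading ! (or *) in the second, colon-separated field of an /etc/shadow entry marks the password as locked, which is why console login as root fails until it is removed or a real password is set. A minimal illustration with a made-up entry:

```shell
# The second field of a shadow entry is the password hash;
# a leading "!" or "*" means the password is locked.
entry='root:!:18884:0:99999:7:::'   # hypothetical locked entry
hash=$(printf '%s' "$entry" | cut -d: -f2)
case "$hash" in
    '!'*|'*'*) echo "password locked" ;;
    *)         echo "password usable" ;;
esac
# prints: password locked
```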

tk@hp:~> lxc console Ubuntu
To detach from the console, press: <ctrl>+a q

[** ] A start job is running for Wait for…e Configured (1min 26s / no limit)
[FAILED] Failed to start Wait for Network to be Configured.
See 'systemctl status systemd-networkd-wait-online.service' for details.
Starting Initial cloud-init job (metadata service crawler)…
[ 2229.720856] cloud-init[170]: Cloud-init v. 21.1-19-gbad84ad4-0ubuntu1~20.04.2 running 'init' at Mon, 17 May 2021 09:26:11 +0000. Up 121.93 seconds.
[ 2229.721098] cloud-init[170]: ci-info: ++++++++++++++++++++++++++++++++++++Net device info++++++++++++++++++++++++++++++++++++
[ 2229.721207] cloud-init[170]: ci-info: +--------+------+-----------------------------+-----------+-------+-------------------+
[ 2229.721301] cloud-init[170]: ci-info: | Device |  Up  |           Address           |    Mask   | Scope |     Hw-Address    |
[ 2229.721376] cloud-init[170]: ci-info: +--------+------+-----------------------------+-----------+-------+-------------------+
[ 2229.721438] cloud-init[170]: ci-info: |  eth0  | True | fe80::216:3eff:fea7:1f21/64 |     .     |  link | 00:16:3e:a7:1f:21 |
[ 2229.721499] cloud-init[170]: ci-info: |   lo   | True |          127.0.0.1          | 255.0.0.0 |  host |         .         |
[ 2229.721572] cloud-init[170]: ci-info: |   lo   | True |           ::1/128           |     .     |  host |         .         |
[ 2229.721650] cloud-init[170]: ci-info: +--------+------+-----------------------------+-----------+-------+-------------------+
[ 2229.721714] cloud-init[170]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
[ 2229.721773] cloud-init[170]: ci-info: +-------+-------------+---------+-----------+-------+
[ 2229.721842] cloud-init[170]: ci-info: | Route | Destination | Gateway | Interface | Flags |
[ 2229.721912] cloud-init[170]: ci-info: +-------+-------------+---------+-----------+-------+
[ 2229.721983] cloud-init[170]: ci-info: |   0   |  fe80::/64  |    ::   |    eth0   |   U   |
[ 2229.722044] cloud-init[170]: ci-info: |   2   |    local    |    ::   |    eth0   |   U   |
[ 2229.722101] cloud-init[170]: ci-info: |   3   |   ff00::/8  |    ::   |    eth0   |   U   |
[ 2229.722171] cloud-init[170]: ci-info: +-------+-------------+---------+-----------+-------+
[ OK ] Finished Initial cloud-init job (metadata service crawler).
[ OK ] Reached target Cloud-config availability.
[ OK ] Reached target Network is Online.
[ OK ] Reached target System Initialization.
[ OK ] Started Daily apt download activities.
[ OK ] Started Daily apt upgrade and clean activities.
[ OK ] Started Periodic ext4 Online Metadata Check for All Filesystems.
[ OK ] Started Refresh fwupd metadata regularly.
[ OK ] Started Daily rotation of log files.
[ OK ] Started Daily man-db regeneration.
[ OK ] Started Message of the Day.
[ OK ] Started Daily Cleanup of Temporary Directories.
[ OK ] Reached target Paths.
[ OK ] Reached target Timers.
[ OK ] Listening on Unix socket for apport crash forwarding.
[ OK ] Listening on D-Bus System Message Bus Socket.
[ OK ] Listening on Open-iSCSI iscsid Socket.
[ OK ] Listening on Socket unix for snap application lxd.daemon.
Starting Socket activation for snappy daemon.
[ OK ] Listening on UUID daemon activation socket.
[ OK ] Reached target Remote File Systems (Pre).
[ OK ] Reached target Remote File Systems.
Starting Availability of block devices…
[ OK ] Listening on Socket activation for snappy daemon.
[ OK ] Reached target Sockets.
[ OK ] Reached target Basic System.
Starting Accounts Service…
Starting LSB: automatic crash report generation…
Starting Deferred execution scheduler…
[ OK ] Started Regular background program processing daemon.
[ OK ] Started D-Bus System Message Bus.
[ OK ] Started Save initial kernel messages after boot.
Starting Remove Stale Online ext4 Metadata Check Snapshots…
Starting Dispatcher daemon for systemd-networkd…
Starting System Logging Service…
Starting Service for snap application lxd.activate…
Starting Snap Daemon…
Starting OpenBSD Secure Shell server…
Starting Login Service…
Starting Permit User Sessions…
[ OK ] Finished Availability of block devices.
[ OK ] Started Deferred execution scheduler.
[ OK ] Started System Logging Service.
[ OK ] Finished Permit User Sessions.
Starting Hold until boot process finishes up…
Starting Terminate Plymouth Boot Screen…
[ OK ] Finished Hold until boot process finishes up.
[ OK ] Started Console Getty.
[ OK ] Created slice system-getty.slice.
[ OK ] Reached target Login Prompts.
[ OK ] Finished Terminate Plymouth Boot Screen.
[ OK ] Started LSB: automatic crash report generation.
[ OK ] Started OpenBSD Secure Shell server.
[ OK ] Finished Remove Stale Online ext4 Metadata Check Snapshots.
[ OK ] Started Login Service.
[ OK ] Started Unattended Upgrades Shutdown.
Starting Authorization Manager…
[ OK ] Started Dispatcher daemon for systemd-networkd.
[ OK ] Started Authorization Manager.
[ OK ] Started Accounts Service.
[ OK ] Started Snap Daemon.
Starting Wait until snapd is fully seeded…
[ OK ] Finished Wait until snapd is fully seeded.
Starting Apply the settings specified in cloud-config…
[ OK ] Finished Service for snap application lxd.activate.
[ OK ] Reached target Multi-User System.
[ OK ] Reached target Graphical Interface.
Starting Update UTMP about System Runlevel Changes…
[ OK ] Finished Update UTMP about System Runlevel Changes.
[ 2232.561778] cloud-init[334]: Cloud-init v. 21.1-19-gbad84ad4-0ubuntu1~20.04.2 running 'modules:config' at Mon, 17 May 2021 09:26:14 +0000. Up 124.86 seconds.
[ OK ] Finished Apply the settings specified in cloud-config.
Starting Execute cloud user/final scripts…
[ 2233.122982] cloud-init[339]: Cloud-init v. 21.1-19-gbad84ad4-0ubuntu1~20.04.2 running 'modules:final' at Mon, 17 May 2021 09:26:14 +0000. Up 125.42 seconds.
[ 2233.123183] cloud-init[339]: Cloud-init v. 21.1-19-gbad84ad4-0ubuntu1~20.04.2 finished at Mon, 17 May 2021 09:26:14 +0000. Datasource DataSourceNoCloud [seed=/var/lib/cloud/seed/nocloud-net][dsmode=net]. Up 125.52 seconds
[ OK ] Finished Execute cloud user/final scripts.
[ OK ] Reached target Cloud-init target.

Ubuntu 20.04.2 LTS Ubuntu console

Ubuntu login: ubuntu
Welcome to Ubuntu 20.04.2 LTS (GNU/Linux 5.3.18-57-default x86_64)

System information as of Mon May 17 09:27:36 UTC 2021

System load: 0.54 Swap usage: 0% Users logged in: 0
Usage of /home: unknown Temperature: 84.0 C
Memory usage: 1% Processes: 24

1 update can be applied immediately.
To see these additional updates run: apt list --upgradable

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

I would need to see the trace log of such a container right after an lxc exec attempt. So you need to switch the daemon into debug and verbose mode by passing --debug --verbose to LXD at startup.

tk@hp:~> lxc exec --debug --verbose Ubuntu bash
DBUG[05-17|11:34:55] Connecting to a local LXD over a Unix socket
DBUG[05-17|11:34:55] Sending request to LXD method=GET url=http://unix.socket/1.0 etag=
DBUG[05-17|11:34:55] Got response struct from LXD
DBUG[05-17|11:34:55]
{
  "config": {},
  "api_extensions": [
    "storage_zfs_remove_snapshots",
    "container_host_shutdown_timeout",
    "container_stop_priority",
    "container_syscall_filtering",
    "auth_pki",
    "container_last_used_at",
    "etag",
    "patch",
    "usb_devices",
    "https_allowed_credentials",
    "image_compression_algorithm",
    "directory_manipulation",
    "container_cpu_time",
    "storage_zfs_use_refquota",
    "storage_lvm_mount_options",
    "network",
    "profile_usedby",
    "container_push",
    "container_exec_recording",
    "certificate_update",
    "container_exec_signal_handling",
    "gpu_devices",
    "container_image_properties",
    "migration_progress",
    "id_map",
    "network_firewall_filtering",
    "network_routes",
    "storage",
    "file_delete",
    "file_append",
    "network_dhcp_expiry",
    "storage_lvm_vg_rename",
    "storage_lvm_thinpool_rename",
    "network_vlan",
    "image_create_aliases",
    "container_stateless_copy",
    "container_only_migration",
    "storage_zfs_clone_copy",
    "unix_device_rename",
    "storage_lvm_use_thinpool",
    "storage_rsync_bwlimit",
    "network_vxlan_interface",
    "storage_btrfs_mount_options",
    "entity_description",
    "image_force_refresh",
    "storage_lvm_lv_resizing",
    "id_map_base",
    "file_symlinks",
    "container_push_target",
    "network_vlan_physical",
    "storage_images_delete",
    "container_edit_metadata",
    "container_snapshot_stateful_migration",
    "storage_driver_ceph",
    "storage_ceph_user_name",
    "resource_limits",
    "storage_volatile_initial_source",
    "storage_ceph_force_osd_reuse",
    "storage_block_filesystem_btrfs",
    "resources",
    "kernel_limits",
    "storage_api_volume_rename",
    "macaroon_authentication",
    "network_sriov",
    "console",
    "restrict_devlxd",
    "migration_pre_copy",
    "infiniband",
    "maas_network",
    "devlxd_events",
    "proxy",
    "network_dhcp_gateway",
    "file_get_symlink",
    "network_leases",
    "unix_device_hotplug",
    "storage_api_local_volume_handling",
    "operation_description",
    "clustering",
    "event_lifecycle",
    "storage_api_remote_volume_handling",
    "nvidia_runtime",
    "container_mount_propagation",
    "container_backup",
    "devlxd_images",
    "container_local_cross_pool_handling",
    "proxy_unix",
    "proxy_udp",
    "clustering_join",
    "proxy_tcp_udp_multi_port_handling",
    "network_state",
    "proxy_unix_dac_properties",
    "container_protection_delete",
    "unix_priv_drop",
    "pprof_http",
    "proxy_haproxy_protocol",
    "network_hwaddr",
    "proxy_nat",
    "network_nat_order",
    "container_full",
    "candid_authentication",
    "backup_compression",
    "candid_config",
    "nvidia_runtime_config",
    "storage_api_volume_snapshots",
    "storage_unmapped",
    "projects",
    "candid_config_key",
    "network_vxlan_ttl",
    "container_incremental_copy",
    "usb_optional_vendorid",
    "snapshot_scheduling",
    "snapshot_schedule_aliases",
    "container_copy_project",
    "clustering_server_address",
    "clustering_image_replication",
    "container_protection_shift",
    "snapshot_expiry",
    "container_backup_override_pool",
    "snapshot_expiry_creation",
    "network_leases_location",
    "resources_cpu_socket",
    "resources_gpu",
    "resources_numa",
    "kernel_features",
    "id_map_current",
    "event_location",
    "storage_api_remote_volume_snapshots",
    "network_nat_address",
    "container_nic_routes",
    "rbac",
    "cluster_internal_copy",
    "seccomp_notify",
    "lxc_features",
    "container_nic_ipvlan",
    "network_vlan_sriov",
    "storage_cephfs",
    "container_nic_ipfilter",
    "resources_v2",
    "container_exec_user_group_cwd",
    "container_syscall_intercept",
    "container_disk_shift",
    "storage_shifted",
    "resources_infiniband",
    "daemon_storage",
    "instances",
    "image_types",
    "resources_disk_sata",
    "clustering_roles",
    "images_expiry",
    "resources_network_firmware",
    "backup_compression_algorithm",
    "ceph_data_pool_name",
    "container_syscall_intercept_mount",
    "compression_squashfs",
    "container_raw_mount",
    "container_nic_routed",
    "container_syscall_intercept_mount_fuse",
    "container_disk_ceph",
    "virtual-machines",
    "image_profiles",
    "clustering_architecture",
    "resources_disk_id",
    "storage_lvm_stripes",
    "vm_boot_priority",
    "unix_hotplug_devices",
    "api_filtering",
    "instance_nic_network",
    "clustering_sizing",
    "firewall_driver",
    "projects_limits",
    "container_syscall_intercept_hugetlbfs",
    "limits_hugepages",
    "container_nic_routed_gateway",
    "projects_restrictions",
    "custom_volume_snapshot_expiry",
    "volume_snapshot_scheduling",
    "trust_ca_certificates",
    "snapshot_disk_usage",
    "clustering_edit_roles",
    "container_nic_routed_host_address",
    "container_nic_ipvlan_gateway",
    "resources_usb_pci",
    "resources_cpu_threads_numa",
    "resources_cpu_core_die",
    "api_os",
    "container_nic_routed_host_table",
    "container_nic_ipvlan_host_table",
    "container_nic_ipvlan_mode",
    "resources_system",
    "images_push_relay",
    "network_dns_search",
    "container_nic_routed_limits",
    "instance_nic_bridged_vlan",
    "network_state_bond_bridge",
    "usedby_consistency",
    "custom_block_volumes",
    "clustering_failure_domains",
    "resources_gpu_mdev",
    "console_vga_type",
    "projects_limits_disk",
    "network_type_macvlan",
    "network_type_sriov",
    "container_syscall_intercept_bpf_devices",
    "network_type_ovn",
    "projects_networks",
    "projects_networks_restricted_uplinks",
    "custom_volume_backup",
    "backup_override_name",
    "storage_rsync_compression",
    "network_type_physical",
    "network_ovn_external_subnets",
    "network_ovn_nat",
    "network_ovn_external_routes_remove",
    "tpm_device_type",
    "storage_zfs_clone_copy_rebase",
    "gpu_mdev",
    "resources_pci_iommu",
    "resources_network_usb",
    "resources_disk_address",
    "network_physical_ovn_ingress_mode",
    "network_ovn_dhcp",
    "network_physical_routes_anycast",
    "projects_limits_instances",
    "network_state_vlan",
    "instance_nic_bridged_port_isolation",
    "instance_bulk_state_change",
    "network_gvrp",
    "instance_pool_move",
    "gpu_sriov",
    "pci_device_type",
    "storage_volume_state",
    "network_acl",
    "migration_stateful",
    "disk_state_quota",
    "storage_ceph_features",
    "projects_compression",
    "projects_images_remote_cache_expiry",
    "certificate_project",
    "network_ovn_acl",
    "projects_images_auto_update",
    "projects_restricted_cluster_target",
    "images_default_architecture",
    "network_ovn_acl_defaults",
    "gpu_mig",
    "project_usage",
    "network_bridge_acl",
    "warnings",
    "projects_restricted_backups_and_snapshots",
    "clustering_join_token",
    "clustering_description"
  ],
  "api_status": "stable",
  "api_version": "1.0",
  "auth": "trusted",
  "public": false,
  "auth_methods": [
    "tls"
  ],
  "environment": {
    "addresses": [],
    "architectures": [
      "x86_64",
      "i686"
    ],
    "certificate": "-----BEGIN CERTIFICATE-----\nMIICGDCCAZ6gAwIBAgIQCpL2zoUzkSelxBWhRaOykTAKBggqhkjOPQQDAzA7MRww\nGgYDVQQKExNsaW51eGNvbnRhaW5lcnMub3JnMRswGQYDVQQDDBJyb290QGhwLnNh\ndGthcy5sYWIwHhcNMjEwNTE3MDg1MjA2WhcNMzEwNTE1MDg1MjA2WjA7MRwwGgYD\nVQQKExNsaW51eGNvbnRhaW5lcnMub3JnMRswGQYDVQQDDBJyb290QGhwLnNhdGth\ncy5sYWIwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAASOLKmy0WF4PbardVg+q6kFUcQ3\nfj10pUhKFqT08WoZbb5Lb9QOatZ+BIOUZQOTZmBA/2EkDCI0jJbiju3UDtHPlKRh\nJ1nMt3tgDk6LkX5Fe0GRvs2LCMnBHrP7V2eZtQGjZzBlMA4GA1UdDwEB/wQEAwIF\noDATBgNVHSUEDDAKBggrBgEFBQcDATAMBgNVHRMBAf8EAjAAMDAGA1UdEQQpMCeC\nDWhwLnNhdGthcy5sYWKHBH8AAAGHEAAAAAAAAAAAAAAAAAAAAAEwCgYIKoZIzj0E\nAwMDaAAwZQIwMmSUFzOldtDqsn47zIvUhoLp7z6XD/5zdGXQJF7JwODzPIOiqEbD\nZBG12C43/TkHAjEAmeAohikJZVUSjtiMGvewipb0nPYQ/oajxgivLijZY4Ch1Fus\nvdSA4EMuldAR3zxj\n-----END CERTIFICATE-----\n",
    "certificate_fingerprint": "373a1b9112d74b8a61d5f77ead237cbe15277122ad56c133505e9d75c21d8c9b",
    "driver": "lxc | qemu",
    "driver_version": "4.0.9 | 5.2.0",
    "firewall": "nftables",
    "kernel": "Linux",
    "kernel_architecture": "x86_64",
    "kernel_features": {
      "netnsid_getifaddrs": "true",
      "seccomp_listener": "true",
      "seccomp_listener_continue": "false",
      "shiftfs": "false",
      "uevent_injection": "true",
      "unpriv_fscaps": "true"
    },
    "kernel_version": "5.3.18-57-default",
    "lxc_features": {
      "cgroup2": "true",
      "devpts_fd": "true",
      "mount_injection_file": "true",
      "network_gateway_device_route": "true",
      "network_ipvlan": "true",
      "network_l2proxy": "true",
      "network_phys_macvlan_mtu": "true",
      "network_veth_router": "true",
      "pidfd": "true",
      "seccomp_allow_deny_syntax": "true",
      "seccomp_notify": "true",
      "seccomp_proxy_send_notify_fd": "true"
    },
    "os_name": "openSUSE Leap",
    "os_version": "15.3",
    "project": "default",
    "server": "lxd",
    "server_clustered": false,
    "server_name": "hp.satkas.lab",
    "server_pid": 3464,
    "server_version": "4.14",
    "storage": "btrfs",
    "storage_version": "4.15.1"
  }
}
DBUG[05-17|11:34:55] Connected to the websocket: ws://unix.socket/1.0/events
DBUG[05-17|11:34:55] Sending request to LXD method=POST url=http://unix.socket/1.0/instances/Ubuntu/exec etag=
DBUG[05-17|11:34:55]
{
  "command": [
    "bash"
  ],
  "wait-for-websocket": true,
  "interactive": true,
  "environment": {
    "TERM": "xterm-256color"
  },
  "width": 198,
  "height": 43,
  "record-output": false,
  "user": 0,
  "group": 0,
  "cwd": ""
}
DBUG[05-17|11:34:55] Got operation from LXD
DBUG[05-17|11:34:55]
{
  "id": "23102a50-0700-45c0-914a-ad50cd2d1c8d",
  "class": "websocket",
  "description": "Executing command",
  "created_at": "2021-05-17T11:34:55.625987441+02:00",
  "updated_at": "2021-05-17T11:34:55.625987441+02:00",
  "status": "Running",
  "status_code": 103,
  "resources": {
    "containers": [
      "/1.0/containers/Ubuntu"
    ],
    "instances": [
      "/1.0/instances/Ubuntu"
    ]
  },
  "metadata": {
    "command": [
      "bash"
    ],
    "environment": {
      "HOME": "/root",
      "LANG": "C.UTF-8",
      "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin",
      "TERM": "xterm-256color",
      "USER": "root"
    },
    "fds": {
      "0": "955249d1c9731da73e688f14bd8ce0ea72b2966e2be35f92a5390ba53a6890a2",
      "control": "861041cc2a7e5627bd8d680f02bd2149a0b2387164ee524bf7518a3954d3478c"
    },
    "interactive": true
  },
  "may_cancel": false,
  "err": "",
  "location": "none"
}
DBUG[05-17|11:34:55] Connected to the websocket: ws://unix.socket/1.0/operations/23102a50-0700-45c0-914a-ad50cd2d1c8d/websocket?secret=861041cc2a7e5627bd8d680f02bd2149a0b2387164ee524bf7518a3954d3478c
DBUG[05-17|11:34:55] Connected to the websocket: ws://unix.socket/1.0/operations/23102a50-0700-45c0-914a-ad50cd2d1c8d/websocket?secret=955249d1c9731da73e688f14bd8ce0ea72b2966e2be35f92a5390ba53a6890a2
DBUG[05-17|11:34:55] Sending request to LXD method=GET url=http://unix.socket/1.0/operations/23102a50-0700-45c0-914a-ad50cd2d1c8d etag=
DBUG[05-17|11:34:55] Got response struct from LXD
DBUG[05-17|11:34:55]
{
  "id": "23102a50-0700-45c0-914a-ad50cd2d1c8d",
  "class": "websocket",
  "description": "Executing command",
  "created_at": "2021-05-17T11:34:55.625987441+02:00",
  "updated_at": "2021-05-17T11:34:55.625987441+02:00",
  "status": "Running",
  "status_code": 103,
  "resources": {
    "containers": [
      "/1.0/containers/Ubuntu"
    ],
    "instances": [
      "/1.0/instances/Ubuntu"
    ]
  },
  "metadata": {
    "command": [
      "bash"
    ],
    "environment": {
      "HOME": "/root",
      "LANG": "C.UTF-8",
      "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin",
      "TERM": "xterm-256color",
      "USER": "root"
    },
    "fds": {
      "0": "955249d1c9731da73e688f14bd8ce0ea72b2966e2be35f92a5390ba53a6890a2",
      "control": "861041cc2a7e5627bd8d680f02bd2149a0b2387164ee524bf7518a3954d3478c"
    },
    "interactive": true
  },
  "may_cancel": false,
  "err": "",
  "location": "none"
}
Error: Failed to retrieve PID of executing child process

tk@hp:~> lxc version
Client version: 4.14
Server version: 4.14
tk@hp:~> lxd --version
4.14

Unfortunately, my instance did not get an IP address automatically.

ubuntu@Ubuntu:~$ ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether 00:16:3e:a7:1f:21 brd ff:ff:ff:ff:ff:ff link-netnsid 0
        inet6 fe80::216:3eff:fea7:1f21/64 scope link 
           valid_lft forever preferred_lft forever

My network on the LXD host:

6: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:1f:18:c7 brd ff:ff:ff:ff:ff:ff
    inet 10.49.31.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe1f:18c7/64 scope link 
       valid_lft forever preferred_lft forever
10: veth62e4c699@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
    link/ether 06:d1:71:8c:52:28 brd ff:ff:ff:ff:ff:ff link-netnsid 0

I noticed that Multipass generated firewall entries for me, but LXD did not.

tk@hp:~> sudo iptables -S
[sudo] password for root:
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-A INPUT -i mpqemubr0 -p tcp -m tcp --dport 53 -m comment --comment "generated for Multipass network mpqemubr0" -j ACCEPT
-A INPUT -i mpqemubr0 -p udp -m udp --dport 53 -m comment --comment "generated for Multipass network mpqemubr0" -j ACCEPT
-A INPUT -i mpqemubr0 -p udp -m udp --dport 67 -m comment --comment "generated for Multipass network mpqemubr0" -j ACCEPT
-A FORWARD -i mpqemubr0 -o mpqemubr0 -m comment --comment "generated for Multipass network mpqemubr0" -j ACCEPT
-A FORWARD -s 10.228.221.0/24 -i mpqemubr0 -m comment --comment "generated for Multipass network mpqemubr0" -j ACCEPT
-A FORWARD -d 10.228.221.0/24 -o mpqemubr0 -m conntrack --ctstate RELATED,ESTABLISHED -m comment --comment "generated for Multipass network mpqemubr0" -j ACCEPT
-A FORWARD -i mpqemubr0 -m comment --comment "generated for Multipass network mpqemubr0" -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -o mpqemubr0 -m comment --comment "generated for Multipass network mpqemubr0" -j REJECT --reject-with icmp-port-unreachable
-A OUTPUT -o mpqemubr0 -p tcp -m tcp --sport 53 -m comment --comment "generated for Multipass network mpqemubr0" -j ACCEPT
-A OUTPUT -o mpqemubr0 -p udp -m udp --sport 53 -m comment --comment "generated for Multipass network mpqemubr0" -j ACCEPT
-A OUTPUT -o mpqemubr0 -p udp -m udp --sport 67 -m comment --comment "generated for Multipass network mpqemubr0" -j ACCEPT
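With firewall: nftables reported by lxc info, LXD 4.x normally installs equivalent rules for lxdbr0 itself, but under the nftables backend they show up in nft list ruleset rather than in iptables -S. If they really are missing, hand-written equivalents of the Multipass rules above, adapted to lxdbr0 and its 10.49.31.0/24 subnet from the earlier output, would look roughly like this sketch (assumptions: lxdbr0 is the bridge and LXD's dnsmasq serves DHCP/DNS on it):

```shell
# Allow containers on lxdbr0 to reach LXD's dnsmasq for DHCP and DNS,
# and let their traffic be forwarded and NATed out (sketch only).
sudo iptables -A INPUT -i lxdbr0 -p udp --dport 67 -j ACCEPT   # DHCP
sudo iptables -A INPUT -i lxdbr0 -p udp --dport 53 -j ACCEPT   # DNS
sudo iptables -A INPUT -i lxdbr0 -p tcp --dport 53 -j ACCEPT   # DNS over TCP
sudo iptables -A FORWARD -i lxdbr0 -s 10.49.31.0/24 -j ACCEPT
sudo iptables -A FORWARD -o lxdbr0 -d 10.49.31.0/24 -j ACCEPT
sudo iptables -t nat -A POSTROUTING -s 10.49.31.0/24 ! -d 10.49.31.0/24 -j MASQUERADE
```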