So, I understand the scenario I’m going to describe is a bit convoluted, but for reasons related to my CI/CD infrastructure, I’m currently in a situation where I need to build and run podman images inside LXD containers.
I’m not trying to use podman in rootless mode, so every podman command is run with sudo.
However, when I launch a fresh LXD container from the images:centos/8-Stream base image, install podman, and try pulling the quay.io/centos/centos:stream8 image, the pull fails with a pretty generic error message.
I am aware of this fix/workaround:
- Editing the file at /etc/containers/storage.conf and uncommenting the mount_program line under the [storage.options.overlay] section:
[storage.options.overlay]
mount_program = "/usr/bin/fuse-overlayfs"
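For reference, this is the edit as a non-interactive command, demonstrated here on a throwaway copy of the file (the sample content and the /tmp path are mine; inside the container the same sed would be run against /etc/containers/storage.conf after installing fuse-overlayfs):

```shell
# Demonstrate the uncomment edit on a throwaway copy of storage.conf.
# The two sample lines below stand in for the relevant part of the stock file.
cat > /tmp/storage.conf <<'EOF'
[storage.options.overlay]
# mount_program = "/usr/bin/fuse-overlayfs"
EOF

# Uncomment the mount_program line (the same sed works on the real file).
sed -i 's|^#[[:space:]]*mount_program|mount_program|' /tmp/storage.conf

grep '^mount_program' /tmp/storage.conf
# -> mount_program = "/usr/bin/fuse-overlayfs"
```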
But at this point, I mainly want to understand why this is happening, since I’m not trying to use podman in rootless mode, which is the main situation I would expect to cause this kind of issue.
On top of that, I am able to pull and work just fine with some newer images, such as quay.io/centos/centos:stream9 and registry.fedoraproject.org/fedora:35.
I understand from this post that podman gains rootless overlay support from kernel 5.13 onwards, and since I still have to support some legacy servers running CentOS Stream 8, I’m effectively locked to kernel 4.18. But even then, shouldn’t the kernel 5.13 requirement only matter for podman in rootless mode? Why am I hitting this problem even when running it with sudo? Can anyone shed some light on the situation for me?
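To separate podman itself from the kernel question, one probe that can be run inside the container is attempting a native overlayfs mount by hand, roughly what podman’s overlay driver does at setup (this is my own sketch, not podman’s exact check; it needs root and simply reports whether the mount succeeds):

```shell
# Probe whether a plain overlayfs mount works in this environment.
# A rough stand-in for podman's overlay storage-driver setup check.
d=$(mktemp -d)
mkdir -p "$d/lower" "$d/upper" "$d/work" "$d/merged"
if mount -t overlay overlay \
    -o "lowerdir=$d/lower,upperdir=$d/upper,workdir=$d/work" \
    "$d/merged" 2>/dev/null; then
  echo "native overlay: OK"
  umount "$d/merged"
else
  echo "native overlay: not available"
fi
rm -rf "$d"
```

If this fails inside the LXD container but succeeds on the host, the problem sits below podman, in what the kernel allows the container to mount.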
Also, please let me know if any more info is needed.
I really appreciate any help!
Steps to reproduce
- Boot up a new machine or VM running CentOS Stream 8
- Launch a fresh LXD container using the images:centos/8-Stream base image:
lxc launch images:centos/8-Stream testcontainer
- From inside the LXD container, simply install podman and try pulling the quay.io/centos/centos:stream8 image:
lxc exec testcontainer -- bash
dnf update -y && dnf install podman -y
podman pull quay.io/centos/centos:stream8
- You should get output like this:
Trying to pull quay.io/centos/centos:stream8...
Getting image source signatures
Copying blob a0b8f3931ffa skipped: already exists
Copying blob 04f0eb705bff done
Copying blob 4a7e61ebcfec done
Copying blob 1ac891d08dc2 done
Error: writing blob: adding layer with blob "sha256:04f0eb705bffc1db22f04bd42987ebb9d5e40c08c0253d0c2a56881c75bc6af8": Error processing tar file(exit status 1): operation not permitted
Useful Information
- Distribution: CentOS
- Distribution version: Stream 8
- Kernel version: 4.18.0-383.el8.x86_64
- LXD version: 5.1-4ae3604
- Host Partition File System: ext4
Useful outputs
lxc info
config: {}
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- macaroon_authentication
- network_sriov
- console
- restrict_devlxd
- migration_pre_copy
- infiniband
- maas_network
- devlxd_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- devlxd_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- candid_authentication
- backup_compression
- candid_config
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- candid_config_key
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- snapshot_schedule_aliases
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- rbac
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- container_nic_routed_host_table
- container_nic_ipvlan_host_table
- container_nic_ipvlan_mode
- resources_system
- images_push_relay
- network_dns_search
- container_nic_routed_limits
- instance_nic_bridged_vlan
- network_state_bond_bridge
- usedby_consistency
- custom_block_volumes
- clustering_failure_domains
- resources_gpu_mdev
- console_vga_type
- projects_limits_disk
- network_type_macvlan
- network_type_sriov
- container_syscall_intercept_bpf_devices
- network_type_ovn
- projects_networks
- projects_networks_restricted_uplinks
- custom_volume_backup
- backup_override_name
- storage_rsync_compression
- network_type_physical
- network_ovn_external_subnets
- network_ovn_nat
- network_ovn_external_routes_remove
- tpm_device_type
- storage_zfs_clone_copy_rebase
- gpu_mdev
- resources_pci_iommu
- resources_network_usb
- resources_disk_address
- network_physical_ovn_ingress_mode
- network_ovn_dhcp
- network_physical_routes_anycast
- projects_limits_instances
- network_state_vlan
- instance_nic_bridged_port_isolation
- instance_bulk_state_change
- network_gvrp
- instance_pool_move
- gpu_sriov
- pci_device_type
- storage_volume_state
- network_acl
- migration_stateful
- disk_state_quota
- storage_ceph_features
- projects_compression
- projects_images_remote_cache_expiry
- certificate_project
- network_ovn_acl
- projects_images_auto_update
- projects_restricted_cluster_target
- images_default_architecture
- network_ovn_acl_defaults
- gpu_mig
- project_usage
- network_bridge_acl
- warnings
- projects_restricted_backups_and_snapshots
- clustering_join_token
- clustering_description
- server_trusted_proxy
- clustering_update_cert
- storage_api_project
- server_instance_driver_operational
- server_supported_storage_drivers
- event_lifecycle_requestor_address
- resources_gpu_usb
- clustering_evacuation
- network_ovn_nat_address
- network_bgp
- network_forward
- custom_volume_refresh
- network_counters_errors_dropped
- metrics
- image_source_project
- clustering_config
- network_peer
- linux_sysctl
- network_dns
- ovn_nic_acceleration
- certificate_self_renewal
- instance_project_move
- storage_volume_project_move
- cloud_init
- network_dns_nat
- database_leader
- instance_all_projects
- clustering_groups
- ceph_rbd_du
- instance_get_full
- qemu_metrics
- gpu_mig_uuid
- event_project
- clustering_evacuation_live
- instance_allow_inconsistent_copy
- network_state_ovn
- storage_volume_api_filtering
- image_restrictions
- storage_zfs_export
- network_dns_records
- storage_zfs_reserve_space
- network_acl_log
- storage_zfs_blocksize
- metrics_cpu_seconds
- instance_snapshot_never
- certificate_token
- instance_nic_routed_neighbor_probe
- event_hub
- agent_nic_config
- projects_restricted_intercept
- metrics_authentication
- images_target_project
- cluster_migration_inconsistent_copy
- cluster_ovn_chassis
- container_syscall_intercept_sched_setscheduler
- storage_lvm_thinpool_metadata_size
- storage_volume_state_total
- instance_file_head
- instances_nic_host_name
- image_copy_profile
- container_syscall_intercept_sysinfo
- clustering_evacuation_mode
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
environment:
addresses: []
architectures:
- x86_64
- i686
certificate: |
-----BEGIN CERTIFICATE-----
MIICGzCCAaGgAwIBAgIQd+pUOTDr2A5+YEbHyKR4xjAKBggqhkjOPQQDAzA8MRww
GgYDVQQKExNsaW51eGNvbnRhaW5lcnMub3JnMRwwGgYDVQQDDBNyb290QHBvZG1h
bi10ZXN0aW5nMB4XDTIyMDUxMTA5MTUyNFoXDTMyMDUwODA5MTUyNFowPDEcMBoG
A1UEChMTbGludXhjb250YWluZXJzLm9yZzEcMBoGA1UEAwwTcm9vdEBwb2RtYW4t
dGVzdGluZzB2MBAGByqGSM49AgEGBSuBBAAiA2IABFqXQbRJx2Lmnnbahbyy+w/f
dU2j43Xvj8z2VLm9n+sNKjaxzsmtarYYtcSERh2Yxr7CJ08QtiT+E/3WmMlnMSnV
5mfU+x4uXinXcoCxEDlcEsV258IfsIjTPiov6zGaYaNoMGYwDgYDVR0PAQH/BAQD
AgWgMBMGA1UdJQQMMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwMQYDVR0RBCow
KIIOcG9kbWFuLXRlc3RpbmeHBH8AAAGHEAAAAAAAAAAAAAAAAAAAAAEwCgYIKoZI
zj0EAwMDaAAwZQIxAOrmwS7Qx/Zs0mjVkLFk/rW6K556yHePbM8o9Q0gFwZF3JWD
eaUbdnTkib/9T8iOEgIwBvpwRKOQv+USyaYEHejMCOOSo6s2VswHIuVWAgmnjRcy
C9tb9eJFSMUlGVXLk/o3
-----END CERTIFICATE-----
certificate_fingerprint: 61a2d8d6e4b2165ca9917ce25fd5239a24c94415bc359c02e49d5ed130f51906
driver: lxc
driver_version: 4.0.12
firewall: xtables
kernel: Linux
kernel_architecture: x86_64
kernel_features:
idmapped_mounts: "false"
netnsid_getifaddrs: "true"
seccomp_listener: "false"
seccomp_listener_continue: "false"
shiftfs: "false"
uevent_injection: "true"
unpriv_fscaps: "true"
kernel_version: 4.18.0-383.el8.x86_64
lxc_features:
cgroup2: "true"
core_scheduling: "true"
devpts_fd: "true"
idmapped_mounts_v2: "true"
mount_injection_file: "true"
network_gateway_device_route: "true"
network_ipvlan: "true"
network_l2proxy: "true"
network_phys_macvlan_mtu: "true"
network_veth_router: "true"
pidfd: "true"
seccomp_allow_deny_syntax: "true"
seccomp_notify: "true"
seccomp_proxy_send_notify_fd: "true"
os_name: CentOS Stream
os_version: "8"
project: default
server: lxd
server_clustered: false
server_event_mode: full-mesh
server_name: podman-testing
server_pid: 5591
server_version: "5.1"
storage: lvm
storage_version: 2.03.07(2) (2019-11-30) / 1.02.167 (2019-11-30) / 4.43.0
storage_supported_drivers:
- name: btrfs
version: 5.4.1
remote: false
- name: cephfs
version: 15.2.14
remote: true
- name: dir
version: "1"
remote: false
- name: lvm
version: 2.03.07(2) (2019-11-30) / 1.02.167 (2019-11-30) / 4.43.0
remote: false
- name: ceph
version: 15.2.14
remote: true
lxc storage ls
+---------+--------+--------------------------------------------+-------------+---------+---------+
| NAME | DRIVER | SOURCE | DESCRIPTION | USED BY | STATE |
+---------+--------+--------------------------------------------+-------------+---------+---------+
| default | lvm | /var/snap/lxd/common/lxd/disks/default.img | | 3 | CREATED |
+---------+--------+--------------------------------------------+-------------+---------+---------+
lxc storage show default
config:
lvm.thinpool_name: LXDThinPool
lvm.vg_name: default
size: 7GB
source: /var/snap/lxd/common/lxd/disks/default.img
description: ""
name: default
driver: lvm
used_by:
- /1.0/images/ed4a6eb25898b301c93dafa6db5b081d207ca2db53a1a6a6a061baa8f78e11c9
- /1.0/instances/testcontainer
- /1.0/profiles/default
status: Created
locations:
- none
dmesg
[ 5.890595] Console: switching to colour frame buffer device 128x48
[ 5.897174] virtio_gpu virtio0: [drm] fb0: virtio_gpudrmfb frame buffer device
[ 8.042443] virtio_net virtio1 eth0: renamed from enp1s0
[ 8.257117] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 9.651824] EXT4-fs (sda1): resizing filesystem from 878848 to 9983483 blocks
[ 9.755856] EXT4-fs (sda1): resized filesystem to 9983483
[ 154.925329] SELinux: Converting 347 SID table entries...
[ 154.926800] SELinux: policy capability network_peer_controls=1
[ 154.927347] SELinux: policy capability open_perms=1
[ 154.927800] SELinux: policy capability extended_socket_class=1
[ 154.928332] SELinux: policy capability always_check_network=0
[ 154.928854] SELinux: policy capability cgroup_seclabel=1
[ 154.929338] SELinux: policy capability nnp_nosuid_transition=1
[ 233.230875] SELinux: Converting 361 SID table entries...
[ 233.233687] SELinux: policy capability network_peer_controls=1
[ 233.234693] SELinux: policy capability open_perms=1
[ 233.235547] SELinux: policy capability extended_socket_class=1
[ 233.236530] SELinux: policy capability always_check_network=0
[ 233.237498] SELinux: policy capability cgroup_seclabel=1
[ 233.238421] SELinux: policy capability nnp_nosuid_transition=1
[ 249.611083] loop: module loaded
[ 249.615297] loop0: detected capacity change from 0 to 4096
[ 249.638837] squashfs: version 4.0 (2009/01/31) Phillip Lougher
[ 255.193719] loop0: detected capacity change from 0 to 4096
[ 258.022818] loop0: detected capacity change from 0 to 46845952
[ 259.519094] loop1: detected capacity change from 0 to 4096
[ 264.699867] loop1: detected capacity change from 0 to 64909312
[ 267.866600] loop2: detected capacity change from 0 to 84246528
[ 333.095362] new mount options do not match the existing superblock, will be ignored
[ 333.123335] fuse: init (API version 7.33)
[ 334.164721] NET: Registered protocol family 40
[ 335.685472] device-mapper: uevent: version 1.0.3
[ 335.687910] device-mapper: ioctl: 4.43.0-ioctl (2020-10-01) initialised: dm-devel@redhat.com
[ 379.802312] loop3: detected capacity change from 0 to 7000000000
[ 381.449243] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
[ 381.478021] IPv6: ADDRCONF(NETDEV_UP): lxdbr0: link is not ready
[ 422.183322] EXT4-fs (dm-4): mounted filesystem with ordered data mode. Opts: discard
[ 440.881989] EXT4-fs (dm-4): mounted filesystem with ordered data mode. Opts: discard
[ 441.545705] EXT4-fs (dm-4): mounted filesystem with ordered data mode. Opts: discard
[ 441.875666] EXT4-fs (dm-4): mounted filesystem with ordered data mode. Opts: discard
[ 444.458185] IPv6: ADDRCONF(NETDEV_UP): veth300231c0: link is not ready
[ 444.522649] lxdbr0: port 1(veth300231c0) entered blocking state
[ 444.523758] lxdbr0: port 1(veth300231c0) entered disabled state
[ 444.524947] device veth300231c0 entered promiscuous mode
[ 444.826585] cgroup: cgroup: disabling cgroup2 socket matching due to net_prio or net_cls activation
[ 444.847307] physG0ixXo: renamed from veth02a33f28
[ 444.853987] eth0: renamed from physG0ixXo
[ 444.859131] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 444.861664] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 444.864082] lxdbr0: port 1(veth300231c0) entered blocking state
[ 444.865964] lxdbr0: port 1(veth300231c0) entered forwarding state
[ 444.868368] IPv6: ADDRCONF(NETDEV_CHANGE): lxdbr0: link becomes ready
[ 491.336615] overlayfs: upper fs does not support xattr, falling back to index=off and metacopy=off.
[ 491.462665] overlayfs: upper fs does not support xattr, falling back to index=off and metacopy=off.
lxc info testcontainer --show-log
Name: testcontainer
Status: RUNNING
Type: container
Architecture: x86_64
PID: 6016
Created: 2022/05/11 09:16 UTC
Last Used: 2022/05/11 09:17 UTC
Resources:
Processes: 12
Disk usage:
root: 1.13GiB
CPU usage:
CPU usage (in seconds): 53
Memory usage:
Memory (current): 493.17MiB
Memory (peak): 723.67MiB
Network usage:
eth0:
Type: broadcast
State: UP
Host interface: veth300231c0
MAC address: 00:16:3e:80:67:05
MTU: 1500
Bytes received: 425.48MB
Bytes sent: 968.14kB
Packets received: 35778
Packets sent: 13318
IP addresses:
inet: 10.41.51.38/24 (global)
inet6: fe80::216:3eff:fe80:6705/64 (link)
lo:
Type: loopback
State: UP
MTU: 65536
Bytes received: 0B
Bytes sent: 0B
Packets received: 0
Packets sent: 0
IP addresses:
inet: 127.0.0.1/8 (local)
inet6: ::1/128 (local)
Log:
lxc testcontainer 20220511091714.843 WARN conf - conf.c:lxc_map_ids:3592 - newuidmap binary is missing
lxc testcontainer 20220511091714.844 WARN conf - conf.c:lxc_map_ids:3598 - newgidmap binary is missing
lxc testcontainer 20220511091714.844 WARN conf - conf.c:lxc_map_ids:3592 - newuidmap binary is missing
lxc testcontainer 20220511091714.844 WARN conf - conf.c:lxc_map_ids:3598 - newgidmap binary is missing
lxc testcontainer 20220511091720.849 WARN attach - attach.c:get_attach_context:477 - No security context received
lxc testcontainer 20220511093245.762 WARN attach - attach.c:get_attach_context:477 - No security context received
lxc config show testcontainer --expanded
architecture: x86_64
config:
image.architecture: amd64
image.description: Centos 8-Stream amd64 (20220511_07:08)
image.os: Centos
image.release: 8-Stream
image.serial: "20220511_07:08"
image.type: squashfs
image.variant: default
volatile.base_image: ed4a6eb25898b301c93dafa6db5b081d207ca2db53a1a6a6a061baa8f78e11c9
volatile.cloud-init.instance-id: 06a6a788-02e8-4963-9795-2dbc489b8ab1
volatile.eth0.host_name: veth300231c0
volatile.eth0.hwaddr: 00:16:3e:80:67:05
volatile.idmap.base: "0"
volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
volatile.last_state.power: RUNNING
volatile.uuid: 1b8b906d-1595-4d06-8bb8-c5fcbe092ca6
devices:
eth0:
name: eth0
network: lxdbr0
type: nic
root:
path: /
pool: default
type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""
/var/snap/lxd/common/lxd/logs/lxd.log
time="2022-05-11T09:15:24Z" level=warning msg="AppArmor support has been disabled because of lack of kernel support"
time="2022-05-11T09:15:24Z" level=warning msg=" - AppArmor support has been disabled, Disabled because of lack of kernel support"
time="2022-05-11T09:15:24Z" level=warning msg=" - Couldn't find the CGroup blkio.weight, disk priority will be ignored"
time="2022-05-11T09:15:24Z" level=warning msg="Instance type not operational" driver=qemu err="KVM support is missing (no /dev/kvm)" type=virtual-machine
time="2022-05-11T09:15:25Z" level=warning msg="Failed to initialize fanotify, falling back on fsnotify" err="Failed to initialize fanotify: invalid argument"