CentOS 8 Stream container not assigned IPv4/IPv6 addresses

Using Ubuntu 20.04 LTS as the LXD host, with LXD 4.8 installed via snap.

lxd init used a previously created lxdbr0 bridge network:

sudo snap install lxd
lxc network create lxdbr0 --type=bridge ipv4.address=10.10.10.1/24
sudo lxd init
df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
udev           devtmpfs  1.9G     0  1.9G   0% /dev
tmpfs          tmpfs     394M  1.1M  393M   1% /run
/dev/vda1      ext4       79G  6.3G   70G   9% /
tmpfs          tmpfs     2.0G     0  2.0G   0% /dev/shm
tmpfs          tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs          tmpfs     2.0G     0  2.0G   0% /sys/fs/cgroup
tmpfs          tmpfs     394M     0  394M   0% /run/user/0
/dev/loop0     squashfs   32M   32M     0 100% /snap/snapd/10492
/dev/loop1     squashfs   56M   56M     0 100% /snap/core18/1932
/dev/loop2     squashfs   72M   72M     0 100% /snap/lxd/18546
tmpfs          tmpfs     1.0M     0  1.0M   0% /var/snap/lxd/common/ns

I created six LXD containers for various OSes, and all of them except the very last one created have IPv4/IPv6 addresses assigned. Why would the centos83stream container not have IPv4/IPv6 addresses assigned?

lxc launch ubuntu:20.04 ubuntu20
lxc launch images:debian/10 debian10
lxc launch images:centos/7 centos79
lxc launch images:centos/8 centos80
lxc launch images:oracle/8 oracle80
lxc launch images:centos/8-Stream centos83stream
lxc list
+----------------+---------+---------------------+----------------------------------------------+-----------+-----------+
|      NAME      |  STATE  |        IPV4         |                     IPV6                     |   TYPE    | SNAPSHOTS |
+----------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| centos79       | RUNNING | 10.10.10.93 (eth0)  | fd42:c19:1dbf:b8d7:216:3eff:fe16:7dfb (eth0) | CONTAINER | 0         |
+----------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| centos80       | RUNNING | 10.10.10.101 (eth0) | fd42:c19:1dbf:b8d7:216:3eff:fe64:cf6d (eth0) | CONTAINER | 0         |
+----------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| centos83stream | RUNNING |                     |                                              | CONTAINER | 0         |
+----------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| debian10       | RUNNING | 10.10.10.19 (eth0)  | fd42:c19:1dbf:b8d7:216:3eff:fef8:ede (eth0)  | CONTAINER | 0         |
+----------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| oracle80       | RUNNING | 10.10.10.7 (eth0)   | fd42:c19:1dbf:b8d7:216:3eff:fefc:8d02 (eth0) | CONTAINER | 0         |
+----------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| ubuntu20       | RUNNING | 10.10.10.131 (eth0) | fd42:c19:1dbf:b8d7:216:3eff:feec:546f (eth0) | CONTAINER | 0         |
+----------------+---------+---------------------+----------------------------------------------+-----------+-----------+
lxc network show lxdbr0
config:
  ipv4.address: 10.10.10.1/24
  ipv6.address: fd42:c19:1dbf:b8d7::1/64
  ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/centos79
- /1.0/instances/centos80
- /1.0/instances/centos83stream
- /1.0/instances/debian10
- /1.0/instances/oracle80
- /1.0/instances/ubuntu20
- /1.0/profiles/default
managed: true
status: Created
locations:
- none
lxc network list
+--------+----------+---------+---------------+--------------------------+-------------+---------+
|  NAME  |   TYPE   | MANAGED |     IPV4      |           IPV6           | DESCRIPTION | USED BY |
+--------+----------+---------+---------------+--------------------------+-------------+---------+
| eth0   | physical | NO      |               |                          |             | 0       |
+--------+----------+---------+---------------+--------------------------+-------------+---------+
| eth1   | physical | NO      |               |                          |             | 0       |
+--------+----------+---------+---------------+--------------------------+-------------+---------+
| eth2   | physical | NO      |               |                          |             | 0       |
+--------+----------+---------+---------------+--------------------------+-------------+---------+
| lxdbr0 | bridge   | YES     | 10.10.10.1/24 | fd42:c19:1dbf:b8d7::1/64 |             | 7       |
+--------+----------+---------+---------------+--------------------------+-------------+---------+
ifconfig lxdbr0
lxdbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.10.10.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fd42:c19:1dbf:b8d7::1  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::216:3eff:feee:6605  prefixlen 64  scopeid 0x20<link>
        ether 00:16:3e:ee:66:05  txqueuelen 1000  (Ethernet)
        RX packets 20990  bytes 1357591 (1.3 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 23575  bytes 248043853 (248.0 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

On the working centos80:

lxc exec centos80 -- ip route show
default via 10.10.10.1 dev eth0 proto dhcp metric 100 
10.10.10.0/24 dev eth0 proto kernel scope link src 10.10.10.101 metric 100

On the non-working centos83stream the output is empty:

lxc exec centos83stream -- ip route show
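
For completeness, these checks (a sketch reusing the same container name) show whether eth0 is up at all inside the broken container and whether NetworkManager considers it managed:

lxc exec centos83stream -- ip addr show eth0
lxc exec centos83stream -- nmcli dev status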

diff of centos80 vs centos83stream installed packages

diff -u <(lxc exec centos80 -- yum -q list installed | tr -s ' ' | column -t| awk '{print $1}') <(lxc exec centos83stream -- yum -q list installed | tr -s ' ' | column -t| awk '{print $1}')
--- /dev/fd/63  2020-12-13 09:02:39.180946457 +0000
+++ /dev/fd/62  2020-12-13 09:02:39.180946457 +0000
@@ -10,17 +10,19 @@
 bzip2-libs.x86_64
 ca-certificates.noarch
 centos-gpg-keys.noarch
-centos-linux-release.noarch
-centos-linux-repos.noarch
+centos-stream-release.noarch
+centos-stream-repos.noarch
 chkconfig.x86_64
 coreutils.x86_64
 coreutils-common.x86_64
+cpio.x86_64
 cracklib.x86_64
 cracklib-dicts.x86_64
 cronie.x86_64
 cronie-noanacron.x86_64
 crontabs.noarch
 crypto-policies.noarch
+crypto-policies-scripts.noarch
 cryptsetup-libs.x86_64
 curl.x86_64
 cyrus-sasl-lib.x86_64
@@ -37,12 +39,13 @@
 diffutils.x86_64
 dnf.noarch
 dnf-data.noarch
+dracut.x86_64
 elfutils-debuginfod-client.x86_64
 elfutils-default-yama-scope.noarch
 elfutils-libelf.x86_64
 elfutils-libs.x86_64
-epel-release.noarch
 expat.x86_64
+file.x86_64
 file-libs.x86_64
 filesystem.x86_64
 findutils.x86_64
@@ -51,6 +54,8 @@
 gdbm-libs.x86_64
 geolite2-city.noarch
 geolite2-country.noarch
+gettext.x86_64
+gettext-libs.x86_64
 glib2.x86_64
 glibc.x86_64
 glibc-all-langpacks.x86_64
@@ -61,7 +66,12 @@
 gnutls.x86_64
 gpgme.x86_64
 grep.x86_64
+grub2-common.noarch
+grub2-tools.x86_64
+grub2-tools-minimal.x86_64
+grubby.x86_64
 gzip.x86_64
+hardlink.x86_64
 ima-evm-utils.x86_64
 info.x86_64
 initscripts.x86_64
@@ -69,10 +79,14 @@
 iproute.x86_64
 iptables-libs.x86_64
 iputils.x86_64
-jq.x86_64
 json-c.x86_64
+kbd.x86_64
+kbd-legacy.noarch
+kbd-misc.noarch
 keyutils-libs.x86_64
+kmod.x86_64
 kmod-libs.x86_64
+kpartx.x86_64
 krb5-libs.x86_64
 libacl.x86_64
 libarchive.x86_64
@@ -83,8 +97,8 @@
 libcap-ng.x86_64
 libcom_err.x86_64
 libcomps.x86_64
+libcroco.x86_64
 libcurl.x86_64
-libcurl-devel.x86_64
 libdb.x86_64
 libdb-utils.x86_64
 libdnf.x86_64
@@ -95,8 +109,11 @@
 libffi.x86_64
 libgcc.x86_64
 libgcrypt.x86_64
+libgomp.x86_64
 libgpg-error.x86_64
 libidn2.x86_64
+libkcapi.x86_64
+libkcapi-hmaccalc.x86_64
 libksba.x86_64
 libmaxminddb.x86_64
 libmetalink.x86_64
@@ -107,7 +124,6 @@
 libnghttp2.x86_64
 libnsl2.x86_64
 libpcap.x86_64
-libpkgconf.x86_64
 libpsl.x86_64
 libpwquality.x86_64
 librepo.x86_64
@@ -138,29 +154,27 @@
 logrotate.x86_64
 lua-libs.x86_64
 lz4-libs.x86_64
+memstrack.x86_64
 mpfr.x86_64
 ncurses.x86_64
 ncurses-base.noarch
 ncurses-libs.x86_64
 nettle.x86_64
 npth.x86_64
-oniguruma.x86_64
 openldap.x86_64
 openssh.x86_64
 openssh-clients.x86_64
-openssh-server.x86_64
 openssl.x86_64
 openssl-libs.x86_64
 openssl-pkcs11.x86_64
+os-prober.x86_64
 p11-kit.x86_64
 p11-kit-trust.x86_64
 pam.x86_64
 passwd.x86_64
 pcre.x86_64
 pcre2.x86_64
-pkgconf.x86_64
-pkgconf-m4.noarch
-pkgconf-pkg-config.x86_64
+pigz.x86_64
 platform-python.x86_64
 platform-python-pip.noarch
 platform-python-setuptools.noarch
@@ -192,12 +206,14 @@
 systemd.x86_64
 systemd-libs.x86_64
 systemd-pam.x86_64
+systemd-udev.x86_64
 trousers.x86_64
 trousers-lib.x86_64
 tzdata.noarch
 util-linux.x86_64
 vim-minimal.x86_64
-wget.x86_64
+which.x86_64
+xz.x86_64
 xz-libs.x86_64
 yum.noarch
 zlib.x86_64

centos80 processes

lxc exec centos80 -- ps aufxww
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         424  0.0  0.2 264508  2644 pts/0    Rs+  09:12   0:00 ps aufxww
root           1  0.0  0.6 176540  6300 ?        Ss   01:55   0:00 /sbin/init
root          39  0.0  0.5  94012  5528 ?        Ss   01:55   0:00 /usr/lib/systemd/systemd-journald
root          49  0.0  0.4  83772  4172 ?        Ss   01:55   0:00 /usr/lib/systemd/systemd-logind
dbus          51  0.0  0.2  54140  2600 ?        Ss   01:55   0:00 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
root          52  0.0  1.0 368840 10436 ?        Ssl  01:55   0:00 /usr/sbin/NetworkManager --no-daemon
root          61  0.0  0.2  22860  2016 ?        Ss   01:55   0:00 /usr/sbin/crond -n
root          62  0.0  0.1   6516  1016 console  Ss+  01:55   0:00 /sbin/agetty -o -p -- \u --noclear --keep-baud console 115200,38400,9600 linux
root          71  0.0  0.4 211556  4908 ?        Ssl  01:55   0:01 /usr/sbin/rsyslogd -n

centos83stream processes

lxc exec centos83stream -- ps aufxww 
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root          84  0.0  0.0 264504  2368 pts/0    Rs+  09:12   0:00 ps aufxww
root           1  0.0  0.1 176576  7216 ?        Ss   09:11   0:00 /sbin/init
root          43  0.0  0.1  91924  5604 ?        Ss   09:11   0:00 /usr/lib/systemd/systemd-journald
root          45  0.0  0.1  97276  5220 ?        Ss   09:11   0:00 /usr/lib/systemd/systemd-udevd
dbus          55  0.0  0.0  54144  2780 ?        Ss   09:11   0:00 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
root          56  0.0  0.1  83780  5088 ?        Ss   09:11   0:00 /usr/lib/systemd/systemd-logind
root          61  0.0  0.2 368628 10328 ?        Ssl  09:11   0:00 /usr/sbin/NetworkManager --no-daemon
root          71  0.0  0.0  22860  2124 ?        Ss   09:11   0:00 /usr/sbin/crond -n
root          72  0.0  0.0   6512  1116 console  Ss+  09:11   0:00 /sbin/agetty -o -p -- \u --noclear --keep-baud console 115200,38400,9600 linux
root          75  0.0  0.1 211552  4444 ?        Ssl  09:11   0:00 /usr/sbin/rsyslogd -n

diff of centos80 vs centos83stream's config show expanded

diff -u <(lxc config show centos80 --expanded) <(lxc config show centos83stream --expanded)
--- /dev/fd/63  2020-12-13 09:15:44.037896268 +0000
+++ /dev/fd/62  2020-12-13 09:15:44.037896268 +0000
@@ -1,25 +1,22 @@
 architecture: x86_64
 config:
-  boot.autostart: "true"
   image.architecture: amd64
-  image.description: Centos 8 amd64 (20201211_07:08)
+  image.description: Centos 8-Stream amd64 (20201213_03:49)
   image.os: Centos
-  image.release: "8"
-  image.serial: "20201211_07:08"
+  image.release: 8-Stream
+  image.serial: "20201213_03:49"
   image.type: squashfs
   image.variant: default
-  limits.memory: 1024MB
-  limits.memory.swap: "true"
   security.syscalls.blacklist: keyctl errno 38
-  volatile.base_image: 7693e677168b04a7339ccd24fc05da1a5ce9e2c59d2a9c060b2aa7cdf9e27be8
-  volatile.eth0.host_name: veth5afa6e81
-  volatile.eth0.hwaddr: 00:16:3e:64:cf:6d
+  volatile.base_image: 8ff65a3eb95be09488bd1a999e24e5eceb0d76d7a862bad8c0d28a96e27f3635
+  volatile.eth0.host_name: vethd5c33949
+  volatile.eth0.hwaddr: 00:16:3e:f0:43:88
   volatile.idmap.base: "0"
   volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
   volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
   volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
   volatile.last_state.power: RUNNING
-  volatile.uuid: b8494936-54c5-4ee2-bc63-a6ce2a1ba68f
+  volatile.uuid: 8fd2fa9e-8cc0-4308-bcf5-916544ebb1b0
 devices:
   eth0:
     name: eth0

If I create a new CentOS 7.9 container, IPs are assigned:

lxc launch images:centos/7 centos79-2
lxc list
+----------------+---------+---------------------+----------------------------------------------+-----------+-----------+
|      NAME      |  STATE  |        IPV4         |                     IPV6                     |   TYPE    | SNAPSHOTS |
+----------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| centos79       | RUNNING | 10.10.10.93 (eth0)  | fd42:c19:1dbf:b8d7:216:3eff:fe16:7dfb (eth0) | CONTAINER | 0         |
+----------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| centos79-2     | RUNNING | 10.10.10.43 (eth0)  | fd42:c19:1dbf:b8d7:216:3eff:febe:4abf (eth0) | CONTAINER | 0         |
+----------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| centos80       | RUNNING | 10.10.10.101 (eth0) | fd42:c19:1dbf:b8d7:216:3eff:fe64:cf6d (eth0) | CONTAINER | 0         |
+----------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| centos83stream | RUNNING |                     |                                              | CONTAINER | 0         |
+----------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| debian10       | RUNNING | 10.10.10.19 (eth0)  | fd42:c19:1dbf:b8d7:216:3eff:fef8:ede (eth0)  | CONTAINER | 0         |
+----------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| oracle80       | RUNNING | 10.10.10.7 (eth0)   | fd42:c19:1dbf:b8d7:216:3eff:fefc:8d02 (eth0) | CONTAINER | 0         |
+----------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| ubuntu20       | RUNNING | 10.10.10.131 (eth0) | fd42:c19:1dbf:b8d7:216:3eff:feec:546f (eth0) | CONTAINER | 0         |
+----------------+---------+---------------------+----------------------------------------------+-----------+-----------+

Now, if I create a CentOS 8.3 container and use the CentOS 8 to CentOS 8 Stream migration method, everything works fine:

lxc launch images:centos/8 centos83to83stream
lxc exec centos83to83stream -- dnf -y install centos-release-stream
lxc exec centos83to83stream -- dnf -y distro-sync
lxc exec centos83to83stream -- cat /etc/redhat-release
CentOS Stream release 8

lxc exec centos83to83stream -- cat /etc/os-release
NAME="CentOS Stream"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="CentOS Stream 8"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://centos.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
lxc list
+--------------------+---------+---------------------+----------------------------------------------+-----------+-----------+
|        NAME        |  STATE  |        IPV4         |                     IPV6                     |   TYPE    | SNAPSHOTS |
+--------------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| centos79           | RUNNING | 10.10.10.93 (eth0)  | fd42:c19:1dbf:b8d7:216:3eff:fe16:7dfb (eth0) | CONTAINER | 0         |
+--------------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| centos80           | RUNNING | 10.10.10.101 (eth0) | fd42:c19:1dbf:b8d7:216:3eff:fe64:cf6d (eth0) | CONTAINER | 0         |
+--------------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| centos83stream     | RUNNING |                     |                                              | CONTAINER | 0         |
+--------------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| centos83to83stream | RUNNING | 10.10.10.127 (eth0) | fd42:c19:1dbf:b8d7:216:3eff:fe19:b4f4 (eth0) | CONTAINER | 0         |
+--------------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| debian10           | RUNNING | 10.10.10.19 (eth0)  | fd42:c19:1dbf:b8d7:216:3eff:fef8:ede (eth0)  | CONTAINER | 0         |
+--------------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| oracle80           | RUNNING | 10.10.10.7 (eth0)   | fd42:c19:1dbf:b8d7:216:3eff:fefc:8d02 (eth0) | CONTAINER | 0         |
+--------------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| ubuntu20           | RUNNING | 10.10.10.131 (eth0) | fd42:c19:1dbf:b8d7:216:3eff:feec:546f (eth0) | CONTAINER | 0         |
+--------------------+---------+---------------------+----------------------------------------------+-----------+-----------+

Where would I start troubleshooting?

cheers

George

I think this is reproducible:

$ lxc launch images:centos/8-Stream centos8stream
Creating centos8stream
Starting centos8stream                      
$ lxc list centos8stream
+---------------+---------+------+------+-----------+-----------+
|     NAME      |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+---------------+---------+------+------+-----------+-----------+
| centos8stream | RUNNING |      |      | CONTAINER | 0         |
+---------------+---------+------+------+-----------+-----------+
$ 

But can dhclient get a DHCP lease?

$ lxc shell centos8stream 
[root@centos8stream ~]# dhclient eth0
[root@centos8stream ~]# ip address
...
25: eth0@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
...
    inet 10.10.10.120/24 brd 10.10.10.255 scope global dynamic eth0
...
[root@centos8stream ~]# logout
$ lxc list centos8stream
+---------------+---------+---------------------+------+-----------+-----------+
|     NAME      |  STATE  |        IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
+---------------+---------+---------------------+------+-----------+-----------+
| centos8stream | RUNNING | 10.10.10.120 (eth0) |      | CONTAINER | 0         |
+---------------+---------+---------------------+------+-----------+-----------+

So, manually, the container managed to get an IP address.
The issue is with how to get the container to request a DHCP lease by default, when the container boots up.
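
Until the image is fixed, one possible stop-gap (a sketch, assuming dhclient is installed at /usr/sbin/dhclient as the dhclient run above suggests) is a small oneshot unit that requests a lease at boot:

lxc exec centos8stream -- bash -c 'cat > /etc/systemd/system/dhclient-eth0.service <<EOF
[Unit]
Description=Stop-gap DHCP client for eth0
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/dhclient eth0
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF
systemctl enable dhclient-eth0.service'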

I think this is an issue with the packaging of the specific distribution, and can be reported at

Cheers @simos, seems that is the case:

lxc list
+--------------------+---------+---------------------+----------------------------------------------+-----------+-----------+
|        NAME        |  STATE  |        IPV4         |                     IPV6                     |   TYPE    | SNAPSHOTS |
+--------------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| centos79           | RUNNING | 10.10.10.93 (eth0)  | fd42:c19:1dbf:b8d7:216:3eff:fe16:7dfb (eth0) | CONTAINER | 0         |
+--------------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| centos80           | RUNNING | 10.10.10.101 (eth0) | fd42:c19:1dbf:b8d7:216:3eff:fe64:cf6d (eth0) | CONTAINER | 0         |
+--------------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| centos83stream     | RUNNING | 10.10.10.116 (eth0) | fd42:c19:1dbf:b8d7:216:3eff:fef0:4388 (eth0) | CONTAINER | 0         |
+--------------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| centos83to83stream | RUNNING | 10.10.10.127 (eth0) | fd42:c19:1dbf:b8d7:216:3eff:fe19:b4f4 (eth0) | CONTAINER | 0         |
+--------------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| debian10           | RUNNING | 10.10.10.19 (eth0)  | fd42:c19:1dbf:b8d7:216:3eff:fef8:ede (eth0)  | CONTAINER | 0         |
+--------------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| oracle80           | RUNNING | 10.10.10.7 (eth0)   | fd42:c19:1dbf:b8d7:216:3eff:fefc:8d02 (eth0) | CONTAINER | 0         |
+--------------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| ubuntu20           | RUNNING | 10.10.10.131 (eth0) | fd42:c19:1dbf:b8d7:216:3eff:feec:546f (eth0) | CONTAINER | 0         |
+--------------------+---------+---------------------+----------------------------------------------+-----------+-----------+

There might be some clues in the NetworkManager logs:

lxc restart centos83stream

lxc exec centos83stream -- journalctl -u NetworkManager --no-pager
-- Logs begin at Sun 2020-12-13 12:41:59 UTC, end at Sun 2020-12-13 12:42:00 UTC. --
Dec 13 12:42:00 centos83stream systemd[1]: Starting Network Manager...
Dec 13 12:42:00 centos83stream NetworkManager[61]: <info>  [1607863320.0843] NetworkManager (version 1.30.0-0.3.el8) is starting... (for the first time)
Dec 13 12:42:00 centos83stream NetworkManager[61]: <info>  [1607863320.0857] Read config: /etc/NetworkManager/NetworkManager.conf
Dec 13 12:42:00 centos83stream systemd[1]: Started Network Manager.
Dec 13 12:42:00 centos83stream NetworkManager[61]: <info>  [1607863320.0894] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 13 12:42:00 centos83stream NetworkManager[61]: <info>  [1607863320.0927] manager[0x55eb098cf010]: monitoring kernel firmware directory '/lib/firmware'.
Dec 13 12:42:00 centos83stream systemd[1]: NetworkManager.service: Failed to reset devices.list: Operation not permitted
Dec 13 12:42:00 centos83stream NetworkManager[61]: <info>  [1607863320.6059] hostname: hostname: using hostnamed
Dec 13 12:42:00 centos83stream NetworkManager[61]: <info>  [1607863320.6744] dns-mgr[0x55eb098c1130]: init: dns=default,systemd-resolved rc-manager=symlink
Dec 13 12:42:00 centos83stream NetworkManager[61]: <info>  [1607863320.6999] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 13 12:42:00 centos83stream NetworkManager[61]: <info>  [1607863320.7003] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 13 12:42:00 centos83stream NetworkManager[61]: <info>  [1607863320.7005] manager: Networking is enabled by state file
Dec 13 12:42:00 centos83stream NetworkManager[61]: <info>  [1607863320.7008] dhcp-init: Using DHCP client 'internal'
Dec 13 12:42:00 centos83stream NetworkManager[61]: <info>  [1607863320.7055] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.30.0-0.3.el8/libnm-settings-plugin-ifcfg-rh.so")
Dec 13 12:42:00 centos83stream NetworkManager[61]: <info>  [1607863320.7056] settings: Loaded settings plugin: keyfile (internal)
Dec 13 12:42:00 centos83stream NetworkManager[61]: <warn>  [1607863320.7070] ifcfg-rh:     invalid MTU ''
Dec 13 12:42:00 centos83stream NetworkManager[61]: <info>  [1607863320.7091] device (lo): carrier: link connected
Dec 13 12:42:00 centos83stream NetworkManager[61]: <info>  [1607863320.7096] manager: (lo): new Generic device (/org/freedesktop/NetworkManager/Devices/1)
Dec 13 12:42:00 centos83stream NetworkManager[61]: <info>  [1607863320.7128] manager: (eth0): new Veth device (/org/freedesktop/NetworkManager/Devices/2)
Dec 13 12:42:00 centos83stream NetworkManager[61]: <info>  [1607863320.7291] manager: startup complete

@monstermunchkin are you able to take a look at this?

I'm currently looking into this.

This PR from @monstermunchkin should fix it:

thanks @monstermunchkin @tomp @simos

The updated image is in the mirrors now, so it should be fixed for new instances launched (you may need to delete the cached image using lxc image ls and lxc image delete, though).
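
For example (the fingerprint here is just a placeholder; use whatever lxc image ls reports for the cached Centos 8-Stream image):

lxc image ls
lxc image delete <fingerprint>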

Cheers @tomp, I was about to ask where the lxc-ci repo fixes fit into the LXD version/update process :slight_smile:

Confirmed the new CentOS 8 Stream image is fixed and an IP is assigned!

lxc list
+--------------------+---------+---------------------+----------------------------------------------+-----------+-----------+
|        NAME        |  STATE  |        IPV4         |                     IPV6                     |   TYPE    | SNAPSHOTS |
+--------------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| centos79           | RUNNING | 10.10.10.93 (eth0)  | fd42:c19:1dbf:b8d7:216:3eff:fe16:7dfb (eth0) | CONTAINER | 0         |
+--------------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| centos83           | RUNNING | 10.10.10.101 (eth0) | fd42:c19:1dbf:b8d7:216:3eff:fe64:cf6d (eth0) | CONTAINER | 0         |
+--------------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| centos83stream     | RUNNING | 10.10.10.85 (eth0)  | fd42:c19:1dbf:b8d7:216:3eff:fecd:7145 (eth0) | CONTAINER | 0         |
+--------------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| centos83to83stream | RUNNING | 10.10.10.127 (eth0) | fd42:c19:1dbf:b8d7:216:3eff:fe19:b4f4 (eth0) | CONTAINER | 0         |
+--------------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| debian10           | RUNNING | 10.10.10.19 (eth0)  | fd42:c19:1dbf:b8d7:216:3eff:fef8:ede (eth0)  | CONTAINER | 0         |
+--------------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| oracle83           | RUNNING | 10.10.10.7 (eth0)   | fd42:c19:1dbf:b8d7:216:3eff:fefc:8d02 (eth0) | CONTAINER | 0         |
+--------------------+---------+---------------------+----------------------------------------------+-----------+-----------+
| ubuntu20           | RUNNING | 10.10.10.131 (eth0) | fd42:c19:1dbf:b8d7:216:3eff:feec:546f (eth0) | CONTAINER | 0         |
+--------------------+---------+---------------------+----------------------------------------------+-----------+-----------+

Hi,

On the lxd tryit server I ran the following:

root@tryit-precise:~# lxc launch images:centos/8-Stream centos8stream
Creating centos8stream
Starting centos8stream
root@tryit-precise:~# lxc list centos8stream
+---------------+---------+------+------+-----------+-----------+
|     NAME      |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+---------------+---------+------+------+-----------+-----------+
| centos8stream | RUNNING |      |      | CONTAINER | 0         |
+---------------+---------+------+------+-----------+-----------+
root@tryit-precise:~# lxc image ls
+-------+--------------+--------+----------------------------------------+--------------+-----------+----------+------------------------------+
| ALIAS | FINGERPRINT  | PUBLIC |              DESCRIPTION               | ARCHITECTURE |   TYPE    |   SIZE   |         UPLOAD DATE          |
+-------+--------------+--------+----------------------------------------+--------------+-----------+----------+------------------------------+
|       | 6940ae22b743 | no     | Centos 8-Stream amd64 (20201222_07:08) | x86_64       | CONTAINER | 126.81MB | Dec 23, 2020 at 2:41am (UTC) |
+-------+--------------+--------+----------------------------------------+--------------+-----------+----------+------------------------------+
root@tryit-precise:~# lxc exec centos8stream bash
[root@centos8stream ~]# cat /etc/udev/rules.d/86-nm-unmanaged.rules                                                                                   
ENV{ID_NET_DRIVER}=="veth", ENV{NM_UNMANAGED}="0"

So the change doesn't seem to be effective.

Manually running dhclient works.

Just tried that here with a freshly downloaded image and worked fine:

lxc launch images:centos/8-Stream centos8stream
lxc ls
+---------------+---------+---------------------+-----------------------------------------------+-----------+-----------+
|     NAME      |  STATE  |        IPV4         |                     IPV6                      |   TYPE    | SNAPSHOTS |
+---------------+---------+---------------------+-----------------------------------------------+-----------+-----------+
| centos8stream | RUNNING | 10.102.242.8 (eth0) | fd42:442d:225e:59df:216:3eff:fe38:1b7d (eth0) | CONTAINER | 0         |
+---------------+---------+---------------------+-----------------------------------------------+-----------+-----------+

And we've had other reports of that fix working, so I suspect something else is at play here.

That you're using this image inside the Try It service suggests it may be an issue with nested containers.

Ah, no, I see what it is: the Try It service is using macvlan-type NIC devices rather than veth devices. It seems CentOS 8-Stream also has rules that change how those NIC types are handled.
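
A quick way to confirm which NIC type an instance actually ended up with (a sketch; the grep just trims the output to the eth0 device stanza):

lxc config show centos8stream --expanded | grep -A 4 'eth0:'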

Please could I ask you to create an issue at https://github.com/lxc/distrobuilder/issues so @monstermunchkin can take a look when he is back? Thanks.

Thanks @tomp, raised https://github.com/lxc/distrobuilder/issues/384

However, on my local machine I am using nictype: bridged, so it has a veth device, yet an IP is still not assigned.

If that fix forces NetworkManager not to manage the interface, but it has also removed /etc/systemd/system/NetworkManager.service.d/override.conf, then how does dhclient run?

Also, why can CentOS 8 use NetworkManager to manage the interface while CentOS 8-Stream cannot?
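
For comparison (a sketch; centos80 stands in for any working CentOS 8 container, the name is illustrative), it is worth checking whether NetworkManager lists eth0 as managed in each:

lxc exec centos80 -- nmcli dev
lxc exec c8s -- nmcli dev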

The fix allows NetworkManager to manage it by setting NM_UNMANAGED to 0.

CentOS 8 Stream does use NetworkManager.

Does running dhclient manually in that veth container work?

Also, can you show the LXD config for the problem container?

Gotcha. My bad, my brain was reading NM_UNMANAGED as NM_MANAGED :upside_down_face:

Yes, running dhclient in the veth container does work.

LXC config:

$ lxc config show c8s
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Centos 8-Stream amd64 (20201223_07:08)
  image.os: Centos
  image.release: 8-Stream
  image.serial: "20201223_07:08"
  image.type: squashfs
  image.variant: default
  volatile.base_image: 085d069ec394fb38a1da0a5c7a6ded6e40c00b5a38d307188f27bf5867717e28
  volatile.eth0.host_name: veth30f0bb22
  volatile.eth0.hwaddr: 00:16:3e:54:6c:4f
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: c9c85c47-9646-455e-bb83-f3540cfc4061
devices: {}
ephemeral: false
profiles:
- default
stateful: false
description: ""

$ lxc profile show default
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
- /1.0/instances/c8s

$ lxc network show lxdbr0
config:
  ipv4.address: 10.244.216.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:5ef8:c685:6f25::1/64
  ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/c8s
- /1.0/profiles/default
managed: true
status: Created
locations:
- none

On the container:

[root@c8s ~]# journalctl -t NetworkManager
-- Logs begin at Wed 2020-12-23 23:29:53 UTC, end at Wed 2020-12-23 23:29:54 UTC. --
Dec 23 23:29:53 c8s NetworkManager[55]: <info>  [1608766193.4154] NetworkManager (version 1.30.0-0.3.el8) is starting... (for the first time)
Dec 23 23:29:53 c8s NetworkManager[55]: <info>  [1608766193.4160] Read config: /etc/NetworkManager/NetworkManager.conf
Dec 23 23:29:53 c8s NetworkManager[55]: <info>  [1608766193.4185] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 23 23:29:53 c8s NetworkManager[55]: <info>  [1608766193.4223] manager[0x562c2bb930b0]: monitoring kernel firmware directory '/lib/firmware'.
Dec 23 23:29:54 c8s NetworkManager[55]: <info>  [1608766194.0295] hostname: hostname: using hostnamed
Dec 23 23:29:54 c8s NetworkManager[55]: <info>  [1608766194.0300] dns-mgr[0x562c2bb88130]: init: dns=default,systemd-resolved rc-manager=symlink
Dec 23 23:29:54 c8s NetworkManager[55]: <info>  [1608766194.0339] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 23 23:29:54 c8s NetworkManager[55]: <info>  [1608766194.0340] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 23 23:29:54 c8s NetworkManager[55]: <info>  [1608766194.0340] manager: Networking is enabled by state file
Dec 23 23:29:54 c8s NetworkManager[55]: <info>  [1608766194.0341] dhcp-init: Using DHCP client 'internal'
Dec 23 23:29:54 c8s NetworkManager[55]: <info>  [1608766194.0350] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.30.0-0.3.el8/libnm-settings-plugin-ifcfg-rh.so")
Dec 23 23:29:54 c8s NetworkManager[55]: <info>  [1608766194.0350] settings: Loaded settings plugin: keyfile (internal)
Dec 23 23:29:54 c8s NetworkManager[55]: <warn>  [1608766194.0359] ifcfg-rh:     invalid MTU ''
Dec 23 23:29:54 c8s NetworkManager[55]: <info>  [1608766194.0376] device (lo): carrier: link connected
Dec 23 23:29:54 c8s NetworkManager[55]: <info>  [1608766194.0379] manager: (lo): new Generic device (/org/freedesktop/NetworkManager/Devices/1)
Dec 23 23:29:54 c8s NetworkManager[55]: <info>  [1608766194.0386] manager: (eth0): new Veth device (/org/freedesktop/NetworkManager/Devices/2)

[root@c8s ~]# nmcli dev
DEVICE  TYPE      STATE      CONNECTION
eth0    ethernet  unmanaged  --
lo      loopback  unmanaged  --
[root@c8s ~]# nmcli dev set eth0 managed yes ; nmcli dev
DEVICE  TYPE      STATE      CONNECTION
eth0    ethernet  unmanaged  --
lo      loopback  unmanaged  --
[root@c8s ~]# nmcli dev connect eth0
Error: Failed to add/activate new connection: veth.peer: property is not specified

So it's still showing as unmanaged, which is the issue.
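
As a possible interim workaround (a sketch, untested here; given the NM 1.30 veth handling above it may still refuse, so treat it as something to try rather than a fix), NetworkManager can be asked to manage eth0 via a drop-in config:

lxc exec c8s -- bash -c 'cat > /etc/NetworkManager/conf.d/10-manage-eth0.conf <<EOF
[device]
match-device=interface-name:eth0
managed=1
EOF
systemctl restart NetworkManager'
lxc exec c8s -- nmcli dev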

Is this a fresh container? Does launching a new one fix it?

Yes, it's a fresh container and a fresh image. Launching a new one does not fix it.

I also tried launching with the --vm flag and confirmed this is not a problem for the virtual machine, only for the container.
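
For reference, the VM test was along these lines (instance name illustrative):

lxc launch images:centos/8-Stream c8s-vm --vm
lxc list c8s-vm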

What is your host OS and version? And any errors in sudo dmesg?

Ah ha, I have two different hosts, both Ubuntu 18.04.5 LTS x86_64, and both have this problem. When I start the container, the following appears in dmesg:

[955972.928686] IPv6: ADDRCONF(NETDEV_UP): vethcb994778: link is not ready
[955972.944760] lxdbr0: port 1(vethcb994778) entered blocking state
[955972.944762] lxdbr0: port 1(vethcb994778) entered disabled state
[955972.944968] device vethcb994778 entered promiscuous mode
[955973.048693] audit: type=1400 audit(1609119371.928:2300): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxd-c8s_</var/snap/lxd/common/lxd>" pid=5781 comm="apparmor_parser"
[955973.128220] eth0: renamed from veth592de1ea
[955973.147609] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[955973.148032] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[955973.148070] lxdbr0: port 1(vethcb994778) entered blocking state
[955973.148071] lxdbr0: port 1(vethcb994778) entered forwarding state
[955974.867522] lxdbr0: port 1(vethcb994778) entered disabled state
[955976.266822] audit: type=1400 audit(1609119375.144:2301): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-c8s_</var/snap/lxd/common/lxd>" name="/run/systemd/unit-root/dev/" pid=6286 comm="(ostnamed)" flags="ro, nosuid, noexec, remount, strictatime"

However, I just tried on my laptop, which is Ubuntu 20.04.1 LTS x86_64, and a container launched there is assigned an IP address fine. The following appears in dmesg:

[124842.768557] lxdbr0: port 1(veth3c18d640) entered blocking state
[124842.768561] lxdbr0: port 1(veth3c18d640) entered disabled state
[124842.772492] device veth3c18d640 entered promiscuous mode
[124842.863290] audit: type=1400 audit(1609119757.694:145): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxd-c8s_</var/snap/lxd/common/lxd>" pid=234848 comm="apparmor_parser"
[124842.956228] eth0: renamed from veth3e89379f
[124842.984584] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[124842.984645] lxdbr0: port 1(veth3c18d640) entered blocking state
[124842.984647] lxdbr0: port 1(veth3c18d640) entered forwarding state
[124843.778981] lxdbr0: port 1(veth3c18d640) entered disabled state
[124843.925182] audit: type=1400 audit(1609119758.754:146): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-c8s_</var/snap/lxd/common/lxd>" name="/run/systemd/unit-root/dev/" pid=234986 comm="(ostnamed)" flags="ro, nosuid, noexec, remount, strictatime"
[124844.238265] lxdbr0: port 1(veth3c18d640) entered blocking state
[124844.238268] lxdbr0: port 1(veth3c18d640) entered forwarding state

On the Ubuntu 18.04 host, if I set the container to be privileged:

$ lxc config set c8s security.privileged true

... then it is assigned an IP. In that case the dmesg output is:

[2217956.819225] IPv6: ADDRCONF(NETDEV_UP): veth3d021e32: link is not ready
[2217956.827173] lxdbr0: port 1(veth3d021e32) entered blocking state
[2217956.827174] lxdbr0: port 1(veth3d021e32) entered disabled state
[2217956.827270] device veth3d021e32 entered promiscuous mode
[2217957.049013] audit: type=1400 audit(1609153881.377:4082106): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxd-c8s_</var/snap/lxd/common/lxd>" pid=17857 comm="apparmor_parser"
[2217957.238489] eth0: renamed from veth2c22795c
[2217957.275133] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[2217957.277116] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[2217957.277152] lxdbr0: port 1(veth3d021e32) entered blocking state
[2217957.277154] lxdbr0: port 1(veth3d021e32) entered forwarding state
[2217958.277081] lxdbr0: port 1(veth3d021e32) entered disabled state
[2217958.384803] audit: type=1400 audit(1609153882.713:4082107): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-c8s_</var/snap/lxd/common/lxd>" name="/run/systemd/unit-root/" pid=18149 comm="(kManager)" srcname="/" flags="rw, rbind"
[2217958.979792] audit: type=1400 audit(1609153883.309:4082108): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-c8s_</var/snap/lxd/common/lxd>" name="/run/systemd/unit-root/" pid=18240 comm="(ostnamed)" srcname="/" flags="rw, rbind"
[2217959.279671] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[2217959.279793] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[2217959.279806] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[2217959.279846] lxdbr0: port 1(veth3d021e32) entered blocking state
[2217959.279848] lxdbr0: port 1(veth3d021e32) entered forwarding state