No IPv4 on Arch Linux containers

Since a few days ago, my Arch Linux containers have stopped receiving IPv4 addresses, while IPv6 works just fine.

A current Ubuntu container works fine.

Setting the Arch container as privileged makes IPv4 work.
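
For clarity, privileged mode can be toggled with the standard security.privileged LXD config key (using the test-arch container launched below):

lxc config set test-arch security.privileged true
lxc restart test-arch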

Tested with:
lxc launch images:archlinux test-arch
lxc launch ubuntu:19.10 test-ubuntu
lxc list

[root@test-arch ~]# systemctl --failed
  UNIT                           LOAD   ACTIVE SUB    DESCRIPTION
● sys-kernel-config.mount        loaded failed failed Kernel Configuration File System
● systemd-journald-audit.socket  loaded failed failed Journal Audit Socket

[root@test-arch ~]# journalctl -u systemd-networkd
-- Logs begin at Thu 2019-12-19 10:58:44 UTC, end at Thu 2019-12-19 10:58:45 UTC. --
Dec 19 10:58:44 test-arch systemd[1]: Starting Network Service...
Dec 19 10:58:44 test-arch systemd-networkd[60]: Enumeration completed
Dec 19 10:58:44 test-arch systemd[1]: Started Network Service.
Dec 19 10:58:45 test-arch systemd-networkd[60]: eth0: Gained IPv6LL

[0] % lxc console --show-log test-arch
https://haste.rys.pw/raw/eqadacivew

I’m assuming you’re assigning via DHCP?

Can you manually add a static IPv4 inside the container with ip addr add 10.4.0.2/24 dev eth0 (or whatever subnet you're using)?

I am indeed using DHCP.

Manually adding the IP and default route gets the network working.

ip addr add 192.168.1.21/24 dev eth0
ip route add default via 192.168.1.1
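
To make the workaround survive a reboot, a static systemd-networkd config should also work. This is just a sketch reusing the addresses above; the file name is arbitrary and the DNS server is a guess for a typical setup. Create /etc/systemd/network/20-static-eth0.network with:

[Match]
Name=eth0

[Network]
Address=192.168.1.21/24
Gateway=192.168.1.1
DNS=192.168.1.1

and then run systemctl restart systemd-networkd.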

Possibly related to this?: https://github.com/lxc/lxc/issues/3228

I’m not sure; we’ve also been seeing this behavior, but only on unprivileged Arch Linux containers in our automated CI, for the past two days or so.

Feels like something has changed in Arch.

Unlikely; that issue concerns commits from 9 days ago, and I’m running LXD 3.18, which was released in October.

https://jenkins.linuxcontainers.org/job/lxd-test-images/ shows all green on the 18th and failures since yesterday, even after multiple retries; the failing test is IPv4 networking on Arch Linux.

Given we didn’t change anything in LXD during that time period, my guess is that something changed in Arch. Possibly systemd/networkd related?

The last systemd change was on the 15th, so unless the CI was using outdated mirrors, it shouldn’t be that.

https://git.archlinux.org/svntogit/packages.git/log/trunk?h=packages/systemd

The kernel was changed 2 days ago, however; I’ll try and see.

https://git.archlinux.org/svntogit/packages.git/log/trunk?h=packages/linux

EDIT: Downgrading the kernel to 5.4.3 did not help.

This issue is caused by the systemd 244.1 package in Arch Linux. Other users have the same issue. When systemd 244 is installed, the issue disappears. Let’s hope this gets fixed soon.
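
One way to check which version a container currently has:

pacman -Q systemd systemd-libs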

What you can do in the meantime is use the old systemd 244 package from the Arch Linux Archive. Just download the packages systemd, systemd-libs and systemd-sysvcompat, and install them using pacman -U <pkg>.
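
A sketch of that downgrade; the exact archive filenames below are assumptions, so check https://archive.archlinux.org/packages/ for the real ones:

curl -O https://archive.archlinux.org/packages/s/systemd/systemd-244-1-x86_64.pkg.tar.xz
curl -O https://archive.archlinux.org/packages/s/systemd-libs/systemd-libs-244-1-x86_64.pkg.tar.xz
curl -O https://archive.archlinux.org/packages/s/systemd-sysvcompat/systemd-sysvcompat-244-1-x86_64.pkg.tar.xz
pacman -U systemd-244-1-x86_64.pkg.tar.xz systemd-libs-244-1-x86_64.pkg.tar.xz systemd-sysvcompat-244-1-x86_64.pkg.tar.xz

Adding the three packages to IgnorePkg in /etc/pacman.conf keeps a later pacman -Syu from upgrading them right back.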

images:fedora/31/amd64 has the same issue; images:fedora/30/amd64 seems to get an IPv4 address just fine.

Yeah, we’re now seeing the issue on:

  • alt/sisyphus
  • archlinux
  • fedora/31

I suspect they’re all on the new systemd. Someone will have to bisect that systemd release to track down what’s going on and ideally get upstream to fix the regression.
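
A quick way to check which systemd a given container is actually running, using the test-arch container from earlier as an example:

lxc exec test-arch -- systemctl --version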

@brauner (FYI). I don’t think we have quite enough to go to Lennart yet, but there’s clearly something different in 244.1 that’s hitting us.

Thanks for reporting this. My host is based on Arch Linux; do I need the older version of systemd on the host too?

@stgraber - You can add openSUSE Tumbleweed to the list.

It was clear as of yesterday’s test; did they update systemd yesterday or today?

The host version shouldn’t matter, only the container version should.

Can confirm: the same issue occurs on Chrome OS 79 / Crostini with an Arch Linux guest and a systemd version newer than 244 (244-1-arch).

Apologies, I thought the host and container issues were starting to come together - maybe not. My comment was strictly in reference to the host. Containers on my Tumbleweed host (systemd 243, kernel 5.3.12-1) are failing to get an IPv4 address over a Linux bridge. There were a few other reports of similar behavior on the forum, in addition to the GitHub issue I linked earlier in this thread: LXD Container not getting ip address from DHCP using linux bridge, Container do not get IP addresses after a reboot - or internet connection

@parm

I find it dubious that Fedora 31 could have the very same issue. Not getting an IP address can happen for dozens of reasons, and Fedora 31 definitely does not run systemd 244.1, but rather 243 with Fedora patches, like all ‘stable’ distros.

Can you try, on your setup, editing the container config to add:

raw.lxc: lxc.mount.auto = proc:rw sys:ro

and then restart it, of course.
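
If it helps, the same override can be set from the host with the standard lxc config command (assuming the test-arch container from earlier):

lxc config set test-arch raw.lxc "lxc.mount.auto = proc:rw sys:ro"
lxc restart test-arch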

@gpatel-fr

Adding that manual config override did the job; the container has an IP now. Thanks!

Could there be any isolation issues with that override? Or should it be part of the default config?