/bin/udevadm should be part of the udev package. And yes, I think udevadm is necessary for systemd-networkd to work.
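If in doubt, this can be verified from inside the container (assuming dpkg is available):
$ dpkg -L udev | grep udevadm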
Yes, I noticed this when I listed the package contents. My interpretation of your previous comment was that they were separate.
Hello,
Sorry for my late reply, I had just given up and decided to wait and see what Debian would do.
Debian stable / bullseye has now received the necessary update+fix for (1) in systemd v247.3-7 and this package now works for me with LXD + systemd-networkd, too.
Summary again of how to switch to systemd-networkd in an LXD container:
root$ lxc launch images:debian/bullseye debian-bullseye-systemd-test
root$ lxc exec debian-bullseye-systemd-test -- /bin/bash
$ apt-get update
$ apt-get dist-upgrade
[ make sure you have systemd v247.3-7 or later ]
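[ e.g., one way to check, assuming dpkg: ]
$ dpkg -s systemd | grep '^Version'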
$ apt-get install udev
$ systemctl unmask systemd-networkd
$ systemctl unmask systemd-networkd.socket
$ systemctl unmask systemd-networkd-wait-online.service
$ systemctl enable systemd-networkd
$ systemctl enable systemd-networkd.socket
$ systemctl enable systemd-networkd-wait-online.service
$ systemctl daemon-reload
$ mv /etc/network/interfaces /etc/network/interfaces.save
$ cat > /etc/systemd/network/eth0.network << EOF
[Match]
Name=eth0
[Network]
DHCP=true
EOF
$ networkctl reload
($ systemctl restart systemd-networkd)
$ reboot
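After the reboot, eth0 should be managed by systemd-networkd and should hold a DHCPv4 lease. A quick way to verify (networkctl is part of systemd):
$ networkctl list
$ networkctl status eth0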
@monstermunchkin is this worth using in our images by default?
I don’t see why not.
I don’t know if it helps, but I deployed a container with lxc launch images:debian/11
last week, and I had the same problem: no IPv4 in the container.
LXD 4.24-22170 on a Debian 11 host.
I just did what @T_X described in his last post, and it now works.
I saw that systemd v247.3-7 was already in the image, and udev too.
Edit: older containers, launched a long time ago with lxc launch images:debian/10,
had no problem with their network / DHCP.
@monstermunchkin any ideas?
There were network issues with the image last week, but they have been fixed. I just launched a debian/11 container and it works just fine.
Hi everybody.
I launched a new container this morning on another host, and the issue is still there for me.
This host is Debian 11 with LXD 4.24 rev 22710 (not in a cluster).
lxc image info images:debian/11
Fingerprint: 313492ff825913816ae8357588421203e0b969ca671d92e677c0601cc91c23a2
Size: 83.29MB
Architecture: x86_64
Type: container
Public: yes
Timestamps:
    Created: 2022/04/12 00:00 UTC
    Uploaded: 2022/04/12 00:00 UTC
    Expires: never
    Last used: never
Properties:
    variant: default
    os: Debian
    release: bullseye
    architecture: amd64
    serial: 20220412_05:24
    description: Debian bullseye amd64 (20220412_05:24)
    type: squashfs
Aliases:
    - debian/bullseye/default
    - debian/bullseye/default/amd64
    - debian/11/default
    - debian/11/default/amd64
    - debian/bullseye
    - debian/bullseye/amd64
    - debian/11
    - debian/11/amd64
Cached: no
Auto update: disabled
Profiles: []
lxc launch images:debian/11 test
lxc ls test
+------+---------+------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+---------+------+------+-----------+-----------+
| test | RUNNING | | | CONTAINER | 0 |
+------+---------+------+------+-----------+-----------+
lxc exec test -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
39: eth0@if40: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:3f:63:21 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::216:3eff:fe3f:6321/64 scope link
       valid_lft forever preferred_lft forever
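At this point it may also be worth checking whether systemd-networkd considers eth0 managed at all (networkctl ships with systemd):
lxc exec test -- networkctl list
If eth0 shows up as unmanaged, no .network file matched it; if it shows as configuring/degraded, the DHCP exchange itself is the problem.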
lxc config show test
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Debian bullseye amd64 (20220412_05:24)
  image.os: Debian
  image.release: bullseye
  image.serial: "20220412_05:24"
  image.type: squashfs
  image.variant: default
  volatile.base_image: 313492ff825913816ae8357588421203e0b969ca671d92e677c0601cc91c23a2
  volatile.eno3.host_name: veth1335cbbb
  volatile.eno3.hwaddr: 00:16:3e:3f:63:21
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: 726b38c9-0cf4-49cb-a8ec-a4679c88c3cc
devices: {}
ephemeral: false
profiles:
- default
stateful: false
description: ""
lxc profile show default
config: {}
description: Default LXD profile
devices:
  eno3:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
- /1.0/instances/test
[...]
lxc network show lxdbr0
config:
  ipv4.address: 10.167.139.1/24
  ipv4.nat: "true"
  ipv6.address: none
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/test
- /1.0/profiles/default
managed: true
status: Created
locations:
- none
netstat -lpun | grep \:53
udp 0 0 10.167.139.1:53 0.0.0.0:* 384833/dnsmasq
udp6 0 0 fe80::216:3eff:fe00::53 :::* 384833/dnsmasq
Good to know: if I launch debian/10, the network is up.
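Since what is missing is the DHCPv4 lease rather than DNS, it may also help to capture DHCP traffic on the bridge while the container starts (assuming tcpdump is installed on the host):
tcpdump -ni lxdbr0 port 67 or port 68
If the container’s DHCP DISCOVER packets never appear, the client side is at fault; if DISCOVERs appear but get no OFFER back, look at dnsmasq instead.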
That’s odd. My host is Ubuntu 22.04, and I tried with both LXD 4.24 and LXD 5.0. The debian/11 image always gets an IPv4 address.
Could you check lxc exec test -- systemctl --failed? I suspect that systemd-networkd is failing for some reason. That would explain the missing IPv4 address.
lxc exec test -- systemctl --failed
UNIT LOAD ACTIVE SUB DESCRIPTION
0 loaded units listed.
Can you check whether systemd-networkd is actually running?
root@test:~# systemctl status systemd-networkd
● systemd-networkd.service - Network Service
     Loaded: loaded (/lib/systemd/system/systemd-networkd.service; disabled; vendor preset: enabled)
    Drop-In: /run/systemd/system/service.d
             └─zzz-lxc-service.conf
     Active: active (running) since Tue 2022-04-12 07:09:46 UTC; 1h 58min ago
TriggeredBy: ● systemd-networkd.socket
       Docs: man:systemd-networkd.service(8)
   Main PID: 84 (systemd-network)
     Status: "Processing requests..."
      Tasks: 1 (limit: 38357)
     Memory: 4.3M
        CPU: 101ms
     CGroup: /system.slice/systemd-networkd.service
             └─84 /lib/systemd/systemd-networkd
Apr 12 07:09:46 test systemd[1]: Starting Network Service...
Apr 12 07:09:46 test systemd-networkd[84]: Failed to increase receive buffer size for general netlink socket, ignoring: Operation not permitted
Apr 12 07:09:46 test systemd-networkd[84]: Failed to increase buffer size for device monitor, ignoring: Operation not permitted
Apr 12 07:09:46 test systemd-networkd[84]: eth0: Gained IPv6LL
Apr 12 07:09:46 test systemd-networkd[84]: Enumeration completed
Apr 12 07:09:46 test systemd[1]: Started Network Service.
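One generic way to dig further is to run systemd-networkd with debug logging via a drop-in (a standard systemd technique, nothing LXD-specific):
$ systemctl edit systemd-networkd
[ add the following two lines in the drop-in, then save: ]
[Service]
Environment=SYSTEMD_LOG_LEVEL=debug
$ systemctl restart systemd-networkd
$ journalctl -b -u systemd-networkd
With that, every DHCPv4 state transition on eth0 should show up in the journal.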
@gqdc I’m out of ideas. You say the host is Debian 11? I’ll try to reproduce the issue in a Debian 11 VM.
I’ll leave the test container as is, so do not hesitate to ask me for more information or to have me try something.
I tried running a Debian 11 container in a Debian 11 VM and it works as expected. There was only one difference I noticed:
root@c1:~# systemctl status systemd-networkd
● systemd-networkd.service - Network Service
     Loaded: loaded (/lib/systemd/system/systemd-networkd.service; disabled; vendor preset: enabled)
    Drop-In: /run/systemd/system/service.d
             └─zzz-lxc-service.conf
     Active: active (running) since Tue 2022-04-19 08:04:25 UTC; 3min 19s ago
TriggeredBy: ● systemd-networkd.socket
       Docs: man:systemd-networkd.service(8)
   Main PID: 85 (systemd-network)
     Status: "Processing requests..."
      Tasks: 1 (limit: 1129)
     Memory: 1.0M
        CPU: 54ms
     CGroup: /system.slice/systemd-networkd.service
             └─85 /lib/systemd/systemd-networkd
Apr 19 08:04:25 c1 systemd[1]: Starting Network Service...
Apr 19 08:04:25 c1 systemd-networkd[85]: Failed to increase receive buffer size for general netlink socket, ignoring: Operation not permitted
Apr 19 08:04:25 c1 systemd-networkd[85]: Failed to increase buffer size for device monitor, ignoring: Operation not permitted
Apr 19 08:04:25 c1 systemd-networkd[85]: eth0: Gained IPv6LL
Apr 19 08:04:25 c1 systemd-networkd[85]: Enumeration completed
Apr 19 08:04:25 c1 systemd[1]: Started Network Service.
Apr 19 08:04:26 c1 systemd-networkd[85]: eth0: DHCPv4 address 10.106.166.94/24 via 10.106.166.1
The last line was missing in your output. Is that correct? So for some reason, DHCP is not working for you.
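One way to separate a networkd problem from a bridge/dnsmasq problem is to request a lease by hand inside the container (assuming isc-dhcp-client is installed; it may need an apt-get install isc-dhcp-client first):
$ dhclient -v eth0
If that obtains an address, dnsmasq on lxdbr0 is fine and the issue is on the systemd-networkd side.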
Okay, thanks for your test. I do not have the last line. Maybe something is weird on my hosts.
I’ll install LXD 5.0 from scratch on a new host; I think everything will be all right then.
Edit: I created a container on a new LXD 5.0 server, and the network is fine.
I have this problem when the host is Debian 10 and the container is Debian 11.
I just installed a fresh copy of Debian 10 in a VM and could reproduce the problem there too.
All the other containers (Ubuntu 22.04, Debian 10, etc.) got an IPv4 address; only Debian 11 didn’t.
root@bullseye:~# systemctl status systemd-networkd
● systemd-networkd.service - Network Service
     Loaded: loaded (/lib/systemd/system/systemd-networkd.service; enabled; vendor preset: enabled)
    Drop-In: /run/systemd/system/service.d
             └─zzz-lxc-service.conf
     Active: active (running) since Mon 2022-05-02 14:48:03 UTC; 21min ago
TriggeredBy: ● systemd-networkd.socket
       Docs: man:systemd-networkd.service(8)
   Main PID: 69 (systemd-network)
     Status: "Processing requests..."
      Tasks: 1 (limit: 4669)
     Memory: 1.3M
     CGroup: /system.slice/systemd-networkd.service
             └─69 /lib/systemd/systemd-networkd
May 02 14:48:03 bullseye systemd[1]: Starting Network Service...
May 02 14:48:03 bullseye systemd-networkd[69]: Failed to increase receive buffer size for general netlink socket, ignoring: Operation not permitted
May 02 14:48:03 bullseye systemd-networkd[69]: Failed to increase buffer size for device monitor, ignoring: Operation not permitted
May 02 14:48:03 bullseye systemd-networkd[69]: Enumeration completed
May 02 14:48:03 bullseye systemd[1]: Started Network Service.
May 02 14:48:05 bullseye systemd-networkd[69]: eth0: Gained IPv6LL
There is an error in the dmesg when the container starts:
[ 1959.515677] IPv6: ADDRCONF(NETDEV_UP): vethea2c1c1e: link is not ready
[ 1959.549010] lxdbr0: port 1(vethea2c1c1e) entered blocking state
[ 1959.549012] lxdbr0: port 1(vethea2c1c1e) entered disabled state
[ 1959.549065] device vethea2c1c1e entered promiscuous mode
[ 1959.697970] audit: type=1400 audit(1651502883.247:84): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxd-bullseye_</var/snap/lxd/common/lxd>" pid=5334 comm="apparmor_parser"
[ 1959.772695] physa0asry: renamed from vethc6dc4ece
[ 1959.785669] eth0: renamed from physa0asry
[ 1959.800572] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1959.804561] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1959.804653] lxdbr0: port 1(vethea2c1c1e) entered blocking state
[ 1959.804658] lxdbr0: port 1(vethea2c1c1e) entered forwarding state
[ 1959.881075] audit: type=1400 audit(1651502883.431:85): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-bullseye_</var/snap/lxd/common/lxd>" name="/sys/fs/cgroup/" pid=5335 comm="systemd" fstype="cgroup2" srcname="cgroup2" flags="rw, nosuid, nodev, noexec"
[ 1959.881160] audit: type=1400 audit(1651502883.431:86): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-bullseye_</var/snap/lxd/common/lxd>" name="/sys/fs/cgroup/" pid=5335 comm="systemd" fstype="cgroup2" srcname="cgroup2" flags="rw, nosuid, nodev, noexec"
[ 1959.881221] audit: type=1400 audit(1651502883.431:87): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-bullseye_</var/snap/lxd/common/lxd>" name="/sys/fs/cgroup/" pid=5335 comm="systemd" fstype="cgroup2" srcname="cgroup2" flags="rw, nosuid, nodev, noexec"
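Those AppArmor denials show systemd inside the bullseye container failing to mount the unified cgroup hierarchy (cgroup2). Debian 10 hosts still default to the legacy/hybrid cgroup layout; which layout the host offers can be checked with a standard coreutils call:
stat -fc %T /sys/fs/cgroup/
cgroup2fs means the unified hierarchy (cgroup v2); tmpfs means the legacy/hybrid layout (cgroup v1).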
Continuing the discussion from Systemd-networkd: not working in Debian Sid or Bullseye images:
I have the same problem with an Ubuntu Jammy host and a Debian Bullseye container.
First, I ran into an AppArmor problem with the systemd-networkd service. I worked around it by setting security.nesting to true on the container (which seems harmless for an unprivileged container, according to stgraber).
Then the systemd-networkd service still fails (without an error) to bring up my container’s bridged Ethernet IPv4 interface, although IPv6 works well (the service debug log shows: Failed to increase receive buffer size/buffer…). So I used iproute2 commands to set up the IPv4 interface by hand, as sketched below (without that, there is no network access at all inside my container).
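For reference, a manual IPv4 bring-up along those lines might look like this (the addresses are hypothetical examples based on the lxdbr0 subnet shown earlier in the thread, not taken from this post):
$ ip addr add 10.167.139.50/24 dev eth0
$ ip route add default via 10.167.139.1
$ echo 'nameserver 10.167.139.1' > /etc/resolv.conf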
I don’t agree with your test results, because installing udev does not fix it for me. Besides, I have systemd v247.3-7 and the problem is still present…
Would you advise me to use a Debian Buster container instead?