Shutting down container brings host network interface down

Hello! I have been trying to run Alpine (edge) containers on Debian (bookworm unstable, kernel 6.1.0, LXD/LXC 5.0.2), but every time I run lxc stop or lxc restart, the host's network adapter goes down and my SSH connection hangs.

Based on this discussion, I've tried statically defining bridge.hwaddr in the lxdbr0 network config:

config:
  bridge.hwaddr: 02:3a:f6:d5:08:f1
  ipv4.address: 10.128.37.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:fb55:60e2:5f53::1/64
  ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/profiles/default
- /1.0/instances/still-elk
- /1.0/instances/above-cockatoo 
[truncating instance list for brevity]
managed: true
status: Created
locations:
- none

but to no avail.
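
For reference, setting and verifying that key looks roughly like this (standard lxc network set / lxc network show syntax; the MAC address is the one from the config above):

lxc network set lxdbr0 bridge.hwaddr 02:3a:f6:d5:08:f1
lxc network show lxdbr0
ip link show lxdbr0    # confirm the bridge actually carries the static MAC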

Based on this discussion, I disabled the Docker userland-proxy and added the ENV{INTERFACE}=="veth*", ENV{NM_UNMANAGED}="1" udev rule.
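
For reference, that combination looks roughly like this (the file paths and the rule filename are assumptions on my part; the rule itself is the one quoted above):

# /etc/docker/daemon.json (path assumed): disable Docker's userland proxy
{
  "userland-proxy": false
}

# /etc/udev/rules.d/90-lxd-veth-unmanaged.rules (filename is my own)
ENV{INTERFACE}=="veth*", ENV{NM_UNMANAGED}="1"

# apply without rebooting
sudo systemctl restart docker
sudo udevadm control --reload-rules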

After running lxc restart on one of the containers, the network froze, and I used Linode's Lish console to check the kernel logs. They show the following, but nothing in them stood out to me:

[  103.846824] veth755f10f4: renamed from physpVCCoj
[  103.854632] lxdbr0: port 1(vethc5d309a8) entered disabled state
[  104.019509] device vethc5d309a8 left promiscuous mode
[  104.020028] lxdbr0: port 1(vethc5d309a8) entered disabled state
[  104.749945] kauditd_printk_skb: 7 callbacks suppressed
[  104.749949] audit: type=1400 audit(1677465441.806:61): apparmor="STATUS" operation="profile_remove" profile="unconfined" name="lxd-above-cockatoo_</var/lib/lxd>" pid=6774 comm="apparmor_parser"
[  104.782461] lxdbr0: port 1(vethe3381d02) entered blocking state
[  104.783109] lxdbr0: port 1(vethe3381d02) entered disabled state
[  104.783699] device vethe3381d02 entered promiscuous mode
[  104.784269] lxdbr0: port 1(vethe3381d02) entered blocking state
[  104.784840] lxdbr0: port 1(vethe3381d02) entered forwarding state
[  104.862468] lxdbr0: port 1(vethe3381d02) entered disabled state
[  104.872434] audit: type=1400 audit(1677465441.930:62): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxd-above-cockatoo_</var/lib/lxd>" pid=6807 comm="apparmor_parser"
[  104.927807] physBKzS9V: renamed from veth911ecca6
[  104.938745] eth0: renamed from physBKzS9V
[  104.951534] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[  104.952262] lxdbr0: port 1(vethe3381d02) entered blocking state
[  104.952934] lxdbr0: port 1(vethe3381d02) entered forwarding state
[  104.972410] Not activating Mandatory Access Control as /sbin/tomoyo-init does not exist.
[  105.116836] audit: type=1400 audit(1677465442.174:63): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-above-cockatoo_</var/lib/lxd>" name="/dev/" pid=7052 comm="busybox" flags="rw, nosuid, noexec, remount, silent"
[  105.120142] audit: type=1400 audit(1677465442.182:64): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-above-cockatoo_</var/lib/lxd>" name="/dev/" pid=7052 comm="busybox" flags="ro, nosuid, noexec, remount, silent"
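
For anyone reproducing this, the same kernel messages can be pulled from the Lish console with something like:

dmesg | tail -n 50
# or, on systemd hosts:
journalctl -k -n 50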

Even though “link becomes ready” is shown, I am still unable to ssh back in.

I have no problem starting containers, or using the network within the containers once they are up.

My apologies if this post is a bit unwieldy, I’m still new to the forum!

When everything is up and running, please can you show me the output of:

  • ip a
  • ip r
  • bridge link show

From the LXD host.

And then, from the Lish shell, stop the container that causes the problem and re-run the commands so I can see the difference.

Hi Thomas, sorry for the late reply:

I'm going to keep it to diffs, since I had a ton of containers configured (stress testing en masse). All of my containers have this issue, so I simply restarted the first one in the list.
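
For context, the snapshots being diffed were produced roughly like this (the directory and file names simply match the diff headers below):

mkdir -p before after
ip a > before/ip_a.log; ip r > before/ip_r.log; bridge link show > before/bridge_link_show.log
lxc restart <container>    # run from Lish, since SSH drops at this point
ip a > after/ip_a.log; ip r > after/ip_r.log; bridge link show > after/bridge_link_show.log
diff -u before/ip_a.log after/ip_a.log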

ip a

--- a/before/ip_a.log
+++ b/after/ip_a.log
@@ -4,14 +4,8 @@
        valid_lft forever preferred_lft forever
     inet6 ::1/128 scope host
        valid_lft forever preferred_lft forever
-2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
+2: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000
     link/ether f2:3c:93:fd:e7:b6 brd ff:ff:ff:ff:ff:ff
-    inet 198.74.60.17/24 brd 198.74.60.255 scope global eth0
-       valid_lft forever preferred_lft forever
-    inet6 2600:3c03::f03c:93ff:fefd:e7b6/64 scope global dynamic mngtmpaddr
-       valid_lft 5313sec preferred_lft 1713sec
-    inet6 fe80::f03c:93ff:fefd:e7b6/64 scope link
-       valid_lft forever preferred_lft forever
 3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
     link/ether 02:42:55:c5:94:9b brd ff:ff:ff:ff:ff:ff
     inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
@@ -26,8 +20,6 @@
        valid_lft forever preferred_lft forever
 6: vethea4a9e08@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
     link/ether 82:1f:36:76:86:df brd ff:ff:ff:ff:ff:ff link-netnsid 0
-8: vetha19d108b@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
-    link/ether ae:9d:b3:32:69:d7 brd ff:ff:ff:ff:ff:ff link-netnsid 1
 10: veth9166c372@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
     link/ether fe:e6:7c:b6:86:b7 brd ff:ff:ff:ff:ff:ff link-netnsid 2
 12: veth0fb85181@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
@@ -98,3 +90,5 @@
     link/ether f6:45:69:0e:c6:b8 brd ff:ff:ff:ff:ff:ff link-netnsid 36
 79: vethbdd8ec11@if78: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
     link/ether 8e:8b:95:84:34:6e brd ff:ff:ff:ff:ff:ff link-netnsid 37
+81: veth8508ff0a@if80: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
+    link/ether 66:27:31:a5:2b:59 brd ff:ff:ff:ff:ff:ff link-netnsid 1

ip r - this seems like it might be the root of the issue: the default route via eth0 disappears!

--- a/before/ip_r.log
+++ b/after/ip_r.log
@@ -1,4 +1,2 @@
-default via 198.74.60.1 dev eth0 onlink
 10.128.37.0/24 dev lxdbr0 proto kernel scope link src 10.128.37.1
 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
-198.74.60.0/24 dev eth0 proto kernel scope link src 198.74.60.17
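
For anyone stuck at this point, a hedged recovery sketch from the out-of-band console, re-adding exactly what the "before" logs show (address, broadcast and gateway taken from the ip a and ip r output above):

ip link set eth0 up
ip addr add 198.74.60.17/24 brd 198.74.60.255 dev eth0
ip route add default via 198.74.60.1 dev eth0 onlink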

bridge link show

--- a/before/bridge_link_show.log
+++ b/after/bridge_link_show.log
@@ -1,5 +1,4 @@
 6: vethea4a9e08@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master lxdbr0 state forwarding priority 32 cost 2
-8: vetha19d108b@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master lxdbr0 state forwarding priority 32 cost 2
 10: veth9166c372@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master lxdbr0 state forwarding priority 32 cost 2
 12: veth0fb85181@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master lxdbr0 state forwarding priority 32 cost 2
 14: veth551b9ada@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master lxdbr0 state forwarding priority 32 cost 2
@@ -35,3 +34,4 @@
 75: vethf5d0a679@if74: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master lxdbr0 state forwarding priority 32 cost 2
 77: veth1ef1af23@if76: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master lxdbr0 state forwarding priority 32 cost 2
 79: vethbdd8ec11@if78: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master lxdbr0 state forwarding priority 32 cost 2
+81: veth8508ff0a@if80: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master lxdbr0 state forwarding priority 32 cost 2

I no longer have access to this machine, so we can close this thread.