lxdbr0 down in both Ubuntu 20.04 and CentOS 7 on a virgin install

I’m not sure if this is an LXD 4.8 issue or the fact that I’m attempting to use LXD inside a VMware Fusion virtual machine.

Either way, I’m seeing:
3: lxdbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:5f:e8:f5 brd ff:ff:ff:ff:ff:ff
    inet 10.69.114.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fd42:dcd9:5a35:cf8::1/64 scope global
       valid_lft forever preferred_lft forever

No way to bring lxdbr0 up…

Does anyone have any idea how to fix this?

What happens when you do ip link set lxdbr0 up?

# ip link set lxdbr0 up

It returns a 0 exit status, but nothing changes…
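For the record, roughly how I’m checking (plain iproute2, same on both distros):

sudo ip link set lxdbr0 up
echo $?                    # prints 0
ip -br link show lxdbr0    # still reports lxdbr0 DOWN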
/var/log/messages indicates…

Nov 21 15:46:14 home kernel: lxdbr0: port 1(veth726a8ded) entered disabled state
Nov 21 15:46:14 home kernel: lxdbr0: port 1(veth726a8ded) entered blocking state
Nov 21 15:46:14 home kernel: lxdbr0: port 1(veth726a8ded) entered forwarding state
Nov 21 15:46:14 home NetworkManager[986]: [1605973574.2987] device (lxdbr0): carrier: link connected
Nov 21 15:46:14 home kernel: lxdbr0: port 1(veth726a8ded) entered disabled state
Nov 21 15:46:14 home NetworkManager[986]: [1605973574.3456] device (veth726a8ded): released from master device lxdbr0

So that’s pretty unlikely to be an LXD issue then. LXD would have effectively done exactly that already; if the kernel then flips it back to down, that suggests something else on your system is messing with it.
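If you want to catch whatever is flipping it, you can watch link events live while you bring the bridge up (plain iproute2, nothing LXD-specific):

# terminal 1: print link state changes as they happen
ip monitor link

# terminal 2: bring the bridge up and see what reacts
ip link set lxdbr0 up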

It could be NetworkManager, as it appears to be reacting to things in this case, but it could be something else.
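One way to rule NetworkManager out without disabling it entirely is to mark LXD’s interfaces as unmanaged. A minimal sketch (the file name is arbitrary, anything in conf.d works):

# /etc/NetworkManager/conf.d/10-lxd.conf
[keyfile]
unmanaged-devices=interface-name:lxdbr0;interface-name:veth*

# then restart NetworkManager so it picks the config up
sudo systemctl restart NetworkManager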

That’s what I thought. I was running CentOS 7 and it didn’t seem quite right; I rebooted and suddenly none of the containers had DNS resolution.

So I created an Ubuntu 20.04 install, which I’ve used more recently, and it did exactly the same thing on a virgin install.

Which makes me think it’s probably something connected to VMware, though I don’t see networking mentioned in any of the release notes.

So maybe the issue is that VMware creates a bridge and the LXD bridge isn’t able to nest?

OK, so I switched to simplified devices and disabled NetworkManager, and it’s starting to work reliably again.
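For reference, roughly what I ran (service names as on my boxes; adjust to taste):

sudo systemctl disable --now NetworkManager
# CentOS 7 only: fall back to the classic network scripts
sudo systemctl enable --now network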

So it’s nothing to do with LXD, though perhaps it’s NetworkManager’s integration with a VMware bridged device. I’ve not seen this problem on bare-metal Linux nodes running exactly the same OS versions under NetworkManager.