No networking in containers, missing dnsmasq? Snap on Debian

Please start a broken container and show the output of the following on the host:

  • ip a
  • bridge link show
  • lxc config show <instance>

ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 24:4b:fe:0c:5c:09 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.239/24 brd 192.168.1.255 scope global dynamic noprefixroute enp5s0
valid_lft 86332sec preferred_lft 86332sec
inet6 fe80::264b:feff:fe0c:5c09/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: lxdbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:16:3e:ba:47:ea brd ff:ff:ff:ff:ff:ff
inet 10.4.101.1/24 scope global lxdbr1
valid_lft forever preferred_lft forever
inet6 fd42:9c75:2e7d:6b8::1/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::216:3eff:feba:47ea/64 scope link
valid_lft forever preferred_lft forever
5: veth25bd57c2@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr1 state UP group default qlen 1000
link/ether a6:ac:9c:96:6d:9e brd ff:ff:ff:ff:ff:ff link-netnsid 0

lxc config show Caixa
WARNING: cgroup v2 is not fully supported yet, proceeding with partial confinement
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 16.04 LTS amd64 (release) (20200708)
  image.label: release
  image.os: ubuntu
  image.release: xenial
  image.serial: "20200708"
  image.type: squashfs
  image.version: "16.04"
  volatile.base_image: 6be2a8c660ebaf93a5ea10b2313edc0804375c44bfcb39c76c45900fe1647376
  volatile.eth0.host_name: veth25bd57c2
  volatile.eth0.hwaddr: 00:16:3e:07:c0:74
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: 6ecebd5d-221d-49b7-b039-c7329ff2bd18
devices:
  Downloads:
    path: /home/ubuntu/Downloads
    source: /home/alex/Downloads
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

bridge link show
5: veth25bd57c2@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master lxdbr1 state forwarding priority 32 cost 2

I can’t figure out why lxdbr1 was UP this time. No other container was running.

It’s up because veth25bd57c2 is connected to it, and the other end of veth25bd57c2 is inside the container Caixa. You can see from the volatile.eth0.host_name: veth25bd57c2 entry that it all lines up.
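
If you want to confirm the pairing yourself, a quick check might look like this (a sketch, assuming the container is named Caixa and the bridge is lxdbr1, as in your output):

# host side: list the veth interfaces attached to the bridge
ip link show master lxdbr1

# container side: the @if4 suffix on the host veth refers to the peer's
# interface index inside the container, which should be eth0 there
lxc exec Caixa -- ip a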

That’s right about the veth. However, I still have no idea how to fix this network issue.

The difficulty I’m having is that I can’t get a clear picture of what is going on with your machine or what config you have set, as it appears to have changed each time I ask.

  • Initially we were talking about lxdbr0, now we are talking about lxdbr1.
  • Sometimes lxdbr1 is down with running instances, but other times it’s up with running instances.

Unless we stabilise the config and get a clear reproducer of the issue, along with a snapshot of the interface state and dnsmasq listeners at the time, and a tcpdump of the traffic on the bridge while the instance is starting (in order to capture the DHCP request and confirm it is actually happening), I am not going to be able to advise further.
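
For reference, gathering that could look roughly like this (a sketch, assuming the bridge is lxdbr1 and the instance is Caixa; the file names are just placeholders):

# terminal 1: capture DHCP traffic on the bridge
sudo tcpdump -ni lxdbr1 -w dhcp.pcap udp port 67 or udp port 68

# terminal 2: record interface state and dnsmasq listeners, then start the instance
ip a > iface-state.txt
bridge link show >> iface-state.txt
sudo ss -ulnp | grep dnsmasq > dnsmasq-listeners.txt
lxc start Caixa
lxc exec Caixa -- ip a    # check whether eth0 received an address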

I understand. I’m sorry; I’ve messed with those settings while trying to get your help. Furthermore, I’ve wasted too much of your precious time, for which I want to thank you so much. I think the best solution in this case will be to migrate the old containers to new ones.


Sounds good.

It could be that something on your system (NetworkManager perhaps) is trying to manage the lxdbr1 interface and is interfering with LXD’s operation of it.
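
A quick way to check, and to tell NetworkManager to leave the bridge alone if it is managing it (a sketch, assuming NetworkManager is installed; the lxd.conf file name is arbitrary):

# see whether NetworkManager lists lxdbr1 as managed
nmcli device status

# if it does, mark the interface as unmanaged and reload NetworkManager
sudo tee /etc/NetworkManager/conf.d/lxd.conf <<'EOF'
[keyfile]
unmanaged-devices=interface-name:lxdbr1
EOF
sudo systemctl reload NetworkManager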