Managed Bridge DHCP stopped working since move to core20 snap

Hello,

Today, after rebooting one of my boxes, the DHCP server on the lxdbr0 interface stopped working. However, the containers using the unmanaged br0 bridge are still getting their IPs.

If I manually set an IP address in the containers, they get proper network access.

# lxc network list
+--------+----------+---------+----------------+---------------------------+-------------+---------+
|  NAME  |   TYPE   | MANAGED |      IPV4      |           IPV6            | DESCRIPTION | USED BY |
+--------+----------+---------+----------------+---------------------------+-------------+---------+
| br0    | bridge   | NO      |                |                           |             | 4       |
+--------+----------+---------+----------------+---------------------------+-------------+---------+
| enp2s0 | physical | NO      |                |                           |             | 0       |
+--------+----------+---------+----------------+---------------------------+-------------+---------+
| enp3s0 | physical | NO      |                |                           |             | 0       |
+--------+----------+---------+----------------+---------------------------+-------------+---------+
| lxdbr0 | bridge   | YES     | 10.39.199.1/24 | fd42:5620:161d:1d2b::1/64 |             | 10      |
+--------+----------+---------+----------------+---------------------------+-------------+---------+
| virbr0 | bridge   | NO      |                |                           |             | 0       |
+--------+----------+---------+----------------+---------------------------+-------------+---------+
# lxc profile show default
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
- /1.0/instances/dns
- /1.0/instances/front
- /1.0/instances/mqtt
- /1.0/instances/database
- /1.0/instances/monit
- /1.0/instances/cuddly-werewolf
# lxc config show cuddly-werewolf 
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 20.04 LTS amd64 (release) (20210610)
  image.label: release
  image.os: ubuntu
  image.release: focal
  image.serial: "20210610"
  image.type: squashfs
  image.version: "20.04"
  volatile.base_image: 9ba1aa2f5ddea5f6b239cb1c05af8e4482c7c252e2d95dafc32686e80af5e884
  volatile.eth0.host_name: vethe0a1553d
  volatile.eth0.hwaddr: 00:16:3e:e0:b9:4c
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: 6d775551-0d69-4bac-8596-807966e9eadb
devices: {}
ephemeral: false
profiles:
- default
stateful: false
description: ""
# lxc network show lxdbr0
config:
  ipv4.address: 10.39.199.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:5620:161d:1d2b::1/64
  ipv6.nat: "true"
  raw.dnsmasq: |
    auth-zone=lxd
    dns-loop-detect
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/cuddly-werewolf
- /1.0/instances/database
- /1.0/instances/dns
- /1.0/instances/front
- /1.0/instances/hass
- /1.0/instances/monit
- /1.0/instances/mqtt
- /1.0/profiles/app-local-network
- /1.0/profiles/default
- /1.0/profiles/dual-nic
managed: true
status: Created
locations:
- none

# cat /var/snap/lxd/common/lxd/logs/lxd.log
t=2021-06-17T20:27:57+1100 lvl=info msg="LXD 4.15 is starting in normal mode" path=/var/snap/lxd/common/lxd
t=2021-06-17T20:27:57+1100 lvl=info msg="Kernel uid/gid map:" 
t=2021-06-17T20:27:57+1100 lvl=info msg=" - u 0 0 4294967295" 
t=2021-06-17T20:27:57+1100 lvl=info msg=" - g 0 0 4294967295" 
t=2021-06-17T20:27:57+1100 lvl=info msg="Configured LXD uid/gid map:" 
t=2021-06-17T20:27:57+1100 lvl=info msg=" - u 0 1000000 1000000000" 
t=2021-06-17T20:27:57+1100 lvl=info msg=" - g 0 1000000 1000000000" 
t=2021-06-17T20:27:57+1100 lvl=info msg="Kernel features:" 
t=2021-06-17T20:27:57+1100 lvl=info msg=" - closing multiple file descriptors efficiently: no" 
t=2021-06-17T20:27:57+1100 lvl=info msg=" - netnsid-based network retrieval: yes" 
t=2021-06-17T20:27:57+1100 lvl=info msg=" - pidfds: yes" 
t=2021-06-17T20:27:57+1100 lvl=info msg=" - uevent injection: yes" 
t=2021-06-17T20:27:57+1100 lvl=info msg=" - seccomp listener: yes" 
t=2021-06-17T20:27:57+1100 lvl=info msg=" - seccomp listener continue syscalls: yes" 
t=2021-06-17T20:27:57+1100 lvl=info msg=" - seccomp listener add file descriptors: no" 
t=2021-06-17T20:27:57+1100 lvl=info msg=" - attach to namespaces via pidfds: yes" 
t=2021-06-17T20:27:57+1100 lvl=info msg=" - safe native terminal allocation : yes" 
t=2021-06-17T20:27:57+1100 lvl=info msg=" - unprivileged file capabilities: yes" 
t=2021-06-17T20:27:57+1100 lvl=info msg=" - cgroup layout: hybrid" 
t=2021-06-17T20:27:57+1100 lvl=warn msg=" - Couldn't find the CGroup blkio.weight, disk priority will be ignored" 
t=2021-06-17T20:27:57+1100 lvl=info msg=" - shiftfs support: disabled" 
t=2021-06-17T20:27:57+1100 lvl=info msg="Initializing local database" 
t=2021-06-17T20:27:57+1100 lvl=info msg="Set client certificate to server certificate 0879c2eb0f3a749e0ad41b48484139401f0927290a812545e8e131aa80922a7c" 
t=2021-06-17T20:27:58+1100 lvl=info msg="Starting /dev/lxd handler:" 
t=2021-06-17T20:27:58+1100 lvl=info msg=" - binding devlxd socket" socket=/var/snap/lxd/common/lxd/devlxd/sock
t=2021-06-17T20:27:58+1100 lvl=info msg="REST API daemon:" 
t=2021-06-17T20:27:58+1100 lvl=info msg=" - binding Unix socket" inherited=true socket=/var/snap/lxd/common/lxd/unix.socket
t=2021-06-17T20:27:58+1100 lvl=info msg="Initializing global database" 
t=2021-06-17T20:27:58+1100 lvl=info msg="Firewall loaded driver \"nftables\"" 
t=2021-06-17T20:27:58+1100 lvl=info msg="Initializing storage pools" 
t=2021-06-17T20:27:59+1100 lvl=info msg="Initializing daemon storage mounts" 
t=2021-06-17T20:27:59+1100 lvl=info msg="Initializing networks" 
t=2021-06-17T20:28:00+1100 lvl=warn msg="Skipping AppArmor for dnsmasq due to raw.dnsmasq being set" driver=bridge name=lxdbr0 network=lxdbr0 project=default
t=2021-06-17T20:28:00+1100 lvl=info msg="Pruning leftover image files" 
t=2021-06-17T20:28:00+1100 lvl=info msg="Done pruning leftover image files" 
t=2021-06-17T20:28:00+1100 lvl=info msg="Loading daemon configuration" 
t=2021-06-17T20:28:00+1100 lvl=info msg="Started seccomp handler" path=/var/snap/lxd/common/lxd/seccomp.socket
t=2021-06-17T20:28:00+1100 lvl=info msg="Pruning expired images" 
t=2021-06-17T20:28:00+1100 lvl=info msg="Done pruning expired images" 
t=2021-06-17T20:28:00+1100 lvl=info msg="Pruning expired instance backups" 
t=2021-06-17T20:28:00+1100 lvl=info msg="Done pruning expired instance backups" 
t=2021-06-17T20:28:00+1100 lvl=info msg="Expiring log files" 
t=2021-06-17T20:28:00+1100 lvl=info msg="Done expiring log files" 
t=2021-06-17T20:28:00+1100 lvl=info msg="Pruning resolved warnings" 
t=2021-06-17T20:28:00+1100 lvl=info msg="Done pruning resolved warnings" 
t=2021-06-17T20:28:00+1100 lvl=info msg="Updating instance types" 
t=2021-06-17T20:28:00+1100 lvl=info msg="Updating images" 
t=2021-06-17T20:28:00+1100 lvl=info msg="Done updating instance types" 
t=2021-06-17T20:28:00+1100 lvl=info msg="Done updating images" 
t=2021-06-17T20:28:01+1100 lvl=info msg="Starting container" action=start created=2021-06-17T20:05:33+1100 ephemeral=false instance=cuddly-werewolf instanceType=container project=default stateful=false used=2021-06-17T20:13:17+1100
t=2021-06-17T20:28:01+1100 lvl=info msg="Started container" action=start created=2021-06-17T20:05:33+1100 ephemeral=false instance=cuddly-werewolf instanceType=container project=default stateful=false used=2021-06-17T20:13:17+1100
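
The "Skipping AppArmor for dnsmasq" warning above turned out to be the relevant line. A quick way to surface warning-level lines from LXD's logfmt output (a sketch; in practice you would grep the real log file, /var/snap/lxd/common/lxd/logs/lxd.log, instead of the inlined sample lines):

```shell
# Filter LXD's logfmt output down to warning-level lines.
# Two sample lines are inlined here so the snippet is self-contained;
# normally you would read the actual lxd.log instead.
printf '%s\n' \
  't=2021-06-17T20:28:00+1100 lvl=warn msg="Skipping AppArmor for dnsmasq due to raw.dnsmasq being set" network=lxdbr0' \
  't=2021-06-17T20:28:00+1100 lvl=info msg="Pruning leftover image files"' \
  | grep 'lvl=warn'
```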

Duplicate of Containers suddenly stopped working - No more IP's assigned

Thanks! I cleared raw.dnsmasq with `lxc network unset lxdbr0 raw.dnsmasq`.

I also use ufw, and I had to run `ufw allow in on lxdbr0`. So all good now, but I don’t get why it was working before I rebooted that box earlier today.
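
For anyone landing here with the same symptoms, the two fixes amount to the following (a sketch; the bridge name lxdbr0 matches my setup, adjust to yours):

```shell
# Remove the custom dnsmasq options (suggested upthread); with
# raw.dnsmasq set, LXD also skips AppArmor confinement for dnsmasq,
# as the lvl=warn log line shows.
lxc network unset lxdbr0 raw.dnsmasq

# Allow inbound traffic on the managed bridge so DHCP/DNS requests
# from the containers can reach the host's dnsmasq.
sudo ufw allow in on lxdbr0
```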

@Lox most likely because modifying the lxdbr0 network’s raw.dnsmasq setting caused LXD to remove and re-add its firewall rules, potentially changing their order relative to another ruleset (such as ufw’s) that is normally added after LXD has started.

See Lxd bridge doesn't work with IPv4 and UFW with nftables - #17 by tomp for a more thorough example.
