Instance not getting IP address from LXD managed bridge

I have created a bridge network through LXD and attached it to a profile, however instances on it are not receiving an IP address. I also have an unmanaged bridge network that uses my local network’s DHCP server to assign addresses, and that one works without any issues.

Following is the output of lxc network show lxdbr0:

config:
  ipv4.address: 10.58.44.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:a3f6:e51e:35fa::1/64
  ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/test
- /1.0/profiles/test
managed: true
status: Created
locations:
- none

The test profile:

config:
  user.network-config: |
    version: 1
    config:
      - type: physical
        name: eth0
        subnets:
          - type: dhcp
            ipv4: true
description: Test
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
name: test
used_by:
- /1.0/instances/test

And the lxc config show test --expanded of the test container:

architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 20.04 LTS amd64 (release) (20201210)
  image.label: release
  image.os: ubuntu
  image.release: focal
  image.serial: "20201210"
  image.type: squashfs
  image.version: "20.04"
  user.network-config: |
    version: 1
    config:
      - type: physical
        name: eth0
        subnets:
          - type: dhcp
            ipv4: true
  volatile.base_image: e0c3495ffd489748aa5151628fa56619e6143958f041223cb4970731ef939cb6
  volatile.eth0.host_name: vethabd69e5b
  volatile.eth0.hwaddr: 00:16:3e:1c:56:ab
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: e2ad9a30-02c9-452a-9d4e-2546687ec2c5
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- test
stateful: false
description: ""

Any tips are appreciated.

It seems the instance is getting an IPv6 address now, but still no IPv4:

+--------------+---------+----------------------+-----------------------------------------------+-----------+-----------+
|     NAME     |  STATE  |         IPV4         |                     IPV6                      |   TYPE    | SNAPSHOTS |
+--------------+---------+----------------------+-----------------------------------------------+-----------+-----------+
| test         | RUNNING |                      | fd42:a3f6:e51e:35fa:216:3eff:fe1c:56ab (eth0) | CONTAINER | 0         |
+--------------+---------+----------------------+-----------------------------------------------+-----------+-----------+

See https://linuxcontainers.org/lxd/advanced-guide/#cloud-init
You need to add the #cloud-config line at the top. While it looks like a harmless comment, it is required for cloud-init to parse the instructions properly.
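With that suggestion applied, the user.network-config in your profile would look like this (same settings as above, only the marker line added):

config:
  user.network-config: |
    #cloud-config
    version: 1
    config:
      - type: physical
        name: eth0
        subnets:
          - type: dhcp
            ipv4: true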

You can also check in /etc/netplan/ inside the container that your cloud-init instructions have made it through. Your configuration looks just like the default in the Ubuntu images, which means you can omit it entirely unless you need some extra functionality.
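For example, you can inspect the rendered netplan configuration from the host (the file name 50-cloud-init.yaml is the usual default in Ubuntu cloud images, but it may differ on your system):

$ lxc exec test -- ls /etc/netplan/
$ lxc exec test -- cat /etc/netplan/50-cloud-init.yaml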

Thank you for the tip. I was actually following your guide here to add a private network alongside my LAN, which was very helpful. I was testing with the private network only first, before adding my unmanaged bridge interface.

I removed the cloud-init config to make sure that lxdbr0 was not the issue:

config: {}
description: Test
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
name: test
used_by:
- /1.0/instances/test

I created the container again, however it still does not get an IPv4 address:

$ lxc launch ubuntu:20.04 test -p test -s default
$ lxc list
+--------------+---------+----------------------+------+-----------+-----------+
|     NAME     |  STATE  |         IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
+--------------+---------+----------------------+------+-----------+-----------+
| test         | RUNNING |                      |      | CONTAINER | 0         |
+--------------+---------+----------------------+------+-----------+-----------+

The baseline (the simplest test) is to verify that your container gets an IPv4 address when you launch it as:

$ lxc launch ubuntu:20.04 mycontainer

That line is equivalent to lxc launch ubuntu:20.04 mycontainer --profile default. That is, if you do not specify a profile, the default profile is applied.

When you apply profiles, the order matters. Always put the default profile first: if profiles share configuration keys, later profiles override earlier ones. That is, specify your own profile last when you launch containers.
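For example, assuming your custom profile is named test:

$ lxc launch ubuntu:20.04 test --profile default --profile test

Here default is applied first and test last, so any keys defined in test override the defaults.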

There is a chance that the DHCP server for lxdbr0 is not working. If you perform the above two tests and report back, we will have a good idea whether the issue is with the DHCP server on lxdbr0 or with the profile content.

I have created the instance as instructed:

$ lxc launch ubuntu:20.04 test
$ lxc list
+--------------+---------+----------------------+------+-----------+-----------+
|     NAME     |  STATE  |         IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
+--------------+---------+----------------------+------+-----------+-----------+
| test         | RUNNING |                      |      | CONTAINER | 0         |
+--------------+---------+----------------------+------+-----------+-----------+

It seems the issue is with the DHCP server for lxdbr0.

This is good, in the sense that we narrowed down the issue.

Either LXD’s DHCP server (dnsmasq) for lxdbr0 crashed, or another DNS server on your host is listening on lxdbr0 and prevented LXD’s dnsmasq from binding to it on this boot.

To verify which one it is, show the output of the following command. It lists which processes are listening on port 53 (domain), and on which addresses.

sudo ss -pluna '( sport = :domain )'
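As an additional check (not strictly necessary), you can run the same query for the DHCP port, since dnsmasq should also be listening on UDP port 67:

$ sudo ss -pluna '( sport = :67 )'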

After further testing, it seems to be a firewall issue rather than DHCP. Since I’m using a pre-production server, the default policy for the INPUT chain was DROP. Temporarily setting it to ACCEPT lets the container obtain an IPv4 address successfully.

What is the specific source address used by the LXD managed DHCP server, so I can allow it on the INPUT chain?

Following is the requested output:

$ sudo ss -pluna '( sport = :domain )'
State     Recv-Q    Send-Q                  Local Address:Port       Peer Address:Port    Process                                       
UNCONN    0         0                          10.58.44.1:53              0.0.0.0:*        users:(("dnsmasq",pid=4042610,fd=8))         
UNCONN    0         0                       127.0.0.53%lo:53              0.0.0.0:*        users:(("systemd-resolve",pid=930,fd=12))    
UNCONN    0         0            [fd42:a3f6:e51e:35fa::1]:53                 [::]:*        users:(("dnsmasq",pid=4042610,fd=10)) 

I resolved the issue by adding a firewall rule on the INPUT chain for the lxdbr0 interface to ACCEPT traffic with source and destination ports 67 and 68 (DHCP).
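A sketch of such a rule with iptables (rule details are an assumption, adjust to your firewall tooling; DHCP clients send from UDP port 68 to port 67 on the server, so a single INPUT rule covers the requests):

$ sudo iptables -I INPUT -i lxdbr0 -p udp --sport 68 --dport 67 -j ACCEPT

Since dnsmasq on lxdbr0 also serves DNS, you may need matching ACCEPT rules for UDP and TCP port 53 if name resolution inside containers is also blocked.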