Booting container with bridge interface makes host unreachable

Hello, I was trying to set up a quick two-node cluster using two bare-metal installs of Ubuntu 20.04 when I ran into this issue.

I configured a bridge br0 on each node for the containers to use; so far everything works:

```yaml
network:
  version: 2
  ethernets:
    enp0s31f6:
      dhcp4: false
      dhcp6: false
  bridges:
    br0:
      dhcp4: true
      interfaces: [enp0s31f6]
```

The cluster is up and running, and I can create containers with internal IPs; everything works fine.
But then I created the following profile so that containers get one internal IP and one IP on my LAN:

```yaml
config:
  user.network-config: |
    version: 1
    config:
      - type: physical
        name: eth0
        subnets:
          - type: dhcp
            ipv4: true
      - type: physical
        name: eth1
        subnets:
          - type: dhcp
            ipv4: true
description: LXD profile with private and public networks
devices:
  eth0:
    name: eth0
    network: lxdfan0
    type: nic
  eth1:
    name: eth1
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: local
    type: disk
name: lan01
used_by:
- /1.0/instances/ubuntu01
```

The problem is that whenever I boot the container, it gets an IP fine, but the host it lives on becomes completely unreachable: I can't ping it, I can't SSH to it, and the other cluster node can't see it.

I’m kind of out of ideas…

So, after further investigation, it appears the host is getting a new IP (???) and the container never starts (`lxc start ubuntu01` hangs).

You have two NICs inside your instance, both with DHCP enabled. This means it's likely they will each receive a DHCP offer containing a different default gateway, and the system will then fight (with unreliable consequences) over which single default gateway to install.
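One way to avoid that fight (a sketch, assuming the profile's cloud-init config is switched to the netplan-style version 2 format, which `user.network-config` also accepts) is to keep DHCP on both NICs but tell the secondary one not to install the routes it is offered:

```yaml
# Hypothetical replacement for the profile's user.network-config:
# eth1 still gets a LAN address via DHCP, but ignores the offered
# routes, so only eth0's default gateway is installed.
version: 2
ethernets:
  eth0:
    dhcp4: true
  eth1:
    dhcp4: true
    dhcp4-overrides:
      use-routes: false
```

With this, the container keeps both addresses but has a single, predictable default route via eth0.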

I found out it was a bridge behaviour problem: a Linux bridge adopts the lowest MAC address among its attached interfaces, so the bridge's MAC can change when a container's veth attaches to it.
I added macaddress: 00:1a:3e:XX:XX:XX to the bridge config in netplan, and that seems to have fixed my issue (the first one, at least). Having dual DHCP in the container doesn't seem to cause any problems so far, but I'll keep it in mind! Thank you.
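For reference, the fix amounts to pinning the bridge's MAC in netplan so it no longer follows the lowest MAC of its attached ports, which also means the DHCP server keeps seeing the same client and the host keeps its lease (a sketch of the bridge config; the MAC is redacted as above):

```yaml
network:
  version: 2
  ethernets:
    enp0s31f6:
      dhcp4: false
      dhcp6: false
  bridges:
    br0:
      dhcp4: true
      interfaces: [enp0s31f6]
      # Pin the bridge MAC so attaching container veths can't change it
      macaddress: 00:1a:3e:XX:XX:XX
```

Apply with `sudo netplan apply` on each node.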