Problems creating bonded bridge

The container is running iptables:

sudo iptables-save

# Generated by iptables-save v1.6.1 on Mon Aug 23 13:46:43 2021
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [4109:896650]
-A INPUT -s 127.0.0.1/32 -j ACCEPT
-A INPUT -s 146.87.119.32/32 -j ACCEPT
-A INPUT -s 146.87.119.33/32 -j ACCEPT
-A INPUT -s 195.166.158.247/32 -j ACCEPT
-A INPUT -s 146.87.119.37/32 -p tcp -m tcp --dport 10050 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 389 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 80,443 -j ACCEPT
-A INPUT -i eth0 -m state --state INVALID,NEW -j DROP
-A OUTPUT -s 146.87.119.37/32 -p tcp -m tcp --dport 10051 -j ACCEPT
COMMIT
# Completed on Mon Aug 23 13:46:43 2021
# Generated by iptables-save v1.6.1 on Mon Aug 23 13:46:43 2021
*nat
:PREROUTING ACCEPT [1656:90374]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [347:26152]
:POSTROUTING ACCEPT [347:26152]
COMMIT
# Completed on Mon Aug 23 13:46:43 2021
# Generated by iptables-save v1.6.1 on Mon Aug 23 13:46:43 2021
*mangle
:PREROUTING ACCEPT [3575:624791]
:INPUT ACCEPT [1944:534146]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [4109:896650]
:POSTROUTING ACCEPT [4115:898012]
COMMIT
# Completed on Mon Aug 23 13:46:43 2021

Right well I would disable that firewall inside the container until you have it working.

Then you need to start looking at using tcpdump on the host listening on the br0 interface, and inside the container listening on the eth0 interface, to check whether the packets are A) leaving the container's interface and B) making it to the bridge.

I should note that this container was/is running fine with those exact iptables settings enabled, but within a Proxmox container, so I can safely say this is an LXD server (network profile) config error and disabling iptables will make no difference.

No, I've not been down the tcpdump route yet. Do my LXD profile config settings above look correct?

I'm a bit unsure about the lxc-net service, because I've successfully created a bridge with netplan and used it with LXD on my laptop before, and I didn't have to configure or run lxc-net.

Is it needed, or only in certain cases? There is no DHCP server on my LXD server's LAN, but that was also the case when I created a bridge on my laptop.

lxc-net is only used by liblxc; it's nothing to do with LXD and is not needed.

Try sudo tcpdump -i <bridge interface> -nn, and then in a separate window run a ping from the container to the host.

I'm pretty sure the problem is due to subnetting, so I need to change the address of the bridge to match the address of the container. I created a test container whose IP shared its first three octets with the bridge address, and it could access the net using my existing netplan bridge.
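That subnet reasoning can be checked with Python's stdlib ipaddress module. A quick sketch using the bridge addresses from the netplan config and a container address from the iptables rules earlier in this thread:

```python
import ipaddress

# Bridge addresses as defined in the netplan config
br0 = ipaddress.ip_interface("146.87.15.153/21")   # network 146.87.8.0/21
br1 = ipaddress.ip_interface("146.87.119.19/21")   # network 146.87.112.0/21

# A container address taken from the iptables rules above
container = ipaddress.ip_address("146.87.119.33")

# The container's address falls inside br1's /21, not br0's, so a container
# with this address attached to br0 would be outside the bridge's subnet
print(container in br0.network)  # False
print(container in br1.network)  # True
```

Note that with a /21 the "first three octets match" test is only an approximation: 146.87.8.0/21 spans third octets 8 through 15, so the mask, not the octet, is what actually decides membership.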

This got me thinking, would I be better off using macvlan? What are the advantages of using a bridge over macvlan? Can macvlan be used with bonds? Can macvlan profiles use different subnetting to that of the main connection?

I could do with some tips on creating a second bridge. I have had one bridge and one bond working with only one of each defined, but no containers attached to the second bridge/bond can access the net when I expand that config to two bonds and two bridges.

I have not been able to find any examples of creating a second bridge under netplan. I presume the second bridge doesn't require defining a gateway or DNS - that's how it worked with ifupdown:

network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: no
      dhcp6: no
    eno2:
      dhcp4: no
      dhcp6: no
    eno3:
      dhcp4: no
      dhcp6: no
    eno4:
      dhcp4: no
      dhcp6: no
  bonds:
    bond0:
      interfaces:
      - eno1
      - eno2
      parameters:
        lacp-rate: fast
        mode: active-backup
        transmit-hash-policy: layer2+3
    bond1:
      interfaces:
      - eno3
      - eno4
      parameters:
        lacp-rate: fast
        mode: active-backup
        transmit-hash-policy: layer2+3
  bridges:
    br0:
      interfaces: [bond0]
      dhcp4: no
      dhcp6: no
      addresses:
        - 146.87.15.153/21
      gateway4: 146.87.15.1
      nameservers:
        addresses:
          - 146.87.174.121
          - 146.87.174.122
    br1:
      interfaces: [bond1]
      dhcp4: no
      dhcp6: no
      addresses:
        - 146.87.119.19/21

I created a second LXD profile that uses br1 and assigned a couple of containers to it, but I failed to get internet access working for them. I tried using both bridge addresses as the gateway value, but neither worked.
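One thing worth checking here (a sketch, not a diagnosis): a default gateway has to sit inside the interface's own subnet, so neither bridge address from the br0 range can work as a gateway for containers on br1. Using the subnets shown by ip r below:

```python
import ipaddress

# br1's subnet as reported by `ip r` on the host
br1_net = ipaddress.ip_network("146.87.112.0/21")

# br0's gateway from the netplan config lies outside br1's subnet,
# so br1 containers can't use it as a default gateway
print(ipaddress.ip_address("146.87.15.1") in br1_net)   # False

# br1's own bridge address is inside the subnet, but it only works as a
# gateway if the host itself forwards/NATs for the containers, which a
# plain bridged setup like this doesn't do
print(ipaddress.ip_address("146.87.119.19") in br1_net)  # True
```

In other words, the br1 containers would need a router on the 146.87.112.0/21 network as their gateway; what that address is isn't shown anywhere in this thread.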

ip a on LXD host:

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 3a:01:50:a7:e6:1d brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 3a:01:50:a7:e6:1d brd ff:ff:ff:ff:ff:ff
4: eno3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
    link/ether 06:33:3b:5a:c1:07 brd ff:ff:ff:ff:ff:ff
5: eno4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
    link/ether 06:33:3b:5a:c1:07 brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
    link/ether 3a:01:50:a7:e6:1d brd ff:ff:ff:ff:ff:ff
7: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master br1 state UP group default qlen 1000
    link/ether 06:33:3b:5a:c1:07 brd ff:ff:ff:ff:ff:ff
8: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 3a:01:50:a7:e6:1d brd ff:ff:ff:ff:ff:ff
    inet 146.87.15.153/21 brd 146.87.15.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::3801:50ff:fea7:e61d/64 scope link 
       valid_lft forever preferred_lft forever
9: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 06:33:3b:5a:c1:07 brd ff:ff:ff:ff:ff:ff
    inet 146.87.119.19/21 brd 146.87.119.255 scope global br1
       valid_lft forever preferred_lft forever
    inet6 fe80::433:3bff:fe5a:c107/64 scope link 
       valid_lft forever preferred_lft forever
10: idrac: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 18:66:da:af:c3:b0 brd ff:ff:ff:ff:ff:ff
48: vethecd5bc8d@if47: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br1 state UP group default qlen 1000
    link/ether 92:d7:11:53:14:81 brd ff:ff:ff:ff:ff:ff link-netnsid 0
50: veth7cdc71f1@if49: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br1 state UP group default qlen 1000
    link/ether a6:a8:25:0c:22:5d brd ff:ff:ff:ff:ff:ff link-netnsid 0
$ ip r
default via 146.87.15.1 dev br0 proto static 
146.87.8.0/21 dev br0 proto kernel scope link src 146.87.15.153 
146.87.112.0/21 dev br1 proto kernel scope link src 146.87.119.19
$ lxc profile show br1
config: {}
description: Default LXD profile
devices:
  br1:
    nictype: bridged
    parent: br1
    type: nic
  eth0:
    name: eth0
    nictype: bridged
    parent: br1
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: br1
used_by:
- /1.0/instances/hermes
- /1.0/instances/ubuntuone