Problems creating bonded bridge

I am tracking the 4.0/stable/ubuntu-20.04 LXD snap. My LXD server, which is running on a bare-metal install of 20.04 Server, has internet access, but the LXD container doesn’t (I want it to), and I have been unable to ping my running container from the LXD host. I want my container to be accessible on the same LAN as its host via a bridge, and I want to bond the four gigabit Ethernet connections of the LXD server and use that bond for the LXD bridge.

I am aware of macvlan profiles, but I would like to know how to get a bridge working because that seems like a better solution in the long run as I add extra containers. I know lxd init can create a bridge for you, but my understanding is that's no use if you want to access your container from your LAN.

I’m not running iptables or ufw on the LXD server and I have enabled IPv4 forwarding. I have tried with both netplan and NetworkManager, and whilst I can get a bonded connection working, I don’t seem to be able to create a working bridge that uses a bond. I would prefer to use NetworkManager to do this if possible; I’m really not a fan of netplan’s YAML config files, and it doesn’t have proper alternatives to nmtui and nmcli - the same goes for networkd.
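(For reference, the usual way to confirm both of those on the host is something like:)

$ sudo iptables -S             # should show only the default ACCEPT policies
$ sudo ufw status              # should report "Status: inactive"
$ sysctl net.ipv4.ip_forward   # should report "net.ipv4.ip_forward = 1"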

I have read the notes on bridges, systemd-networkd and DNS on the networking page of the LXD docs and tried creating a systemd unit, but I’m not sure my bridge was correctly configured, because when I ran brctl show it wasn’t showing a device (like bond0) in the interfaces column. That seemed like a red flag to me.
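(For reference, the checks I mean are roughly these - on a correctly built bridge-over-bond, bond0 should show up as enslaved to br0:)

$ brctl show br0               # bond0 should appear in the interfaces column
$ ip link show bond0           # should include "master br0"
$ bridge link show dev bond0   # should also report "master br0"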

$ lxc profile show default
config: {}
description: Default LXD profile
devices:
  br0:
    nictype: bridged
    parent: br0
    type: nic
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
- /1.0/instances/hermes


$ lxc config show hermes
architecture: i686
config:
  image.description: Ubuntu 18.04 LTS Intel 32bit
  image.os: Ubuntu
  image.release: bionic 18.04
  volatile.base_image: 51a1b0053632c41f0a7d8d5cb24050665170dbf3a98e995922183ab743a84314
  volatile.br0.hwaddr: 00:16:3e:67:e7:3f
  volatile.br0.name: eth1
  volatile.eth0.hwaddr: 00:16:3e:7f:35:db
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: STOPPED
  volatile.uuid: 75a90036-c157-4706-9bfd-55fd39bdad82
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
ephemeral: false
profiles:
- default
stateful: false
description: ""

Here’s my attempt at creating a bridge with netplan:

network:
  bridges:
    br0:
      addresses:
      - 146.87.15.153/24
      dhcp4: false
      gateway4: 146.87.15.1
      nameservers:
        addresses:
        - 146.87.174.121
        - 146.87.174.122
        search:
        - domainname
      interfaces:
      - bond0
  bonds:
    bond0:
      interfaces:
      - eno1
      - eno2
      - eno3
      - eno4
      parameters:
        lacp-rate: fast
        mode: active-backup
        transmit-hash-policy: layer2+3
  ethernets:
    eno1: {}
    eno2: {}
    eno3: {}
    eno4: {}
  version: 2

That creates a working bond but the bridge doesn’t seem to work.
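(Side note for anyone reproducing this: sudo netplan try is the safer way to test a config like this over SSH, since it rolls the change back automatically unless you confirm it, and networkctl can then show how networkd sees the resulting interfaces:)

$ sudo netplan try          # reverts after a timeout unless confirmed
$ sudo netplan apply
$ networkctl status br0
$ networkctl status bond0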

I was trying stuff like this with nmcli:

nmcli connection add type bond con-name Bond0 ifname bond0 bond.options "mode=active-backup,miimon=100"
nmcli connection add type ethernet con-name Slave2 ifname eno2 master bond0 slave-type bond
nmcli connection add type ethernet con-name Slave3 ifname eno3 master bond0 slave-type bond
nmcli connection add type ethernet con-name Slave4 ifname eno4 master bond0 slave-type bond
nmcli connection add type bridge con-name Bridge0 ifname br0 ip4 146.87.15.153/24

Then I used nmtui to add the gateway and DNS for the bridge. I cannot SSH into the machine when I give the IP to the bridge, even though nmcli con show gives br0 a green (active) status, but internet and SSH work fine if I just create a bond with nmtui.
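Looking back at those commands, I notice I never actually attached bond0 to br0 - presumably something like the following is the missing step, though I haven't verified it (Bond0 and Bridge0 are the connection names created above):

$ nmcli connection modify Bond0 connection.master br0 connection.slave-type bridge
$ nmcli connection up Bridge0
$ nmcli connection up Bond0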

The container has a static IP configured with ifupdown; its gateway is set to the IP of br0 on the LXD host, and there is no DHCP server on the LAN of the LXD host.

Please let me know what else you need to know to troubleshoot this - or maybe I will just have to use macvlan?

Please show the output of ip a and ip r on the host.

Before I do that, could you outline how you think I should create the bridge? Do you know the correct nmcli commands to create one? Do the ones I used above look correct? They’re clearly not, as I could not use that bridge at all.

I’m presuming NetworkManager under 20.04 should be a viable option, right? As I said, I would prefer using it over netplan.

I could recreate the bridge on top of the bond with NetworkManager and then give you the ip output, but when I did it that way the bridge connection was highlighted in yellow by nmcli con show, which tells me it wasn’t configured correctly. That happened when I created a bond (with a static IP) and then created a bridge using that bond.

Yes, use netplan; if your host’s interface is working then the bond and bridge are working (assuming I can see from the output of ip a on the host that it’s set up correctly).

I don’t really know about NetworkManager so can’t personally recommend it.

I gave up on netplan because I failed to modify my config above so that I could adjust parameters of the bridge such as STP and forward delay. I have had no such problems with NetworkManager. netplan apply seems to accept non-working configs.

Because of this I am reluctant to return to netplan. There isn’t a proper alternative to nmtui or nmcli for netplan that can help me format its YAML files, is there?

How would you adjust my example netplan config, if at all? The bridge should work without having to enable or disable STP or adjust the forward delay, I presume? Even if I wanted to change those options, though, I couldn't work out how.
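(Edit: for the record, the netplan reference does document a parameters block under bridges, so presumably something like this - untested - is the syntax I was missing; only the relevant part is shown:)

network:
  version: 2
  bridges:
    br0:
      interfaces: [bond0]
      parameters:
        stp: false
        forward-delay: 4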

If I were doing this I’d try and break it down into smaller steps to identify where the fault is.

Such as:

  1. Create a netplan or NM bridge and connect a single Ethernet port to it - no bond.
  2. Create a container and connect it to the bridge - check that internet works on both the container and the host.

If that works, then introduce the bond - it may be the cause of the issue, or it may have nothing to do with it.

Good plan Tom! Will get back to you when I’ve tested without the bond.

I have worked out what should be a complete and working set of nmcli commands to create a bridge, and it all looks good now according to nmcli con show and brctl show, but my container still fails to ping 1.1.1.1 etc. I am only using one Ethernet connection, not a bond.

Here are the commands I used to create the bridge:

$ sudo nmcli con add ifname br0 type bridge con-name br0
$ sudo nmcli con add type bridge-slave ifname eno2 master br0
$ sudo nmcli connection modify br0 ipv4.addresses '146.87.15.153/24'
$ sudo nmcli connection modify br0 ipv4.gateway '146.87.15.1'
$ sudo nmcli connection modify br0 ipv4.dns '146.87.174.121 146.87.174.122'
$ sudo nmcli connection modify br0 ipv4.dns-search 'cs.salford.ac.uk'
$ sudo nmcli connection modify br0 ipv4.method manual
$ sudo nmcli con up br0

Bridge and connections - the br0 line is green, indicating it is active; inactive connections are printed in yellow by nmcli:

$ nmcli con show
NAME                   UUID                                  TYPE      DEVICE 
Ethernet connection 2  5869c90d-d03e-480e-a8f0-f30661b4424d  ethernet  eno2   
br0                    fd258ec9-a444-49ed-81ae-46129afd7747  bridge    br0    
bridge-slave-eno2      5cf28fc6-e28e-4070-b3f2-361868b34871  ethernet  --     
sgs548@dionysus:~$ brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.3aa149fe4224       yes             veth24369f0a
                                                        veth7d7552de

ip a:

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 18:66:da:af:c3:ab brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 18:66:da:af:c3:ac brd ff:ff:ff:ff:ff:ff
inet 146.87.15.153/32 scope global noprefixroute eno2
valid_lft forever preferred_lft forever
inet6 fe80::a696:4aed:834b:35a4/64 scope link noprefixroute
valid_lft forever preferred_lft forever
4: eno3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 18:66:da:af:c3:ad brd ff:ff:ff:ff:ff:ff
5: eno4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 18:66:da:af:c3:ae brd ff:ff:ff:ff:ff:ff
6: idrac: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
link/ether 18:66:da:af:c3:b0 brd ff:ff:ff:ff:ff:ff
9: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 3a:a1:49:fe:42:24 brd ff:ff:ff:ff:ff:ff
inet 146.87.15.153/24 brd 146.87.15.255 scope global noprefixroute br0
valid_lft forever preferred_lft forever
11: veth7d7552de@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
link/ether 3a:a1:49:fe:42:24 brd ff:ff:ff:ff:ff:ff link-netnsid 0
13: veth24369f0a@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
link/ether b6:94:c5:b7:f2:32 brd ff:ff:ff:ff:ff:ff link-netnsid 0

Thanks Tom!

The IP on eno2 should not be there; try removing that.

I think that was worth trying, because this guide (yeah I know, CentOS 8) says I should be OK to delete the Ethernet connection with the static IP after creating and bringing up the bridge, but if I disable or entirely delete that Ethernet connection, I can no longer SSH into the server.

NetworkManager has 5 modes for Ethernet connections:

Disable
Automatic
Link-local
Manual
Shared

It seems like I might have to speak to the NM devs to see if they’re aware of any issues with creating bridges under 20.04?
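One thing I haven't tried yet: disabling autoconnect on the old profile and downing it from out-of-band console access rather than over SSH, so the bridge-slave profile can take over the port - something like:

$ sudo nmcli connection modify "Ethernet connection 2" connection.autoconnect no
$ sudo nmcli connection down "Ethernet connection 2"
$ sudo nmcli connection up bridge-slave-eno2
$ sudo nmcli connection up br0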

Yes, setting up a bridge usually requires out-of-band access (or confidence that the script you’re running will complete successfully).

Either way that IP certainly cannot be on eno2.

I personally would use netplan for now, until you’ve proved the issue isn’t the bridge, and then you can move on to the bond.

I have reverted to using netplan with the config shown in the first post, with a bond and a bridge. I have no internet access in my container.

$ brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.326f2e7efdd1       no              bond0
                                                        veth1856cb2d
                                                        vethc014be0d

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether 3a:01:50:a7:e6:1d brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether 3a:01:50:a7:e6:1d brd ff:ff:ff:ff:ff:ff
4: eno3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether 3a:01:50:a7:e6:1d brd ff:ff:ff:ff:ff:ff
5: eno4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether 3a:01:50:a7:e6:1d brd ff:ff:ff:ff:ff:ff
6: idrac: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
link/ether 18:66:da:af:c3:b0 brd ff:ff:ff:ff:ff:ff
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
link/ether 3a:01:50:a7:e6:1d brd ff:ff:ff:ff:ff:ff
9: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 32:6f:2e:7e:fd:d1 brd ff:ff:ff:ff:ff:ff
inet 146.87.15.153/24 brd 146.87.15.255 scope global br0
valid_lft forever preferred_lft forever
inet6 fe80::e072:53ff:feb9:eca7/64 scope link
valid_lft forever preferred_lft forever
11: vethc014be0d@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
link/ether 4e:e3:e7:9a:10:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
13: veth1856cb2d@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
link/ether 32:6f:2e:7e:fd:d1 brd ff:ff:ff:ff:ff:ff link-netnsid 0

“with a bond”? I thought we were going to try it without a bond first?

I thought it might be of some use for you to see the output from the non-working bond config?

Busy Friday!

I have created a bridge with netplan without a bond, yet it’s the same situation as with all my attempted configs - no connectivity to or from the container.

network:
  version: 2
  renderer: networkd
  ethernets:
    eno2:
      dhcp4: no
      dhcp6: no
  bridges:
    br0:
      interfaces: [eno2]
      dhcp4: no
      dhcp6: no
      addresses:
        - 146.87.15.153/24
      routes:
        - on-link: true
          to: 0.0.0.0/0
          via: 146.87.15.1
      nameservers:
        addresses:
          - 146.87.174.121
          - 146.87.174.122

ip:

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 18:66:da:af:c3:ab brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP group default qlen 1000
    link/ether 18:66:da:af:c3:ac brd ff:ff:ff:ff:ff:ff
4: eno3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 18:66:da:af:c3:ad brd ff:ff:ff:ff:ff:ff
5: eno4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 18:66:da:af:c3:ae brd ff:ff:ff:ff:ff:ff
6: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 18:66:da:af:c3:ac brd ff:ff:ff:ff:ff:ff
    inet 146.87.15.153/24 brd 146.87.15.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::1a66:daff:feaf:c3ac/64 scope link 
       valid_lft forever preferred_lft forever
7: idrac: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 18:66:da:af:c3:b0 brd ff:ff:ff:ff:ff:ff
9: veth6685ffe6@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
    link/ether ca:1e:e0:16:75:42 brd ff:ff:ff:ff:ff:ff link-netnsid 0
11: veth8ce33545@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
    link/ether 6e:a9:fa:1d:40:5f brd ff:ff:ff:ff:ff:ff link-netnsid 0

OK thanks.

Can you ping the host from the container?

Also please can you provide ip a and ip r from inside the container, and ip r on the host for completeness.

Also please provide the output of sudo iptables-save and sudo nft list ruleset, if that is available.

No, I cannot ping the LXD server from the container.

ip a in container:

ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
8: eth1@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:16:3e:67:e7:3f brd ff:ff:ff:ff:ff:ff link-netnsid 0
10: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:16:3e:7f:35:db brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 146.87.119.33/21 brd 146.87.119.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::216:3eff:fe7f:35db/64 scope link
valid_lft forever preferred_lft forever

ip r from container:

ip r

default via 146.87.15.153 dev eth0 onlink
146.87.112.0/21 dev eth0 proto kernel scope link src 146.87.119.33

ip a LXD:

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 18:66:da:af:c3:ab brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP group default qlen 1000
link/ether 18:66:da:af:c3:ac brd ff:ff:ff:ff:ff:ff
4: eno3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 18:66:da:af:c3:ad brd ff:ff:ff:ff:ff:ff
5: eno4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 18:66:da:af:c3:ae brd ff:ff:ff:ff:ff:ff
6: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 18:66:da:af:c3:ac brd ff:ff:ff:ff:ff:ff
inet 146.87.15.153/24 brd 146.87.15.255 scope global br0
valid_lft forever preferred_lft forever
inet6 fe80::1a66:daff:feaf:c3ac/64 scope link
valid_lft forever preferred_lft forever
7: idrac: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 18:66:da:af:c3:b0 brd ff:ff:ff:ff:ff:ff
9: veth6685ffe6@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
link/ether ca:1e:e0:16:75:42 brd ff:ff:ff:ff:ff:ff link-netnsid 0
11: veth8ce33545@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
link/ether 6e:a9:fa:1d:40:5f brd ff:ff:ff:ff:ff:ff link-netnsid 0

ip r LXD:

$ ip r
default via 146.87.15.1 dev br0 proto static onlink
146.87.15.0/24 dev br0 proto kernel scope link src 146.87.15.153

$ sudo iptables-save
# Generated by iptables-save v1.8.4 on Mon Aug 23 10:32:19 2021
*raw
:PREROUTING ACCEPT [210387:43544189]
:OUTPUT ACCEPT [18668:1051264]
COMMIT
# Completed on Mon Aug 23 10:32:19 2021
# Generated by iptables-save v1.8.4 on Mon Aug 23 10:32:19 2021
*mangle
:PREROUTING ACCEPT [210387:43544189]
:INPUT ACCEPT [109405:34451473]
:FORWARD ACCEPT [86731:6722133]
:OUTPUT ACCEPT [18668:1051264]
:POSTROUTING ACCEPT [112863:8012245]
COMMIT
# Completed on Mon Aug 23 10:32:19 2021
# Generated by iptables-save v1.8.4 on Mon Aug 23 10:32:19 2021
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
COMMIT
# Completed on Mon Aug 23 10:32:19 2021
# Generated by iptables-save v1.8.4 on Mon Aug 23 10:32:19 2021
*filter
:INPUT ACCEPT [109405:34451473]
:FORWARD ACCEPT [86731:6722133]
:OUTPUT ACCEPT [18668:1051264]
COMMIT
# Completed on Mon Aug 23 10:32:19 2021

I see the problem, and I’m confused by your network setup.

Why is the default gateway inside the container pointing at the LXD host (146.87.15.153)?
For internet access it’s going to need to be the same as the host’s, i.e. 146.87.15.1 - otherwise there’s little benefit in using a manual bridge.

But that would only prevent your container from reaching the internet; it should still allow access between the container and the host.

The reason this isn’t working is that your container and your host are on different subnets, so they are not reachable from one another at the L2 layer.

The container is in the 146.87.112.0/21 subnet and the host is in the 146.87.15.0/24 subnet.
This means your host can only reach IPs at L2 in the range 146.87.15.1-146.87.15.254, and the container’s IP of 146.87.119.33 is not in that range.

However, the container can still send packets to the host: its default route is flagged onlink via 146.87.15.153, so it will ARP for the host directly even though that address is outside 146.87.112.0/21 (which only covers 146.87.112.0-146.87.119.255). The host, on the other hand, has no route back to the container; it will most likely send the replies to your upstream router, as that is its default gateway, and the router will just drop them.

So remember that when using a bridge onto the external network, you need to treat the host and the container as if they were plugged into the same Ethernet switch (which is what br0 is), operating at L2:

  1. Make sure the container doesn’t use the LXD host’s IP as its default gateway; in most cases it should use the same gateway as the LXD host.
  2. Ensure that the container and the LXD host are in the same subnet so they can communicate bidirectionally (see the quick test sketched below).
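For example, assuming 146.87.15.154 is a free address on your /24 (adjust to whatever is actually free on your LAN), you could temporarily set this up from inside the container and check whether the host and the gateway become reachable:

ip addr flush dev eth0
ip addr add 146.87.15.154/24 dev eth0
ip route replace default via 146.87.15.1
ping -c 3 146.87.15.153   # the LXD host's br0
ping -c 3 146.87.15.1     # the upstream gateway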

I have changed the gateway of the container to match the LXD host’s and switched them both to /21, but I still cannot ping the LXD server from the container or reach the internet from it.

ip a LXD:

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 18:66:da:af:c3:ab brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP group default qlen 1000
link/ether 18:66:da:af:c3:ac brd ff:ff:ff:ff:ff:ff
4: eno3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 18:66:da:af:c3:ad brd ff:ff:ff:ff:ff:ff
5: eno4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 18:66:da:af:c3:ae brd ff:ff:ff:ff:ff:ff
6: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 18:66:da:af:c3:ac brd ff:ff:ff:ff:ff:ff
inet 146.87.15.153/21 brd 146.87.15.255 scope global br0
valid_lft forever preferred_lft forever
inet6 fe80::e011:5ff:fe7c:5552/64 scope link
valid_lft forever preferred_lft forever
7: idrac: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 18:66:da:af:c3:b0 brd ff:ff:ff:ff:ff:ff
9: veth5b4a5605@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
link/ether be:48:08:aa:ee:c5 brd ff:ff:ff:ff:ff:ff link-netnsid 0
11: vetha227d947@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
link/ether 52:f1:c2:71:13:c3 brd ff:ff:ff:ff:ff:ff link-netnsid 0

ip r LXD

$ ip r
default via 146.87.15.1 dev br0 proto static onlink
146.87.8.0/21 dev br0 proto kernel scope link src 146.87.15.153

ip a container:

ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
8: eth1@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:16:3e:67:e7:3f brd ff:ff:ff:ff:ff:ff link-netnsid 0
10: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:16:3e:7f:35:db brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 146.87.119.33/21 brd 146.87.119.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::216:3eff:fe7f:35db/64 scope link
valid_lft forever preferred_lft forever

ip r container:

ip r

default via 146.87.15.1 dev eth0 onlink
146.87.112.0/21 dev eth0 proto kernel scope link src 146.87.119.33