Cannot access container on new managed bridge network - which host interface is it using?

Hi, I initially configured the bridge br0 manually and use it as an unmanaged network for my LXD containers. Now I want to add an additional network with DHCP and DNS on the same IP space, so that I can create containers and have their IP addresses assigned by LXD.

Short summary: the network is 192.168.1.0/24, the host is 192.168.1.200/24 on br0, and br0 uses the host interface eno1. Networking for the host and the existing containers works fine.

The new managed network is 192.168.1.201/24 (192.168.1.201 is the lxdbr1 interface on the host).

lxc network create lxdbr1
lxc network list
lxc network set lxdbr1 ipv4.address 192.168.1.201/24
lxc network set lxdbr1 ipv4.dhcp.ranges 192.168.1.101-192.168.1.199
lxc network set lxdbr1 dns.domain sitex.example.com
lxc network set lxdbr1 ipv4.dhcp.gateway 192.168.1.1 # another server on the network
lxc network set lxdbr1 ipv6.address none
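
For reference, the resulting settings can be inspected afterwards with commands like these:

lxc network show lxdbr1
lxc network get lxdbr1 ipv4.dhcp.ranges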

My new profile config; I tried both of the following, but no luck:

config: {}
description: ""
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr1
    type: nic
  root:
    path: /
    pool: local
    type: disk

And with the recommendation from the “About networking” page of the LXD documentation:

config: {}
description: ""
devices:
  eth0:
    name: eth0
    network: lxdbr1
    type: nic
  root:
    path: /
    pool: local
    type: disk
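
For context, I applied the profile and launched the container with commands along these lines (the profile name dev-net and the image alias are placeholders, not necessarily what I actually used):

lxc profile create dev-net
lxc profile edit dev-net < profile.yaml   # paste in the YAML above
lxc launch images:debian/11 mycont4 --profile dev-net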

The container is created and an IP gets assigned from the DHCP pool range:
mycont4 | RUNNING | 192.168.1.125 (eth0)

I cannot ping or access it from the host. I can lxc exec into the container, but from inside it I cannot access or ping anything.
I tried setting ipv4.nat from true to false and restarted the container, but no luck.
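
For reference, that attempt was roughly:

lxc network set lxdbr1 ipv4.nat false
lxc restart mycont4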

No firewall rules are configured.

Update:
When I set up br0, I configured it to use eno1 as its parent. How does lxdbr1 work? Which host interface is it using?

The problem is that 192.168.1.201/24 is in the same subnet as 192.168.1.0/24.
This is going to result in both your br0 and lxdbr1 interfaces creating duplicate routes to 192.168.1.0/24 on your system, which will wreak havoc with IP communication on those networks.
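
You can see the conflict on the host with ip route; with both bridges configured in the same subnet, the routing table ends up with something like this (illustrative output):

ip route show
192.168.1.0/24 dev br0 proto kernel scope link src 192.168.1.200
192.168.1.0/24 dev lxdbr1 proto kernel scope link src 192.168.1.201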

You will need to use a different subnet for lxdbr1.

Thanks. Does that mean it would be better to have my host connected to two different subnets?

For example, the eno2 interface connected to a management subnet 192.168.51.0/24,
and eno1 connected to my dev/prod/etc. network 192.168.1.0/24, where lxdbr1 would be configured?

Can I not just skip the management subnet and configure lxdbr1 on a physical interface?

And that is another thing I could not figure out: how does lxdbr0 (the managed network created by lxd init) know which interface to use as its parent?

I simply want to configure an LXD host for a dev environment that runs in an existing subnet, so that LXD can assign IP addresses for me out of its internal DHCP. Ideally I want a DNS zone too, but that is a different topic.

Otherwise I have two options here:

  1. Reconfigure my DHCP server to assign IP addresses to the containers. This will not work because I have MAC filtering/allowlisting in place for security reasons.
  2. Change the “development” subnet, which will require additional firewall rules and routing changes in various network devices, including the VPN.

I was previously able to configure LXD to use the existing network/subnet, but that was not an LXD-managed network, so I had to pick and assign the IP addresses myself.

Maybe I need to use an OVN network (see the “OVN network” page of the LXD documentation).
I will give it a try.

OVN is not available out of the box in my Debian 11. I can probably set it up somehow, but it is looking less likely.

I thought of another option: set up a DHCP server in a container and serve IPs from it to my containers over an eth1 interface, with eth0 reserved for my network. But the Debian 11 images from the LXD image servers do not have a config file for eth1 in /etc/systemd/network/, so eth1 does not get an IP from the LXD system. Even if I configure the LXD-managed network on eth0, I may hit the same issue on eth1 when configuring its IP with my own dhcpd server.
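
Something like the following would need to be added inside the container for eth1 to come up (a sketch; the file name and the choice of DHCP are my assumptions, not something shipped in the stock image):

# /etc/systemd/network/eth1.network
[Match]
Name=eth1

[Network]
DHCP=ipv4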

I could create my own Debian image with all the necessary interface config files present.

Am I misunderstanding this? Assigning an IP address out of the box seems like a trivial task for a container system, especially since the functionality is there, but I cannot make it work, and the other methods do not seem straightforward. I will probably end up configuring IP addresses manually.

You can’t really do this. If you create a managed bridge using lxc network create that uses the same IP subnet as the existing external network on br0 then this will make the external network unreachable from inside the containers (and equally the containers unreachable from the external network).

I think what you’re saying is that you want LXD to provide automatic IP configuration for containers, but keep them in the external subnet (connected to br0).

There is the bridge.external_interfaces setting (see the bridge network options in the LXD documentation) that may be of use here. In this case you would remove the br0 bridge and then run lxc network set lxdbr0 bridge.external_interfaces=<external interface> for the interface currently connected to br0.

But you need to be careful doing this, and make sure there isn’t an existing DHCP/IPv6 RA server on the external subnet, otherwise LXD won’t know what IPs are in use on the external network and may end up inadvertently handing out addresses/routes that conflict with existing usage on the external network.
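
As a rough sketch only, using the addresses from earlier in this thread (the exact values are assumptions, and br0 would need to be removed from the host network configuration first):

lxc network set lxdbr0 ipv4.address 192.168.1.201/24
lxc network set lxdbr0 ipv4.nat false
lxc network set lxdbr0 ipv4.dhcp.ranges 192.168.1.101-192.168.1.199
lxc network set lxdbr0 bridge.external_interfaces=eno1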

Thanks. I do not know how I tried this before, but it did not work.

Thanks for the advice about DHCP; I will use a DHCP range in LXD that is not in use on my existing DHCP server.

I finally took your advice and here is what I did:

My host has four physical network interfaces.
eno2 is configured on a separate management network, where I can SSH in and do admin work.
eno1 does not have any IP configured; it is connected to the network where I need IP addresses to be assigned automatically.
lxc network set lxdbr0 bridge.external_interfaces=eno1
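
For anyone following along, the final state can be checked with, for example:

lxc network show lxdbr0
lxc list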

It now works for me. I think I could disconnect eno2, but it is good to have a backup. Thanks for the help.
