LXD 2 bridges communication

Hi, what would be the way to make containers on different LXD bridges communicate with each other?


Assuming you have the ipv4.routing property enabled (which is the default if not set to false) on the managed LXD networks then you should be able to communicate fine between networks and to the host.
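For example, you could check the relevant settings like this (lxdbr0 and lxdbr1 are just example network names; an empty value means the key is unset and the default applies):

lxc network get lxdbr0 ipv4.routing
lxc network get lxdbr1 ipv4.routing

And confirm the host has IP forwarding enabled:

sysctl net.ipv4.ip_forward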

Have you used tcpdump or equivalent to do some diagnostics as to where the packets are going missing?
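For example, capturing on each bridge while running the ping (bridge names assumed to be lxdbr0 and lxdbr1) should show where the ICMP packets stop:

sudo tcpdump -ni lxdbr0 icmp
sudo tcpdump -ni lxdbr1 icmp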

Just to make myself clear, what I mean is for a container on 16.15.14.1/24 to communicate with another container on 19.18.17.1/24 without having both containers connected to both bridges.

Yes I understand your meaning.

Here’s an example:

lxdbr0:

 lxc network show lxdbr0
config:
  ipv4.address: 10.238.31.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:be3f:a937:9505::1/64
  ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge

Create lxdbr1:

lxc network create lxdbr1
lxc network show lxdbr1
config:
  ipv4.address: 10.0.171.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:fcce:9b83:b04b::1/64
  ipv6.nat: "true"
description: ""
name: lxdbr1
type: bridge

Create container connected to lxdbr0:

lxc launch ubuntu:18.04 c1

Create container connected to lxdbr1:

lxc init ubuntu:18.04 c2
lxc config device override c2 eth0 network=lxdbr1
lxc start c2
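To confirm the override took effect, this should show eth0 now pointing at lxdbr1:

lxc config device show c2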

Check IPs:

lxc ls
+------+---------+------------------------+-------------------------------------------------+-----------------+-----------+
| NAME |  STATE  |          IPV4          |                      IPV6                       |      TYPE       | SNAPSHOTS |
+------+---------+------------------------+-------------------------------------------------+-----------------+-----------+
| c1   | RUNNING | 10.238.31.247 (eth0)   | fd42:be3f:a937:9505:216:3eff:fe1e:9c6a (eth0)   | CONTAINER       | 0         |
+------+---------+------------------------+-------------------------------------------------+-----------------+-----------+
| c2   | RUNNING | 10.0.171.173 (eth0)    | fd42:fcce:9b83:b04b:216:3eff:fefd:7877 (eth0)   | CONTAINER       | 0         |
+------+---------+------------------------+-------------------------------------------------+-----------------+-----------+

Check ping:

lxc exec c1 -- ping  10.0.171.173 -c 5
PING 10.0.171.173 (10.0.171.173) 56(84) bytes of data.
64 bytes from 10.0.171.173: icmp_seq=1 ttl=63 time=0.069 ms
64 bytes from 10.0.171.173: icmp_seq=2 ttl=63 time=0.128 ms
64 bytes from 10.0.171.173: icmp_seq=3 ttl=63 time=0.128 ms
64 bytes from 10.0.171.173: icmp_seq=4 ttl=63 time=0.121 ms
64 bytes from 10.0.171.173: icmp_seq=5 ttl=63 time=0.095 ms

--- 10.0.171.173 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4073ms
rtt min/avg/max/mdev = 0.069/0.108/0.128/0.024 ms
lxc exec c2 -- ping 10.238.31.247 -c 5
PING 10.238.31.247 (10.238.31.247) 56(84) bytes of data.
64 bytes from 10.238.31.247: icmp_seq=1 ttl=63 time=0.074 ms
64 bytes from 10.238.31.247: icmp_seq=2 ttl=63 time=0.123 ms
64 bytes from 10.238.31.247: icmp_seq=3 ttl=63 time=0.126 ms
64 bytes from 10.238.31.247: icmp_seq=4 ttl=63 time=0.127 ms
64 bytes from 10.238.31.247: icmp_seq=5 ttl=63 time=0.117 ms

--- 10.238.31.247 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4102ms
rtt min/avg/max/mdev = 0.074/0.113/0.127/0.022 ms

One thing to remember though is that the source address of the packets will appear to be coming from the ipv4.nat.address of the outgoing bridge. Stateful tracking in iptables should ensure the return packets get back to the container though. However, if you want to preserve the original source address, then you will need to disable NAT on the bridges and add some specific iptables rules so that NAT is only applied on the external interface.
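If you want to see this for yourself, one way (just a sketch using the containers above) is to capture on the destination bridge in one terminal while pinging from the other container in another, and look at the source address the packets arrive with:

sudo tcpdump -ni lxdbr1 icmp
lxc exec c1 -- ping 10.0.171.173 -c 2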

Thank you.
I do have a different NAT source address for each bridge. I think for now I will just connect the container to both bridges to solve the issue without iptables.
EDIT: This is actually an issue now: with two eth devices, each with a different static NAT IP, the container always picks the host.
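For reference, a quick way to see which default route and addresses the container ended up with (assuming c2 is the container attached to both bridges) is:

lxc exec c2 -- ip route
lxc exec c2 -- ip addr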

Can a tunnel be used here? If so, I would highly appreciate an example.

Please can you explain further what you are trying to achieve?

Sure. lxdbr1 uses a different ipv4.nat.address, and the same goes for lxdbr2, etc.
c1 -> lxdbr1, c2 -> lxdbr2, c3 -> lxdbr3.
I want these containers to be able to communicate with each other internally.

OK.

So you should be able to communicate between containers on different bridges, even with ipv4.nat.address defined. It's just that the source address of the packets will appear to come from the address of the bridge. You should still be able to target individual container addresses though.

If, however, you do want to be able to see the individual container source addresses (perhaps because you want to restrict access to services to particular source addresses), then you would need to disable NAT on the LXD bridge (set ipv4.nat to false) and then add manual iptables NAT rules that apply NAT only to packets leaving the host on the external interface.
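As a minimal sketch, assuming lxdbr0 is the bridge in question, disabling LXD's own NAT is just:

lxc network set lxdbr0 ipv4.nat false

The matching manual iptables rule for the external interface would then be added separately.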

You're right, dumb mistake here. I used to configure static IPs for instances manually and forgot to add DNS; just now I realized I am better off not doing so, and that increasing ipv4.dhcp.expiry is a better solution.
Thank you.

Hi @tomp, I am revisiting this for another project. May I ask what the difference is between ipv4.nat: "true" and ipv4.nat: "false", please?

This indicates to LXD whether it should add an outbound firewall rule on the LXD host machine to NAT outbound traffic from the managed bridge to the host's IP.
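For reference, when LXD is using the iptables backend you can inspect the rules it adds with:

sudo iptables -t nat -S POSTROUTING

The LXD-generated entries typically carry a comment referencing the network name (e.g. lxdbr0).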

So, I have run an nginx container proxy with ipv4.nat: "false" and the port works; it also works with ipv4.nat: "true". So what difference should I expect?

I'm not sure what you mean, but it sounds like you are asking why your nginx port forward works with ipv4.nat set to either true or false. This is because that setting influences outbound connections, whereas port forwarding relates to inbound connections.
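For example, an inbound forward via a proxy device (hypothetical device name and ports) keeps working whatever ipv4.nat is set to, because it handles traffic coming into the container rather than leaving it:

lxc config device add c2 web80 proxy listen=tcp:0.0.0.0:8080 connect=tcp:127.0.0.1:80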

Hi @tomp, an old thread, but I have a question please.
Suppose lxdbr1 has ipv4.nat.address: 192.168.1.65
and lxdbr2 has ipv4.nat.address: 192.168.1.89,
and I wish to have your setup from the accepted solution plus ipv4.nat.address.
Now, if I have MySQL running in a container on lxdbr1 and an app makes a connection from lxdbr2, I get connection refused for 192.168.1.89. I wish for the connection to come from the container's ipv4.address (on the 10.238.31.1/24 network), not the NAT IP. I know adding a rule in mysql to allow connections from the NAT IP will work, but I need it to come from the container's assigned IP.
Is there a way to do this while maintaining the same setup, please?

So in this case packets are leaving lxdbr1 destined for lxdbr2, and during postrouting phase iptables is performing SNAT to the specified IP address for the source network.

What you’re asking is that SNAT be only applied if the packets are leaving a network interface that is external to the LXD host and is not one of the other internal bridge interfaces.

Currently you cannot specify this rule with LXD network config; by default it creates an SNAT or MASQUERADE rule that will apply NAT irrespective of the exiting interface.

However you can disable LXD’s NAT rules by setting ipv4.nat=false on your source bridge network.

Then you can add the required specific SNAT iptables/nftables rules manually:

e.g. my external network interface is enp3s0:

lxc network show lxdbr0
config:
  ipv4.address: 10.238.31.1/24
  ipv4.nat: "false"
  ipv6.address: fd42:be3f:a937:9505::1/64
  ipv6.dhcp.stateful: "false"
  ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/c1?project=test
managed: true
status: Created
locations:
- none

Add manual rule:

sudo iptables -t nat -A POSTROUTING -s 10.238.31.0/24 ! -d 10.238.31.0/24 -o enp3s0 -m comment --comment "manual SNAT rule for LXD network lxdbr0" -j SNAT --to-source 192.168.1.201

Now only traffic from lxdbr0 leaving via enp3s0 will be SNATted.
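If your host is using the nftables backend instead, a roughly equivalent manual rule (an untested sketch using the same example subnet, interface and NAT address, with an arbitrary table name) would be:

sudo nft add table ip manual_snat
sudo nft 'add chain ip manual_snat postrouting { type nat hook postrouting priority 100; }'
sudo nft add rule ip manual_snat postrouting ip saddr 10.238.31.0/24 ip daddr != 10.238.31.0/24 oifname "enp3s0" snat to 192.168.1.201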

And you then need to automate that rule being added on boot, such as using a separate systemd unit.
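A minimal sketch of such a unit, reusing the rule above (hypothetical unit name, and the iptables path may differ on your distribution):

# /etc/systemd/system/lxdbr0-snat.service
[Unit]
Description=Manual SNAT rule for LXD network lxdbr0
After=network-online.target lxd.service
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/iptables -t nat -A POSTROUTING -s 10.238.31.0/24 ! -d 10.238.31.0/24 -o enp3s0 -m comment --comment "manual SNAT rule for LXD network lxdbr0" -j SNAT --to-source 192.168.1.201
ExecStop=/usr/sbin/iptables -t nat -D POSTROUTING -s 10.238.31.0/24 ! -d 10.238.31.0/24 -o enp3s0 -m comment --comment "manual SNAT rule for LXD network lxdbr0" -j SNAT --to-source 192.168.1.201

[Install]
WantedBy=multi-user.target

Then enable it with:

sudo systemctl enable --now lxdbr0-snat.service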

@tomp Thanks for your useful input.
I have no doubt what you explained should work.
Will this be manageable with LXD at some point? I can keep using the current setup for now, because I wish to keep it simple with minimal configuration.

There are no plans at the moment.

You could open an issue over at https://github.com/lxc/lxd/issues to log this as a request.

Potentially it could be specified using an ipv4.nat.interfaces setting or similar, e.g. ipv4.nat.interfaces=enp3s0