Networking between two LXD servers

How should I configure networking for two LXD servers?

For example, there is Container_X on the LXD_A server and Container_Y on the LXD_B server.
How can Container_X and Container_Y talk to each other? What’s the best networking setup for this?
And how about DNS? Can container_x.lxd and container_y.lxd resolve each other?

Thank you so much.

There are a lot of potential answers to this. It also depends on your LXD version.

Most production environments have a separate network infrastructure which includes DHCP and DNS. They will usually give you a VLAN for your containers; you’d then configure both hosts to use that VLAN and bridge the containers to it.
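For example, if each host already has a bridge on that VLAN (assumed to be called br0 here), attaching containers to it is just a profile device, something like:

    # assumes br0 is an existing host bridge attached to the container VLAN
    lxc profile device add default eth0 nic nictype=bridged parent=br0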

But for smaller environments or when you don’t want to deal with VLANs and switches, recent LXD lets you use VXLAN for cross-host networking.

In this case, one of the hosts effectively acts as the router for your containers and handles DHCP and DNS for them. The other hosts connect to that network using VXLAN and only bridge the containers to it, letting the main host do the rest.

I describe such a setup in the tunnel section of https://stgraber.org/2016/10/27/network-management-with-lxd-2-3/
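A minimal sketch of what that can look like with a unicast VXLAN tunnel (the bridge name, tunnel name and addresses below are just placeholders):

    # On the main host (runs DHCP/DNS for the network):
    lxc network create crossbr0 \
        tunnel.lan.protocol=vxlan \
        tunnel.lan.local=192.168.1.10 tunnel.lan.remote=192.168.1.20

    # On the other host, join the same network without any addressing of its own:
    lxc network create crossbr0 ipv4.address=none ipv6.address=none \
        tunnel.lan.protocol=vxlan \
        tunnel.lan.local=192.168.1.20 tunnel.lan.remote=192.168.1.10

    # Then attach containers to it on either host:
    lxc network attach crossbr0 container_y eth0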

But the host acting as router, DHCP and DNS server will be a SPOF then.
It would be neat if the DNS records, DHCP leases and router functions could run in HA mode, i.e. the leases and DNS synced to all (or at least 2) of the other LXD hosts connected to the same VXLAN/GRE tunnel, with all the LXD hosts sharing a virtual IP for redundancy for those functions.

Main question: would that be possible/wanted (maybe it’s just me thinking out loud :wink: )?

Indeed, our current implementation causes a SPOF.
We use dnsmasq, which doesn’t support any kind of lease or DNS replication. People who need such an HA setup will usually want to run their own infrastructure for that.

For example, you could configure the same network on both hosts using LXD and the VXLAN tunnel option. Give each of them a different IP, say x.x.x.1 for the first host and x.x.x.2 for the second, but don’t enable DHCP or DNS on either of them.

Both hosts are then able to act as routers, and you can deploy your HA DHCP and DNS services as containers, one of each on each host, with DHCP announcing both gateways. It’s not quite as good as having a VIP for the gateway, but it should be reasonably close.
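Roughly like this (the bridge name, tunnel endpoints and subnet are placeholders, and the exact config keys available may vary with your LXD version):

    # Host 1: own address, DHCP and managed DNS disabled
    lxc network create habr0 ipv4.address=10.0.100.1/24 ipv4.dhcp=false \
        ipv6.address=none dns.mode=none \
        tunnel.ha.protocol=vxlan \
        tunnel.ha.local=192.168.1.10 tunnel.ha.remote=192.168.1.20

    # Host 2: same subnet, different address, DHCP/DNS also disabled
    lxc network create habr0 ipv4.address=10.0.100.2/24 ipv4.dhcp=false \
        ipv6.address=none dns.mode=none \
        tunnel.ha.protocol=vxlan \
        tunnel.ha.local=192.168.1.20 tunnel.ha.remote=192.168.1.10

    # Your own DHCP/DNS containers on that network then hand out
    # both 10.0.100.1 and 10.0.100.2 as gateways.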

The even more HA method would be to have an external HA router connected to the VXLAN you’re using for your containers. Modern L3 switches tend to support VXLAN natively and can do that for you; you’d obviously need an HA chassis for that.

True…I wasn’t thinking big enough :wink:
In a decent organisation you would indeed have a redundant LAN setup and separate DNS and DHCP servers anyway.

Thank you so much for the detailed answers. I understand that the DHCP & DNS side is quite complex for HA deployments, but I think this area has a bright future.

Maybe I need to update my question: since the IP addresses of LXD guests are permanent, DNS & DHCP would be a secondary issue. For example, I need to deploy a MySQL Galera Cluster with 3 nodes as LXD guests. Each guest will be on a different LXD host (at Scaleway), like this:

lxd-host1: mysql-galera-cluster-1 (10.99.1.10/24)
lxd-host2: mysql-galera-cluster-2 (10.99.2.20/24)
lxd-host3: mysql-galera-cluster-3 (10.99.3.30/24)

How can I make these 3 LXD guests talk to each other?

Thank you so much.

Assuming your 3 nodes don’t have dedicated connections between each other, you’ll be going over the internet to communicate between them. In that case your best bet is to set up a VPN between the various nodes.

So each node would have a VPN connection (IPsec or OpenVPN) to the other two and have the remote hosts’ subnets routed over the VPN. When mysql-galera-cluster-1 wants to talk to mysql-galera-cluster-2, the traffic goes to its host, the host finds that the route to 10.99.2.20 is over a VPN tun device and routes the traffic there, and it arrives on the remote host, which routes it to the container.
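As a rough illustration of the routing side (interface names and which tunnel points where are placeholders; the VPN setup itself is separate):

    # On lxd-host1, once the VPN tunnels to the other two hosts are up:
    ip route add 10.99.2.0/24 dev tun0    # containers on lxd-host2
    ip route add 10.99.3.0/24 dev tun1    # containers on lxd-host3

    # Make sure the host forwards traffic between its bridge and the tunnels:
    sysctl -w net.ipv4.ip_forward=1

    # lxd-host2 and lxd-host3 get the equivalent routes back towards
    # 10.99.1.0/24 and each other.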

VXLAN and GRE tunneling in LXD are nice for a lab or similar dedicated environment, but I wouldn’t use them over the internet or an untrusted provider network since neither has built-in encryption.

Just playing a bit with internetworking between LXD hosts: besides encryption, is there a difference/preference between using the LXD (version 2.16) GRE tunnels and Open vSwitch GRE tunnels? Both seem to work just fine, and both appear to act as one virtual switch/VLAN across the 2 LXD hosts.

My set-up in short:

  • 2 LXD hosts separated by a router, so on different subnets
  • Both LXD hosts have the same bridge name/subnet defined, each with a different IP address on it (so not really the client/server set-up you described on your blog)
  • Containers on LXD host 1 get their default route via host 1 and containers on LXD host 2 point to host 2, so if host 1 is unreachable, the containers on host 2 still have their gateway.

I am doing encryption via OpenVPN now, but might change that to IPsec later.
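For reference, the two variants I’m comparing look roughly like this (addresses and names are placeholders):

    # LXD-managed bridge with a GRE tunnel to the other host:
    lxc network create testbr0 ipv4.address=10.0.200.1/24 ipv4.dhcp=false \
        tunnel.gre0.protocol=gre \
        tunnel.gre0.local=192.0.2.10 tunnel.gre0.remote=198.51.100.10

    # Open vSwitch equivalent: a bridge with a GRE port to the other host:
    ovs-vsctl add-br ovsbr0
    ovs-vsctl add-port ovsbr0 gre0 -- set interface gre0 type=gre \
        options:remote_ip=198.51.100.10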

edit: one side effect when you have IP addresses on both LXD hosts: don’t launch containers too quickly on both servers, or while one host is unreachable, because you might end up with duplicate IP addresses