LXD networking on a host with dual LANs

We are evaluating LXC/LXD as a replacement for OpenVZ. Currently we have containers, each with a public IP, sitting on a host node with two LANs. The host nodes run RIP.

In OpenVZ, the public IP of the container is given a kernel route that we announce via RIP over both of the LANs (for automated fail-over). If one LAN goes down, traffic is routed to the container over the other LAN via RIP.

I’m wondering how we would do something similar in LXC/LXD. Right now we are using a bridge from one of the host NICs into the container, and that works well. But we have no redundancy in case that LAN becomes unavailable.

Any suggestions?

Would you just want to set up a bridge in LXD that doesn’t have any of the physical devices connected?
You’d then route your IPs to that bridge, effectively the same way things worked with OpenVZ.
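A minimal sketch of that suggestion, assuming a recent LXD with the `lxc network` command (the subnet, container name, and public IP here are illustrative, not from this thread):

```shell
# Create a bridge with no physical NICs attached and no NAT/DHCP,
# so it only carries routed traffic to/from containers.
lxc network create lxdbr0 \
    ipv4.address=192.168.122.1/24 \
    ipv4.nat=false \
    ipv4.dhcp=false \
    ipv6.address=none

# Attach a container's eth0 to the bridge.
lxc network attach lxdbr0 mycontainer eth0

# On the host, route the container's public IP to the bridge,
# then announce it via RIP exactly as in the OpenVZ setup.
ip route add 203.0.113.10/32 dev lxdbr0
```

These commands need a live LXD host with root privileges, so treat them as a starting point rather than a tested recipe.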

Interesting … I did not think of that. And I could add the routes to the bridge in the container startup scripts, right?
Thank you!

Let me dig into that. I will report back if it works and how-to.

sean

My thinking here is the bridge interface would need an IP address. Can I just use lxdbr0? We’re not using it for anything else.

OK, I think I have it figured out. But now for one final problem:

How to add a static route on the host to the container.

So after the container starts:
/sbin/ip route add IP/32 dev lxdbr0
After the container stops:
/sbin/ip route del IP/32 dev lxdbr0

I want this to be in the container config file so that if I migrate the container to another host, the config/routes will go with it.

Adding /usr/lib/lxc/container/config doesn’t work.
Running lxc config set container and adding:
lxc.hook.start: "/var/lib/lxc/bos-java03-cl01/startup.sh"
lxc.hook.stop: "/var/lib/lxc/bos-java03-cl01/shutdown.sh"
results in an "lxc.hook.start bad key" error.
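For what it’s worth, LXD does not accept `lxc.hook.*` keys directly; one possibility (untested here, and the hook key names vary with the underlying LXC version) is passing them through LXD’s `raw.lxc` key:

```shell
# Hypothetical: pass raw LXC hook settings through LXD's raw.lxc key.
# lxc.hook.start-host and lxc.hook.post-stop run on the HOST (which is
# what you want for manipulating host routes); lxc.hook.start runs
# inside the container. Key names differ across LXC versions.
lxc config set bos-java03-cl01 raw.lxc "
lxc.hook.start-host = /var/lib/lxc/bos-java03-cl01/startup.sh
lxc.hook.post-stop = /var/lib/lxc/bos-java03-cl01/shutdown.sh
"
```

Whether `raw.lxc` accepts hook keys at all depends on the LXD release, so check the documentation for your version before relying on this.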

Any ideas?

I am back to this. I thought that I had developed a solution to this but it seems to not be as flexible as I had hoped.
To recap, our nodes have two LANs, say LAN-1 and LAN-2. The containers are set up with their own IP addresses that we can say are in LAN-3.
The needs are:

  1. We want to be able to assign a static IP in LAN-3 to the containers, and be able to reach them from either LAN-1 or LAN-2.
  2. We want to be able to easily migrate a container from HOST A to HOST B and preserve routability.

Based on the thread above, we’ve set up every machine with a bridge called lxdbr0 that is not connected to either network. It has a static IP address of 192.168.122.1 on all nodes. Each container has its own network configuration and IP address, with a default route to 192.168.122.1.
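For reference, the container side of that setup can be expressed as a static interface definition (a Debian-style /etc/network/interfaces fragment; the container address 10.50.0.7 is illustrative, the gateway 192.168.122.1 is the bridge address from this thread):

```
auto eth0
iface eth0 inet static
    address 10.50.0.7/32
    # With a /32 the gateway is not on-link, so add it explicitly
    # before installing the default route through it.
    post-up ip route add 192.168.122.1/32 dev eth0
    post-up ip route add default via 192.168.122.1
```

This is a sketch of one way to wire it up; a netplan or systemd-networkd equivalent would work the same way.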

The challenge is building a route on the host node to the container.

We’ve tried two techniques:

  1. lxc network set lxdbr0 ipv4.routes CONTAINER-IP
    – This sets up the route properly, but as new containers are added and others moved or deleted, we need to rebuild the entire route list. I.e., if I have containers 1, 2, 3 on the host and add container 4, I need to run:
    lxc network set lxdbr0 ipv4.routes "1,2,3,4"
    If I move 3 to another host, I need to then run:
    lxc network set lxdbr0 ipv4.routes "1,2,4"
    Then I need to flush the routing table to get rid of the route to 3.
  2. A simple host route to the bridge:
    ip route add 3 dev lxdbr0
    The drawback is that the route needs to be added and removed on both ends each time we move a container.

We’re looking for the ideal (or recommended) way to build and tear down these container-specific routes when the containers are started, stopped, or moved.
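As a stopgap for option 2, the add/remove commands could at least be wrapped in a single hook-friendly helper. A sketch (the function name and the DRY_RUN switch are my own, added so the logic can be exercised without root):

```shell
#!/bin/sh
# Hypothetical helper: manage a host route for a container's /32
# address via the lxdbr0 bridge. Call with "add" on container start
# and "del" on stop. DRY_RUN=1 prints the ip command instead of
# running it.
container_route() {
    action="$1"             # "add" or "del"
    ip_addr="$2"            # the container's LAN-3 address
    bridge="${3:-lxdbr0}"   # bridge the route points at
    cmd="ip route $action ${ip_addr}/32 dev ${bridge}"
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "$cmd"
    else
        $cmd
    fi
}

# Example invocations (e.g. from start/stop hooks):
#   container_route add 192.168.122.50
#   container_route del 192.168.122.50
```

This still leaves the "run it on both ends" problem; it only centralizes the route syntax so the hooks stay one line each.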

Any ideas?
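One more avenue worth checking, depending on your LXD version: newer releases document a per-NIC `ipv4.routes` option on bridged devices, which would attach the route to the container’s own config (so it is created on start, removed on stop, and travels with the container on migration). Hedged sketch, with an illustrative container name and address:

```shell
# Possibly available in newer LXD releases (verify against your
# version's docs): a static host route tied to the container's NIC
# rather than to the bridge as a whole.
lxc config device set mycontainer eth0 ipv4.routes 10.50.0.7/32
```

If supported, this addresses both requirements at once, since the route is part of the container config that migrates with it.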

Hi Sean,

I think, if I’m following, what you are trying to do is have IP mobility for containers that migrate from one host to another while keeping the same static address?

If so then I have achieved this in a proof of concept using routing protocols (OSPF in this case) and accessing the containers by a loopback address.

Basically you give each container a /32 address on a loopback interface and advertise that loopback into your network so the core learns the /32; you will have to run a routing daemon in the container. When the container moves/migrates to another host, it spins up and advertises its loopback from the new location, and the core learns to reach the /32 via the new host. This may clutter the routing table of the upstream router/switch, as there will be lots of /32s scattered about in there, but you can summarize at that point and advertise only a summary/aggregate to whatever lies further up the routing chain.
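A sketch of the container-side routing-daemon config for that approach, assuming FRR as the daemon (the /32, router-id, area, and transit subnet are all illustrative):

```
! /etc/frr/frr.conf inside the container
! Loopback carries the container's stable /32 service address.
interface lo
 ip address 10.50.0.7/32
!
router ospf
 ospf router-id 10.50.0.7
 ! Advertise the loopback /32 plus the transit link to the host bridge,
 ! so the core learns the /32 via whichever host the container is on.
 network 10.50.0.7/32 area 0
 network 192.168.122.0/24 area 0
```

The same idea works with BIRD or Quagga; the only requirement is that the /32 is re-advertised from wherever the container comes up.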

The other thing you could look at, if you want to keep the same IP and have L2 mobility, is EVPN: the containers sit in the same L2 domain stretched over a routed underlay, with BGP providing MAC learning and the control plane.