Network unreachable in LXD container configured to use OVH failover IP (with netplan)

Hello all,

I have an OVH VPS running Ubuntu 20.04. I tried to configure my LXD containers to use public IPv4 failover addresses (/32). Unfortunately, I only managed to make it work in an LXD container running Ubuntu 16.04 (which uses ifupdown for network configuration) but NOT in an LXD container running Ubuntu 20.04 (which uses netplan).

A bridge device br0 is configured on my VPS with public IP HOST_IP. I have two additional IPs provided by OVH: OVH_FAILOVER_IP1 and OVH_FAILOVER_IP2.
I have configured two LXD profiles, and both containers c1 and c2 use both of them:

  1. “default”:

         config: {}
         description: Default LXD profile
         devices:
           eth0:
             name: eth0
             nictype: bridged
             parent: lxdbr0
             type: nic
           root:
             path: /
             pool: default
             type: disk
         name: default
         used_by:
         - /1.0/instances/c1
         - /1.0/instances/c2

  2. “extbridge”:

         config: {}
         description: Lets containers use public network interface
         devices:
           eth0:
             name: eth0
             nictype: bridged
             parent: br0
             type: nic
         name: extbridge
         used_by:
         - /1.0/instances/c1
         - /1.0/instances/c2
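
For reference, attaching a container to both profiles looks roughly like this (example commands only, not necessarily the exact ones I used):

      # attach both profiles at creation time
      lxc launch ubuntu:20.04 c2 -p default -p extbridge
      # or re-assign them on an existing container
      lxc profile assign c1 default,extbridge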

This is the output of “lxc network list”:

+--------+----------+---------+-------------+---------+
|  NAME  |   TYPE   | MANAGED | DESCRIPTION | USED BY |
+--------+----------+---------+-------------+---------+
| br0    | bridge   | NO      |             | 3       |
+--------+----------+---------+-------------+---------+
| ens3   | physical | NO      |             | 0       |
+--------+----------+---------+-------------+---------+
| lxcbr0 | bridge   | NO      |             | 0       |
+--------+----------+---------+-------------+---------+
| lxdbr0 | bridge   | YES     |             | 1       |
+--------+----------+---------+-------------+---------+

In container c1, I have the following configuration (inspired by this blog: https://thomas-leister.de/en/lxd-use-public-interface/):

  • in /etc/network/interfaces:

      # This file describes the network interfaces available on your system
      # and how to activate them. For more information, see interfaces(5).
    
      # The loopback network interface
      auto lo
      iface lo inet loopback
    
      auto eth0
      # iface eth0 inet dhcp
      iface eth0 inet static
          address OVH_FAILOVER_IP1/32
          gateway GATEWAY_IP
          dns-nameservers DNS_IP
    
      source /etc/network/interfaces.d/*.cfg
      
      # NOTE: directory /etc/network/interfaces.d/ is empty
    
  • in /etc/resolv.conf:

      # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
      #     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
      nameserver DNS_IP
    

In container c2, I have the following configuration:

  • in /etc/netplan/10-lxc.yaml:

      network:
        version: 2
        renderer: networkd
        ethernets:
          eth0:
            dhcp4: no
            dhcp6: no
            addresses:
              - OVH_FAILOVER_IP2/32
            gateway4: GATEWAY_IP
            nameservers:
              addresses:
                - DNS_IP
    
  • in /etc/resolv.conf:

      # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
      #     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
      # 127.0.0.53 is the systemd-resolved stub resolver.
      # run "systemd-resolve --status" to see details about the actual nameservers.
      nameserver DNS_IP
      nameserver 127.0.0.53
    

The output of “lxc list” is:

+------+---------+-------------------------+------+-----------+-----------+
| NAME |  STATE  |          IPV4           | IPV6 |   TYPE    | SNAPSHOTS |
+------+---------+-------------------------+------+-----------+-----------+
| c1   | RUNNING | OVH_FAILOVER_IP1 (eth0) |      | CONTAINER | 0         |
+------+---------+-------------------------+------+-----------+-----------+
| c2   | RUNNING | OVH_FAILOVER_IP2 (eth0) |      | CONTAINER | 0         |
+------+---------+-------------------------+------+-----------+-----------+

Container c1 works correctly: from within the container I can ping the internet (e.g., “ping -c 4 www.ubuntu.com”), and I can also reach the container with netcat from the host and from outside (e.g., “netcat -l 80” in the container and “netcat OVH_FAILOVER_IP1 80” on the host or elsewhere).

Container c2 does not work: from within the container I cannot ping the internet (“Network unreachable”), and the container is not reachable from either the host or the outside.

I am a bit confused: the two configurations look equivalent to me (one uses ifupdown and the other netplan, but the result should be the same, shouldn’t it?).

One thing I have noticed is that in container c1, the command “ip addr” gives something like:

eth0@if45: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
	inet OVH_FAILOVER_IP1/32 brd OVH_FAILOVER_IP1 scope global eth0
	...

while in container c2 it gives:

eth0@if47: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet OVH_FAILOVER_IP2/32 scope global eth0
    ...

I am not sure whether the missing broadcast address is relevant here…

Another difference is the output of “networkctl”:

  • in c1:

      WARNING: systemd-networkd is not running, output will be incomplete.
      IDX LINK             TYPE               OPERATIONAL SETUP
        1 lo               loopback           n/a         unmanaged
       44 eth0             ether              n/a         unmanaged
    
  • in c2:

      IDX LINK TYPE     OPERATIONAL SETUP
        1 lo   loopback carrier     unmanaged
       46 eth0 ether    routable    failed
    

Do you have any clue as to what may be causing the difference in behaviour?

Thanks a lot

Aside from the specifics of the config you have, it is my understanding that OVH enforces that all external IPs use the same MAC address, or at least that each is associated with a single static MAC address. Is this the case in your situation?

Please can you confirm what MAC address restrictions your provider enforces, as this will impact the suggested solution. Thanks

Thanks a lot for your quick reply!

I think you are right: when I configured the bridge br0 on the host, I had to specify the MAC address (as explained in the OVH documentation), otherwise it would not work. OVH allows associating virtual MAC addresses with failover IPs, but only for dedicated servers, not virtual servers like my VPS. So it seems that all external IPs should indeed use the same MAC address (apologies for this rather vague answer, I am kind of a noob; I am going to ask on an OVH forum to be 100% sure, since I was not able to find an explicit answer in the documentation). If this is confirmed, does it imply that I have to explicitly specify the MAC address in the network configuration of the LXD containers? What surprises me is that the configuration in the Ubuntu 16.04 container worked straight away but the one in the Ubuntu 20.04 container did not.

Yes, it is as I thought: in that case you cannot connect your containers to the br0 bridge, because it is a layer 2 device and each container NIC will present its own MAC address on the external network, which will then be filtered by the upstream ISP.

Instead, take a look at using the routed NIC type, as it is specifically designed to allow static external IPs to be routed into a container while still using the host’s MAC address for external connectivity.
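
Something along these lines should work, assuming ens3 is the host’s external-facing interface (if the host address lives on br0, use that as the parent instead); the exact commands are only an illustration:

      # add a local routed NIC named eth0 on c2, overriding the bridged one
      # inherited from the profiles, and push the failover IP into the container
      lxc config device add c2 eth0 nic nictype=routed parent=ens3 ipv4.address=OVH_FAILOVER_IP2
      lxc restart c2

With a routed NIC, LXD configures the address and default route inside the container itself, so the static netplan/ifupdown configuration for eth0 should be removed (or told to leave the interface alone) so it doesn’t fight with what LXD sets up.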

I’m surprised one of your containers works when connected to br0, as it really should be filtered on the external network because it uses a different MAC. Is it possible that you were using a different setup on your old host, or that some MACs are allowed?

Thanks for the link!

I have not changed the setup on the host, and I tried three times to delete and recreate container c1 following exactly the same procedure; it worked every time (I also tested restarting the container and the host, and it is stable: working each time). But container c2 just won’t work. So I guess the second option is more likely (some MACs are allowed). I need to clarify this with OVH, I think. But isn’t it weird that when I run “lxc list” the correct IP address appears in the IPV4 column (even for container c2)? I would expect no address to be displayed.

No, that is not weird: that is just LXD reading the active IP inside the container (which your netplan config has added). It doesn’t mean the traffic won’t get filtered by the external network.

Ok, thanks for the clarification. I will try the routed NIC type to see if it works, and in the meantime, if I receive more information from OVH, I will add it to this post.

I tried using the routed NIC type (as explained in the tutorial) on both containers, and it works very well! Unfortunately, I still do not know exactly which constraints OVH enforces on MAC addresses.

Thanks a lot!

I received a reply from OVH, and it seems that they do enforce that all IPs on a given VPS use the same MAC address. I don’t really know why my initial configuration was partly working…

Perhaps you were NATting outbound traffic to the host’s IP, similar to this:
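
(Hypothetical illustration only; br0 as the host’s external-facing interface and HOST_IP are placeholders, not taken from this thread.)

      # rewrite outbound traffic from the failover IP so it leaves with the host's own source IP
      iptables -t nat -A POSTROUTING -s OVH_FAILOVER_IP1/32 -o br0 -j SNAT --to-source HOST_IP

With a rule like that, outbound traffic from the container leaves with the host’s source IP rather than the failover IP.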