Domain resolution in Debian 10 container with routed NIC

Yes, it seems that Debian’s cloud-init renderer is objecting to the fact that the IP address is already configured on the interface.

I’m not sure if there is anything in cloud-init to instruct the renderer to remove IPs, or to ignore them if they are already present.

Does it work if you remove the IP inside the container after boot manually and then try starting the network service?

As for the NAT rule, the routed NIC doesn’t affect the host’s NAT rules, so if you need NAT then you need to add that (although that does beg the question the need for routed NIC type, as the primary purpose of that is to expose the container’s IP onto the network without NAT).

Does it work if you remove the IP inside the container after boot manually and then try starting the network service?

If I remove the IP, restart networking and restore the routes manually, everything works as before, but now the networking service is running correctly.
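For reference, the manual workaround described here can be sketched roughly as follows, run inside the container. The interface name eth0 and the link-local gateway 169.254.0.1 (the host-side address LXD’s routed NIC documentation uses) are assumptions:

```shell
# Inside the container: remove the preconfigured address so Debian's
# networking service can start cleanly (eth0 is an assumed interface name)
ip addr flush dev eth0

# Restart the legacy networking service
systemctl restart networking

# Restore the link-local default route that the routed NIC relies on
# (169.254.0.1 is the address LXD configures on the host side of the veth)
ip route add 169.254.0.1 dev eth0
ip route add default via 169.254.0.1 dev eth0
```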

As for the NAT rule, the routed NIC doesn’t affect the host’s NAT rules, so if you need NAT then you need to add that (although that does beg the question the need for routed NIC type, as the primary purpose of that is to expose the container’s IP onto the network without NAT).

I’m pretty sure that when I used “Ubuntu” image in the container, I didn’t have to set up NAT on the host to be able to ping the internet from the container - that’s what I don’t understand really - why I need to do that on Debian 10? Shouldn’t routing set up by LXD take care of that? Do you have any idea what’s going on here?

We may actually be pretty close to a working Debian 10 configuration, which is great, but it is also a bit sad that there are no instructions anywhere on the internet yet on how to set up one of the most popular server distros as a container.

I realized one more important thing: despite not being able to ping external addresses from my guest, I CAN ping other devices in my local network. So the packets actually are routed outside of my host. I’m now convinced that I’m missing something quite simple, but my networking knowledge is pretty limited yet. :confused:

Ah interesting, but let’s back up for a moment and review what you are trying to achieve.

The default networking mode in LXD is to set up a virtual bridge (or switch) called lxdbr0 that provides a private DHCP and DNS server (via dnsmasq). Containers then use the bridged NIC device in the default profile and are allocated either a dynamic or static IP via DHCP. The lxdbr0 network also sets up NAT rules to give the containers outbound external access by ‘hiding’ behind the host’s IP and MAC address.
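To inspect this default setup on a given host, a couple of read-only commands (assuming a standard LXD installation with the default lxdbr0 bridge):

```shell
# Show the managed bridge, including its subnet and NAT settings
lxc network show lxdbr0

# Show the default profile, including the bridged NIC device it attaches
lxc profile show default
```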

This is usually enough to get up and running. But sometimes you might want the containers to actually ‘join’ the external network as if they were real machines on the network. In this case using NAT is not desirable, as it masks all of the containers behind the host’s IP.

In this case there are a few options available to get the containers to join the external network without NAT.

  1. Convert your host’s external interface into a switch/bridge (e.g. br0) and move the host’s IP addressing to that interface and add the host’s external interface to the br0 bridge. Then have LXD containers also connect to that bridge, effectively joining them to the external network at layer 2. You can do this using lxc config device add <instance> eth0 nic nictype=bridged parent=br0. Your containers would then rely on DHCP and DNS services from the external network as if they were real machines, and each would have their own MAC address on the network.
  2. Converting your host’s external interface and addressing to a bridge can be complex, especially if being done on a remote machine, as it can involve losing network access. Instead another option is to use the macvlan NIC type with a parent of the external interface. This achieves a similar solution to the external bridge above, without setting up another bridge. However it comes with a restriction that the containers are not allowed to communicate with the host. This may or may not be a problem depending on your use case.
  3. Some external networks do not allow a single host to use more than one MAC address per port. In this case, using an external bridge or macvlan is not possible, because each instance would have its own MAC address. Here we can use either the routed or ipvlan NIC types. The latter has a similar restriction to macvlan in that it prevents communication with the host. But both of these types allow specific designated IPs to be advertised on the external network as owned by the host’s MAC address, and then routed into the instances without needing to use NAT.
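A hedged sketch of how each of these options might be attached to an instance; the instance name c1, the interface names br0/enp3s0, and the IP 192.0.2.10 are illustrative placeholders, not values from this thread:

```shell
# Option 1: bridged NIC connected to an existing external bridge br0
lxc config device add c1 eth0 nic nictype=bridged parent=br0

# Option 2: macvlan NIC directly on the host's external interface
lxc config device add c1 eth0 nic nictype=macvlan parent=enp3s0

# Option 3: routed NIC with a designated static IP advertised by the host
lxc config device add c1 eth0 nic nictype=routed parent=enp3s0 \
    ipv4.address=192.0.2.10
```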

So with that background, you can perhaps see why I am a little confused why you are enabling NAT on your LXD host, as the primary reason to use routed NICs is when you don’t want to use NAT.

The way that routed and ipvlan advertise their IPs onto the network is via proxy ARP/NDP (also called neighbour proxy): it effectively asks the host to claim ownership of the IPs on the external network. This is why instances using these NIC types need static IPs assigned, and why, as a convenience, LXD preconfigures the IPs on the interfaces (as DHCP won’t work).
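Conceptually, for each instance IP this is similar to the host doing something like the following (illustrative only; LXD manages these entries itself, and the interface name, veth name, and IP are placeholders):

```shell
# Let the host answer ARP requests for addresses it doesn't own itself
sysctl net.ipv4.conf.enp3s0.proxy_arp=1

# Claim a specific instance IP on the external network via proxy ARP
ip neigh add proxy 192.0.2.10 dev enp3s0

# Route traffic for that IP into the instance's veth pair
ip route add 192.0.2.10 dev veth1234abcd
```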

The vast majority of users will use DHCP with bridged or macvlan networking.
Different distros use different network configuration systems with their own subtle behaviours, and using cloud-init introduces additional restrictions on top.

In this case it appears that Debian’s network setup doesn’t like it if IPs are preconfigured, and cloud-init (to my knowledge anyway) doesn’t offer a way to clear the IPs before running the network setup. However perhaps there is a pre-script which can be run to achieve this.

Another option would be to extend LXD with an option to not preconfigure the IPs on the interface when passing it into the instance, and leave it up to the network configuration inside the instance.

One thing to point out, though, is that, as we’ve seen, unlike Ubuntu with netplan, Debian does not clear IPs on the interfaces before setting them up. This means that in actual fact we do not need cloud-init to configure networking; it’s already set up. All we need to do is set the DNS servers. So if cloud-init can be configured to just set up the DNS servers, then that should suffice.
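For example, a minimal cloud-init user-data fragment that only writes the DNS configuration could look something like this (the nameserver address is a placeholder to replace with your own):

```yaml
#cloud-config
runcmd:
  - rm -f /etc/resolv.conf
  - echo "nameserver 192.0.2.53" > /etc/resolv.conf
```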

So assuming that NAT isn’t actually desired, and is disabled, then if you can still ping other devices on the network and they can ping the container’s IP as well, then that means that both the proxy ARP and static routes are working.

If at that point you cannot ping, then the next step is to try pinging the network’s default gateway and make sure (using tcpdump) that the packets from the container are leaving the host’s external port.
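A sketch of that check on the host, assuming the external interface is enp3s0 (adjust the interface names to your own setup; the veth name is a placeholder, found via `ip route` on the host):

```shell
# Watch ICMP on the host's external interface while the container pings
tcpdump -ni enp3s0 icmp

# Separately, confirm the packets first arrive on the host from the
# container's veth end
tcpdump -ni veth1234abcd icmp
```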

Assuming you can resolve the other pinging issue on your network, a way to automate this via cloud-init, working around Debian’s network setup restrictions, is as follows:

Don’t add the address info in the cloud-init network config; just disable DHCP, and instead manually apply the desired nameservers to /etc/resolv.conf using the user-data:

lxc profile show routed

config:
  user.network-config: |
    version: 2
    ethernets:
      eth0:
        dhcp4: false
        dhcp6: false
        routes:
          - to:
            on-link: true
  user.user-data: |
    #cloud-config
    runcmd:
      - rm -f /etc/resolv.conf
      - echo "nameserver" > /etc/resolv.conf
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: routed
    parent: enp3s0
    type: nic
name: routed

@simos I’m not sure if you’re interested in updating but just letting you know how to get this working for Debian.

Thank you for these very detailed explanations, I really appreciate them. I don’t think that disabling DHCP is required for Debian, because it’s just not enabled by default. We can probably completely get rid of the “” part from the profile configuration.

So with that background, you can perhaps see why I am a little confused why you are enabling NAT on your LXD host, as the primary reason to use routed NICs is when you don’t want to use NAT.

Oh, I didn’t intend to enable NAT, it was just an observation that enabled NAT made outgoing connections available. I didn’t know if this information would be useful or not, as I’m still wrapping my head around how this setup works exactly.

It seems the only part that is missing is why my outgoing packets are not routed to my network’s gateway. What is really funny is that I can ping my network’s gateway from the container, yet I cannot ping external addresses for some reason. It MIGHT be some misconfiguration on the part of my network, but I’m not sure of this yet. Any ideas/suggestions as to what it could be would be welcome. I’ll keep diagnosing this, and if I finally find a working setup, I’ll describe it here along with the working configuration.

I’ve found that the LXD Debian images have DHCP enabled by default, and while it cannot succeed, and will leave the preconfigured IPs and routes intact, it does delay the boot and prevents lxc shell <instance> from working for a minute or so after starting the instance.

I’d look at using tcpdump when pinging to check which interface the packets are going out of, check the source address hasn’t been mangled by NAT, and that return packets are arriving.

Thanks, I updated the post with the LXD profile that is suitable for Debian.


When I add one more route on the host, with ip route add default via pointing at my gateway, then access to the internet from the container is restored.

These are the routes on the host that are set up by default after starting the container:

default via dev eth0 proto dhcp src metric 100
dev eth0 proto kernel scope link src
dev eth0 proto dhcp scope link src metric 100
dev eth1 proto kernel scope link src
dev vethc5cebe03 scope link

Could it be that there is something wrong with how LXD is setting up routes on the host?

And these are routes in the container:

default via dev eth0
dev eth0 scope link

@simos There is one error in this configuration: /etc/resolv.conf shouldn’t be modified like that, because it will be overwritten after restarting the resolvconf service. A better way would be to use:

    - echo 'nameserver' > /etc/resolvconf/resolv.conf.d/tail
    - systemctl restart resolvconf

The routes in the container are correct. The routed NIC type sets up a link-local default route just to get the traffic from the container to the host.

After that it isn’t LXD’s responsibility anymore, and it’s up to the host’s routing table to route traffic as needed (it assumes that the host is set up to have external connectivity, as lxdbr0 does).

I’m a bit confused why you seem to have a default route on the host via dev eth0; if that isn’t your gateway, then that is likely causing the issue, as your LXD containers are being published on eth1.

Although that would explain why enabling NAT helps: if traffic is going out of eth0, then it’s likely that the gateway won’t know how to route the return traffic (and thus hiding it behind the host’s IP on eth0 helps).

Ooooh, I think everything just fell into place. As I mentioned in the post referenced at the beginning, I’m running this in Vagrant, so it probably has its own network set up, with its own default gateway. Interesting.


You should decide how you want your network routed, and then if you need to enable NAT, only do it on the external interface and not the internal one.
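If NAT does turn out to be needed, a hedged iptables example of scoping the masquerading to the external interface only; the interface name eth0 and the source subnet 192.0.2.0/24 are placeholders for your actual external interface and container subnet:

```shell
# Masquerade only traffic that leaves via the external interface eth0
iptables -t nat -A POSTROUTING -o eth0 -s 192.0.2.0/24 -j MASQUERADE
```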

Yeah, I will look into how networking in Vagrant/VirtualBox works, and try to write up full instructions on how to make this work correctly in Vagrant, because it’s often a good place to test things like LXC/LXD. Thank you for your help, I hope I will be able to contribute back :slight_smile: .

Thanks! :slight_smile:

@tomp Just FYI I described the final working setup here: I hope it will be helpful for someone.


Thanks, I updated the post accordingly.