Ah interesting, but let's back up for a moment and review what you are trying to achieve.
The default networking mode in LXD is to set up a virtual bridge (or switch) called `lxdbr0` that provides a private DHCP and DNS server (via dnsmasq). Containers then use the bridged NIC device that is in the default profile and get allocated either a dynamic or static IP via DHCP. The `lxdbr0` network also sets up NAT rules to allow the containers to get outbound external access by 'hiding' behind the host's IP and MAC address.
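You can inspect this default setup with the standard LXD commands:

```shell
lxc network show lxdbr0     # bridge config, including NAT and DHCP settings
lxc profile show default    # shows the bridged eth0 NIC device containers inherit
```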
This is usually enough to get up and running. But sometimes you might want the containers to actually 'join' the external network as if they were real machines on the network. In this case using NAT is not desirable as it masks all of the containers behind the host's IP.
There are a few options available to get the containers to join the external network without NAT:
- Convert your host's external interface into a switch/bridge (e.g. `br0`), move the host's IP addressing to that bridge, and add the host's external interface to it. Then have LXD containers also connect to that bridge, effectively joining them to the external network at layer 2. You can do this using `lxc config device add <instance> eth0 nic nictype=bridged parent=br0`. Your containers would then rely on DHCP and DNS services from the external network as if they were real machines, and each would have its own MAC address on the network.
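As a sketch, assuming an ifupdown-based host whose external interface is `enp5s0` (the interface name is a placeholder; adjust for your system, and note this needs `bridge-utils` installed), the bridge could be defined in `/etc/network/interfaces` like this:

```
# /etc/network/interfaces (fragment) - hypothetical interface names
auto br0
iface br0 inet dhcp
    bridge_ports enp5s0   # enslave the external NIC into the bridge
    bridge_stp off
    bridge_fd 0

# the external interface itself carries no IP configuration
iface enp5s0 inet manual
```

The host's IP then lives on `br0`, and LXD instances attach to `br0` via `nictype=bridged`.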
- Converting your host's external interface and addressing to a bridge can be complex, especially if done on a remote machine, as it can involve losing network access. Instead, another option is to use the `macvlan` NIC type with a parent of the external interface. This achieves a similar result to the external bridge above without setting up another bridge. However it comes with a restriction: the containers are not allowed to communicate with the host. This may or may not be a problem depending on your use case.
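A minimal sketch, assuming an instance named `c1` and a host external interface `enp5s0` (both hypothetical names):

```shell
lxc config device add c1 eth0 nic nictype=macvlan parent=enp5s0
```

The instance then gets its own MAC address on the external network and can use its DHCP/DNS, just without host-to-instance traffic.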
- Some external networks do not allow a single host to use more than one MAC address per port. In this case, using an external bridge or `macvlan` is not possible because each instance would have its own MAC address. Here we can use either the `routed` or `ipvlan` NIC types. The latter has a similar restriction to `macvlan` in that it prevents communication with the host. But both of these types allow specific designated IPs to be advertised onto the external network as being owned by the host's MAC address, and then routed into instances without needing to use NAT.
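As a hedged sketch (the instance name `c1`, parent interface `enp5s0`, and address `192.0.2.10` are all placeholders), a `routed` NIC with a statically designated IP might be added like this:

```shell
lxc config device add c1 eth0 nic nictype=routed parent=enp5s0 ipv4.address=192.0.2.10
```

The same `ipv4.address` option applies to `nictype=ipvlan`; in both cases the IP must be static because it is the host, not DHCP, that announces it to the network.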
So with that background, you can perhaps see why I am a little confused about why you are enabling NAT on your LXD host, as the primary reason to use `routed` NICs is when you don't want to use NAT.
The way that `routed` and `ipvlan` advertise their IPs onto the network is using proxy ARP/NDP (also called neighbour proxy): it effectively asks the host to claim ownership of the IPs on the external network. This is why the instances using these NIC types need to have static IPs assigned, and why, as a convenience, LXD preconfigures the IPs on the interfaces (as DHCP won't work).
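For IPv4, this amounts to something along these lines on the host (hypothetical IP and interface names; LXD manages this for you, so this is purely illustrative):

```shell
# claim a specific IP on the external network via a proxy ARP entry,
# so the host answers ARP requests for it with its own MAC address
ip neigh add proxy 192.0.2.10 dev enp5s0

# a host route then steers traffic for that IP towards the instance's veth
ip route add 192.0.2.10 dev veth-c1
```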
The vast majority of users will use DHCP with bridged or macvlan networking.
Different distros use different networking configuration systems with their own subtle behaviors, and using cloud-init further introduces additional restrictions.
In this case it appears that Debian's network setup doesn't like the IPs being preconfigured, and cloud-init (to my knowledge anyway) doesn't offer a way to clear the IPs before running the network setup. However, perhaps there is a pre-script which could be run to achieve this.
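One possibility, as a sketch (assuming the instance's NIC is `eth0`), would be a cloud-init `bootcmd` that flushes the preconfigured addresses early in boot, before the distro's network setup runs; whether the timing works out would need testing:

```
#cloud-config
bootcmd:
  # flush any addresses LXD preconfigured on the NIC (hypothetical workaround)
  - ip addr flush dev eth0
```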
Another option would be to extend LXD with an option to not preconfigure the IPs on the interface when passing it into the instance, and let the network config inside the instance set it up.
One thing to point out, though, is that, as we've seen, unlike Ubuntu with netplan, Debian does not clear the IPs on the interfaces before setting them up. This means that in actual fact we do not need cloud-init to configure networking at all; it's already set up. All we need to do is set the DNS servers. So if cloud-init can be configured to just set up the DNS servers, that should suffice.
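For example, a hedged sketch of a cloud-config that only writes the resolver configuration and leaves the interface addressing alone (the nameserver address is a placeholder, and this assumes nothing else in the instance manages `/etc/resolv.conf`):

```
#cloud-config
# only set DNS; do not touch interface addressing
write_files:
  - path: /etc/resolv.conf
    content: |
      nameserver 192.0.2.1
```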