Containers w/ multiple NICs, routing, addressing

Hi!

I have a 3 node LXD cluster set up and running w/ OVN overlays. What I’m after is provisioning both containers and VMs that may have 2 or more connections to these overlays or a physical bridge.

To start, I have two profiles that define the interfaces into two different OVN networks as follows:

Internet:

config: {}
description: internet
devices:
  internet0:
    name: internet0
    network: ovn-internet0-0
    type: nic
name: net-ovn-internet0-0

internal:

config: {}
description: mgmt
devices:
  mgmt0:
    name: mgmt0
    network: ovn-mgmt0-0
    type: nic
name: net-ovn-mgmt0-0
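
For reference, the profiles were created with the usual lxc profile commands, assuming the two OVN networks already exist:

lxc profile create net-ovn-internet0-0
lxc profile edit net-ovn-internet0-0   # paste the YAML above
lxc profile create net-ovn-mgmt0-0
lxc profile edit net-ovn-mgmt0-0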

I launch a container and apply these profiles with:
lxc launch ubuntu:22.04 -p net-ovn-internet0-0 -p net-ovn-mgmt0-0 test01

The container launches successfully; however, only one interface (the first profile applied, perhaps?) acquires an IP address. Investigating, I find that /etc/netplan/50-cloud-init.yaml only references one interface:

network:
    version: 2
    ethernets:
        internet0:
            dhcp4: true

If I add the second interface to that file (or to an override file, e.g. 60-override.yaml) and apply it, the second interface comes up and works as expected.
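
For example, a drop-in like this inside the container does it for the second NIC:

# /etc/netplan/60-override.yaml
network:
    version: 2
    ethernets:
        mgmt0:
            dhcp4: true

followed by a netplan apply.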

Question #1: Is there some way I missed to do this without adjusting netplan inside the instance?

Next topic is routing: I’d like the hosts to have specific routes for one of the networks and a default route out the other. With DHCP I only get a default. I tried adding specific routes by setting ipv4.routes on the internal interface (mgmt0) in the profile applied to the container, but this appeared to cause some problems with the OVN network, almost as if that route were placed in the OVN routing config with a bad next hop (possibly back to the host?). Even after deleting all instances with that profile attached, the destination continued to be black-holed; the only way I was able to fix it (not knowing much at all about OVN/OVS) was to delete the OVN network in LXD and recreate it. I did not test this thoroughly, so I admit I’m speculating as to what is happening.
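
For reference, roughly what I had tried in the internal profile (the 10.20.0.0/16 prefix is just a placeholder). Reading the docs again, ipv4.routes on an OVN NIC seems to route the listed subnets towards the instance inside OVN rather than pushing routes into the guest, which would line up with the black-holing I saw:

devices:
  mgmt0:
    ipv4.routes: 10.20.0.0/16
    name: mgmt0
    network: ovn-mgmt0-0
    type: nic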

Question #2: What is the best way to give an instance specific routes that apply to that instance only?

Question #3: I’ve noticed that with VMs the interfaces are configured but named differently. They behave similarly though. Any way to persist the interface names that are in the profiles?

Thanks!
Greg


LXD images only have eth0 configured by default to use DHCP/SLAAC.
If you want custom configuration then you can pass cloud-init config from LXD (assuming you’re using an instance image that contains cloud-init, such as one of our /cloud variant images, e.g. images:ubuntu/jammy/cloud, or ubuntu:22.04, which includes cloud-init by default).

See Linux Containers - LXD - Has been moved to Canonical for more info.
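
For example, a minimal sketch using the cloud-init.network-config key (user.network-config on older LXD releases) and the interface names from your profiles; the net-cloud-init profile name is just an example:

config:
  cloud-init.network-config: |
    network:
      version: 2
      ethernets:
        internet0:
          dhcp4: true
        mgmt0:
          dhcp4: true    # note: both on DHCP means two default routes - see below
description: cloud-init network config for both NICs
devices: {}
name: net-cloud-init

You’d then add -p net-cloud-init to the launch command.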

Again, if you have multiple interfaces, you’ll need to use manual configuration for at least one of them and not use DHCP, otherwise you’ll end up with two default routes. This will be bad.

I suggest disabling DHCP on one of the networks and using static config from the cloud-init config.
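
Something along these lines in the network-config (the addresses and prefixes are placeholders for whatever ovn-mgmt0-0 actually uses):

network:
  version: 2
  ethernets:
    internet0:
      dhcp4: true                 # keeps the DHCP default route
    mgmt0:
      dhcp4: false
      addresses: [10.20.0.10/24]  # placeholder static address on the mgmt network
      routes:
        - to: 10.30.0.0/16        # placeholder prefix reachable via mgmt
          via: 10.20.0.1          # placeholder next hop (the OVN router on that network)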

At this time LXD doesn’t provide the ability to say “I want DHCP, but I don’t want a default route added”. It also doesn’t have the ability to specify custom static routes in the DHCP response.

You could open an issue here about that: Issues · lxc/incus · GitHub

For now it must be done manually, or through cloud-init config in the profile/instance config.

One thing that isn’t clear with a quick look through the OVN manual (Ubuntu Manpage: ovn-nb - OVN_Northbound database schema) is whether similar options exist for IPv6.

Yes, this is because LXD doesn’t have direct control over the interface name selection for VMs (as they are running their own guest operating system). However, we do set a specific PCIe bus position which then usually translates into enp5s0 in modern Linux guests. This is what the LXD image templates expect.

We do have the lxd-agent process that runs inside the VM guest and communicates with the LXD server on the host. This is what allows lxc exec to work for VMs. And it also allows sharing configuration info between LXD server and the guest.

Because of this we do have an agent.nic_config setting which can be set to true in a profile/instance (see Linux Containers - LXD - Has been moved to Canonical).

This will tell the lxd-agent to try and rename the network interface to the name specified in the LXD instance config. This may or may not work depending on how the network is being configured inside the guest, as sometimes this can cause race conditions with the OS trying to set up the network interface itself.
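
For example (VM instances only; my-vm and my-vm-profile are placeholder names, and the lxd-agent reads this at startup so a restart is likely needed):

lxc config set my-vm agent.nic_config true
# or in a profile that is applied to your VMs:
lxc profile set my-vm-profile agent.nic_config true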

Thanks. Good info above!

RE: My Q#1 above, adding a bit of cloud-init and netplan magic via a profile gets both interfaces to DHCP without the need for placing a file on the instance. Great!

Adding to the cloud-init magic I mentioned above, I can get this to work with a caveat that is a chicken-and-egg scenario. To put a route in, you need to know the next hop. To know the next hop, you either need prior knowledge of the network(s) the instance will be on, or you need some way of grabbing the DHCP-provided defaults and using them as hints. Maybe this is something that can be whipped up with a good netplan config; so far I’m not seeing it, but maybe it’s there.
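
The closest I can get with netplan alone (assuming the default networkd renderer) still relies on that prior knowledge; a sketch with placeholder prefixes and next hop:

network:
  version: 2
  ethernets:
    mgmt0:
      dhcp4: true
      dhcp4-overrides:
        use-routes: false      # take the DHCP address but ignore DHCP-supplied routes (incl. the default)
      routes:
        - to: 10.30.0.0/16     # placeholder prefix
          via: 10.20.0.1       # placeholder next hop - still has to be known ahead of time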

I’m glad you pointed out the OVN link. I agree it’s not clear on what it will do for IPv6. Either way, DHCP control over host routes seems like a nice feature even if limited to v4 for now.

Thanks!
