LXD 3.19 and routed networking mode: configuration example needed

I am trying to configure the new routed NIC type for use with some containers.

The only reference I could find with any example was in the Routed networking mode section:

So I stopped my existing containers and, per the example in the above URL…

I used these commands:

$ lxc config device add cn1 eth0 nic nictype=routed ipv4.address=10.0.2.21
Device eth0 added to cn1
$ lxc start cn1
$ lxc list cn1

Running lxc list cn1 shows that cn1 now has the IP address 10.0.2.21.

However, I still have the t1-br bridge that cn1 was originally built to run under, and that t1-br bridge now has the IP address 10.0.2.1.

So I am befuddled and lost as to how to proceed :wink:

It seems the LXD container MUST already have been created before the routed NIC is applied to it (cn1 in my case).
If so, then the LXD bridge (in my case t1-br) is still alive.

At least the release-note example commands lead me to think so, as they only list:

$ lxc start c1
$ lxc list cn1

So what does the LXD “profile” need to look like to use the routed NIC instead of a bridge?

Is there a complete example that takes a system which has been running LXD containers with the traditional lxdbr0 bridge through everything needed to re-configure those existing containers to use the new routed NIC?

Hi @bmullan

So routed network mode works as follows:

  • It uses liblxc under the hood to create a veth pair of interfaces, and moves one side of the veth pair into the container.
  • It then preconfigures the static IP addresses you have specified onto the interface.
  • Next it sets up default routes in the container pointing to 169.254.0.1 for IPv4 and fe80::1 for IPv6. These are link-local addresses and serve only to get packets between the container and the host.
  • On the host, static routes are added for the container’s IP pointing to the host-side of the veth pair.
  • Additionally the link-local gateway IPs are created on every host-side veth pair.
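
For reference, a rough manual equivalent of this setup using plain ip commands might look like the following (the veth interface names and the PID placeholder are illustrative, and this is only a sketch, not LXD's actual implementation):

# on the host: create the veth pair and move one end into the container
ip link add vethHOST type veth peer name vethCONT
ip link set vethCONT netns <container-pid> name eth0

# inside the container: the static IP and the link-local default route
ip addr add 10.0.2.21/32 dev eth0
ip route add 169.254.0.1 dev eth0
ip route add default via 169.254.0.1 dev eth0

# on the host: the link-local gateway IP and a static route to the container
ip addr add 169.254.0.1/32 dev vethHOST
ip route add 10.0.2.21/32 dev vethHOST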

This means you should avoid having any startup services inside the container that remove IPs from the interfaces and attempt to do DHCP. You should also be aware that, unlike ‘bridged’ mode, using static IPs from an existing DHCP range does not create static reservations in the parent bridge’s local DHCP server, so you should ensure the DHCP range does not overlap with the static IPs you assign.
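
For Ubuntu cloud images, one common way to stop the guest’s own network configuration from interfering with the preconfigured IPs is to disable cloud-init’s network handling inside the container; a minimal sketch (file name illustrative):

echo 'network: {config: disabled}' > /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg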

Note: The routed network mode does not require you to specify a parent option. This means that the IPs you specify for the container do not have to be part of any subnet on the host, and you can instead choose to propagate these routes using a routing daemon.
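
For example, if the host’s address on the upstream network were 192.0.2.10 (purely illustrative), the simplest form of such propagation, short of running a routing daemon such as BIRD or FRR, would be a static route added on the upstream router:

ip route add 10.0.2.21/32 via 192.0.2.10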

However, you can optionally specify a parent option, in which case proxy ARP and proxy NDP entries are added to that parent interface, “advertising” those IPs at layer 2 on the parent interface’s network. If those IPs are in the same subnet as the parent interface’s network, this acts as a kind of ‘bridge’, allowing those containers to appear on the parent interface’s network. However, you do not need to use any bridge, and all the containers will appear to be using the host’s parent interface’s MAC address.

I’m not sure what you mean about the bridge’s IP, 10.0.2.1, as routed mode will not alter a bridge’s IP, especially one that is not its parent. The key point here is that routed mode doesn’t need any bridges to operate.

So if I start with a container config like so:

devices:
  eth0:
    ipv4.address: 10.138.198.132
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic

With the lxdbr0 interface having an IP of 10.138.198.1.

Then I can change the container’s nictype to “routed”:

devices:
  eth0:
    ipv4.address: 10.138.198.132
    name: eth0
    nictype: routed
    parent: lxdbr0
    type: nic
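
If the eth0 device is defined directly on the container rather than only in a profile, a sketch of the equivalent CLI change (with the container stopped) would be:

lxc config device set c1 eth0 nictype routed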

Starting the container then shows on the host:

ip r
10.138.198.0/24 dev lxdbr0 proto kernel scope link src 10.138.198.1 linkdown 
10.138.198.132 dev veth0ff2a75d scope link 

You can also see the container’s IP being ‘advertised’ via proxy ARP to the parent interface.

ip neigh show proxy
169.254.0.1 dev veth0ff2a75d  proxy
10.138.198.132 dev lxdbr0  proxy

And in the container you can see the default link-local routes created:

lxc exec c1 ip r
default via 169.254.0.1 dev eth0 
169.254.0.1 dev eth0 scope link 
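
If an ipv6.address were also set, you would expect a similar IPv6 default route via the fe80::1 link-local gateway, something like:

lxc exec c1 ip -6 r
default via fe80::1 dev eth0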

Note: In this example, if I removed the “parent: lxdbr0” part from my container’s config, it would prevent other containers connected to the bridge from communicating with my container. This is because, without the parent option, the container is not advertised at layer 2 on the lxdbr0 interface.

However the container would still be able to communicate with the host via its default routes.
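
A quick way to confirm that container-to-host path is to ping the link-local gateway from inside the container:

lxc exec c1 ping 169.254.0.1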


There’s also some example usage on the PR that added the feature:


Routed mode doesn’t need a bridge to operate, so you could remove t1-br and the LXD managed network entirely if no other containers are using it.


@tomp

Tom

Thank you so much. I’ve been wanting to try the routed NIC feature but there wasn’t enough info in one spot to answer the various questions.

This is great and thanks for taking the time to put it together. I’m sure a lot of others will appreciate this as well.

Brian

@tomp

Your statement:

“So if I start with a container config like so”

Do you have an example for this using

$ lxc network edit rnic

Where rnic was previously created by

$ lxc network create rnic

And if I wanted a profile for this:

$ lxc profile copy default pr-rnic

What changes would need to be made to that profile?

$ lxc profile edit pr-rnic

Because the routed nic type doesn’t require an LXD managed network, I would do the following:

Assuming that:

  • Interface enp3s0 is a physical port connected to a network 192.168.1.0/24.
  • The default gateway on the physical network is 192.168.1.1.
  • An existing default profile has a bridged NIC connected to lxdbr0.

Copy the profile and remove the bridged NIC from the new profile:

lxc profile copy default rnic
lxc profile device remove rnic eth0

Add a partially configured routed NIC to the profile; the parent can optionally be specified or omitted:

lxc profile device add rnic eth0 nic nictype=routed parent=enp3s0
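
The resulting profile should then contain a device entry along these lines (a sketch of what lxc profile show rnic would include):

devices:
  eth0:
    nictype: routed
    parent: enp3s0
    type: nic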

Now create a container from the rnic profile. Note that I only init the container rather than launch it, as I need to add IPs to it first (though the container would start without IPs):

lxc init ubuntu:18.04 c1 -p rnic
lxc config device override c1 eth0 ipv4.address=192.168.1.200
lxc start c1
ping 192.168.1.200
lxc exec c1 ping 192.168.1.1
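
If this works, the host’s routing table should contain a /32 route for the container pointing at the host side of the veth pair, similar to the earlier example (veth name illustrative):

ip r
192.168.1.200 dev veth0ff2a75d scope link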

Finally, the managed LXD network isn’t needed anymore (unless other containers are using it), so:

lxc network delete lxdbr0

@tomp
Thank you. That explains a lot of what I was seeing, as I was missing some steps.

I use this configuration in LXD and netplan, which works fine:

lxc config device add c1routed eth0 nic nictype=routed parent=enp3s0 ipv4.address=192.168.1.200

Note: The parent option is important if you want to make your container appear to be on the host’s external network at layer 2, rather than relying on the ISP routing traffic for your IPs to your host directly. You haven’t provided your LXD container config, so I can’t tell at this stage.

Then in netplan:

network:
    version: 2
    ethernets:
        eth0:
          addresses:
            - 192.168.1.200/32
          nameservers:
            addresses: [8.8.8.8]
          routes:
            - to: 0.0.0.0/0
              via: 169.254.0.1
              on-link: true
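
After writing the file inside the container, the configuration can be applied and verified with the standard netplan tooling:

netplan apply
ip r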

Can you ping 8.8.8.8 from your container? (You say “ping works fine” but don’t state what you are able to ping.)

If you are able to ping externally from your container, then the routed configuration is working and the most likely issue is a firewall on your host (that is preventing all routed traffic except ICMP ping) or on your wider network.

I am very sorry for the confusion. @tomp was kind enough to offer a very quick and comprehensive answer to my question, yet I had to delete the original question because it included some network information I did not want to share. So this is a redacted version. If you are reading this thread, this is the question that tomp answered in his comment above. I will study his answer and provide feedback in another comment as soon as I find the time.

For me, this doesn’t happen. No routes are created inside the container. I used the commands you list below:

  1. lxc profile copy default rnic
  2. lxc profile device remove rnic eth0
  3. lxc profile device add rnic eth0 nic nictype=routed parent=ens3
  4. lxc init ubuntu:18.04 c1 -p rnic
  5. lxc config device override c1 eth0 ipv4.address=[MY-PUBLIC-IP]
  6. lxc start c1
  7. lxc exec c1 ip r

The last command turns up nothing. No routes are created inside the container. Can I set them up manually? Because when I use the following as my /etc/netplan/50-cloud-init.yaml, I can ping in and out, but name resolution doesn’t work.

network:
    version: 2
    ethernets:
        eth0:
          addresses: [MY-PUBLIC-IP]
          nameservers:
            addresses: [8.8.8.8]
          routes:
            - to: 0.0.0.0/0
              via: 169.254.0.1
              on-link: true

And ip r only comes up like this:

default via 169.254.0.1 dev eth0 proto static onlink 

The second line, beginning with the IP, is missing.

I use Bionic and the LXD Snap, currently at 3.22.

This Netplan gives me the same result:

network:
    version: 2
    ethernets:
        eth0:
          addresses: [MY-PUBLIC-IP]
          gateway4: 169.254.0.1
          nameservers:
            addresses: [8.8.8.8]

I can ping fine, but name resolution doesn’t work. And “ip r” only produces this line, nothing more:

default via 169.254.0.1 dev eth0

When I change /etc/netplan/50-cloud-init.yaml inside the container to the following, I still don’t have DNS, but there are some changes:

network:
    version: 2
    renderer: networkd
    ethernets:
        eth0:
          addresses: [My-Public-IP/32]
          dhcp4: no
          nameservers:
            addresses: [8.8.8.8]
          gateway4: 169.254.0.1
          routes:
            - to: 169.254.0.1/32
              via: 169.254.0.1
              scope: link

“ip r” now results in this:

default via 169.254.0.1 dev eth0 
default via 169.254.0.1 dev eth0 proto static 
169.254.0.1 dev eth0 scope link 

But still no name resolution, even though it looks fine to me:

Link 73 (eth0)
      Current Scopes: DNS
       LLMNR setting: yes
MulticastDNS setting: no
      DNSSEC setting: no
    DNSSEC supported: no
         DNS Servers: 8.8.8.8

journalctl says:

Using degraded feature set (UDP) for DNS server 8.8.8.8.

or

Using degraded feature set (TCP) for DNS server 8.8.8.8.

Btw, this is a KVM guest rented from Netcup, where I purchased an additional IPv4 address, which Netcup itself says should be added like this: https://www.netcup-wiki.de/wiki/Zus%C3%A4tzliche_IP_Adresse_konfigurieren

I am impressed. You guessed right: it was the firewall. Everything works fine when I disable the ufw firewall on the LXD host machine. Now I just have to figure out how to configure ufw properly. This might be useful to others, since I don’t remember changing anything from the default Ubuntu Bionic configuration except for the prerouting rules.
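
For anyone else hitting this: ufw blocks forwarded traffic by default, so a common starting point is to change its default forward policy (a broad sketch; you may want to restrict this to specific interfaces or subnets instead):

# in /etc/default/ufw set:
DEFAULT_FORWARD_POLICY="ACCEPT"

# then reload ufw:
ufw reload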
