LXC vlan interface on host with teaming

Hi there,

Until now I have managed to get teaming and vlan-tagged interfaces working on the host and in containers on Debian 10 by doing the following:

Host config:

auto bond0
iface bond0 inet manual
    pre-up /usr/bin/teamd -d -o -g -t bond0 -f /etc/network/teamd-bond0.conf
    post-down /usr/bin/teamd -k -t bond0
    post-up ip link set bond0 up

auto bond0.1234
iface bond0.1234 inet manual
    post-up ip link set bond0.1234 up

auto lxcbr1234
iface lxcbr1234 inet manual
     bridge_ports bond0.1234
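
(The teamd config file referenced above isn't shown in the thread; for completeness, a minimal /etc/network/teamd-bond0.conf would look roughly like the following, with eno1/eno2 standing in for whichever physical ports are actually teamed and the runner depending on what the switch side expects:)

{
    "device": "bond0",
    "runner": { "name": "lacp" },
    "link_watch": { "name": "ethtool" },
    "ports": { "eno1": {}, "eno2": {} }
}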

Container config:

lxc.net.0.type = veth
lxc.net.0.flags = up
lxc.net.0.link = lxcbr1234
lxc.net.0.ipv4.address = 172.16.12.56/24
lxc.net.0.ipv4.gateway = 172.16.12.1

and it works perfectly.
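
(As a quick sanity check of this kind of setup, assuming iproute2 and bridge-utils are installed, something like the following confirms the vlan tag and the bridge membership:)

# show the vlan sub-interface and its tag (should report "vlan protocol 802.1Q id 1234")
ip -d link show bond0.1234

# confirm bond0.1234 is a port of the bridge the container's veth is attached to
brctl show lxcbr1234
bridge link show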

Now I would like to get rid of this bond0.1234 interface and use a vlan interface in the container instead.
I have tried different ways but none of them works.

With this attempt, the container doesn't start:

Operation not permitted - Failed to create vlan interface “vlan1234-0” on “bond0”

lxc.net.0.type = vlan
lxc.net.0.flags = up
lxc.net.0.link = bond0
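
(For context: the vlan NIC type makes LXC create a vlan sub-interface on top of the parent and move it into the container's namespace, roughly the equivalent of doing the following by hand; the names here are just illustrative:)

# roughly what LXC attempts for lxc.net.0.type = vlan (illustrative names/PID)
ip link add link bond0 name vlan1234-0 type vlan id 1234
ip link set vlan1234-0 netns <container-init-pid>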

With this one, the container starts and the interfaces are UP, but I cannot ping the container's IP:

auto bond0
iface bond0 inet manual
    pre-up /usr/bin/teamd -d -o -g -t bond0 -f /etc/network/teamd-bond0.conf
    post-down /usr/bin/teamd -k -t bond0
    post-up ip link set bond0 up

auto lxcbr0
iface lxcbr0 inet manual
    bridge_ports lxcbr0
---
lxc.net.0.type = vlan
lxc.net.0.flags = up
lxc.net.0.link = lxcbr0
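
(When the container starts but can't be pinged like this, it is worth checking that the vlan device inside the container really got the expected tag and parent, and which ports the bridge actually has on the host; a sketch, assuming the container is called c1:)

# inside the container: device, vlan id and addresses
lxc-attach -n c1 -- ip -d addr show

# on the host: which ports are enslaved to lxcbr0 (it needs one carrying the tagged traffic)
brctl show lxcbr0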

Is what I am looking for doable? Am I missing something?

Thanks in advance for your help.

Jeremy

Does it work if you remove the bond0.1234 interface on the host? I'm not sure if you can have multiple interfaces using the same vlan.
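
(That suspicion is easy to test by hand: the kernel refuses a second vlan device with the same ID on the same parent, so while bond0.1234 exists something like this should be rejected:)

# with bond0.1234 already present, a second vlan device with id 1234 on bond0 fails
ip link add link bond0 name test1234 type vlan id 1234
# expected error is along the lines of: RTNETLINK answers: File exists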

I'm not sure I can do that; I need it for the host server.
In addition to that, I can have several containers on the same host which need to access the same vlans.

Did you specify the vlan ID in the LXC config and just miss it off of what you posted here?

Ah yes, I missed it in the first post.

I just tried this. Host:

auto bond0
iface bond0 inet manual
  pre-up /usr/bin/teamd -d -o -g -t bond0 -f /etc/network/teamd-bond0.conf
  post-down /usr/bin/teamd -k -t bond0
  post-up ip link set bond0 up

auto lxcbr0
iface lxcbr0 inet manual
  bridge_ports bond0
  post-up ip link set lxcbr0 up

auto lxcbr0.1234
iface lxcbr0.1234 inet static
  address 172.16.12.34
  netmask 255.255.255.0
  gateway 172.16.12.1

container:

lxc.net.0.type = vlan
lxc.net.0.vlan.id = 1234
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxc.net.0.ipv4.address = 172.16.12.56/24
lxc.net.0.ipv4.gateway = 172.16.12.1

but the container doesn't start.
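
(When a container refuses to start, running it in the foreground with debug logging usually shows the exact network error; a sketch, assuming the container is named c1:)

# run in the foreground with verbose logging to see why the network setup fails
lxc-start -n c1 -F --logpriority DEBUG --logfile /tmp/c1.log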

Then I tried the same config but with link = bond0: the container starts and is reachable from outside the host server, but it killed the connection to the host…

It looks like you're right.
I just tried to start another container on the host by copying the first one and only changing the IP address, and it doesn't start.

I think the best approach here is to create the vlan interface on the host manually (as you already are), and then use the macvlan NIC type to connect the container to that interface.

I'm not sure I understand the benefit of using macvlan compared to my initial setup with bond0 + bond0.1234 + lxcbr1234.
Why change to macvlan if I still have to set up an intermediate vlan-tagged interface on the host?

It just removes the need for an interim bridge, and directly connects the container to the vlan interface (fewer 'moving parts' and better performance). But both approaches would work fine, so if it fits your needs then stick with the bridge for sure.

I must have missed something else.

So, as you suggested, I removed the bridge from the host config to have only bond0 and the vlan-tagged interface for host management:

auto bond0
iface bond0 inet manual
    pre-up /usr/bin/teamd -d -o -g -t bond0 -f /etc/network/teamd-bond0.conf
    post-down /usr/bin/teamd -k -t bond0
    post-up ip link set bond0 up  

auto bond0.1234
iface bond0.1234 inet static
    address 172.16.12.34
    netmask 255.255.255.0
    gateway 172.16.12.1

and set the container in macvlan mode:

lxc.net.0.type = macvlan
#lxc.net.0.vlan.id = 1234
lxc.net.0.flags = up
lxc.net.0.link = bond0.1234
lxc.net.0.ipv4.address = 172.16.12.56/24
lxc.net.0.ipv4.gateway = 172.16.12.1

The host is reachable as expected and the container starts, but the container is not reachable at 172.16.12.56.

Where are you trying to reach the container from? With macvlan, the container cannot communicate with the host, only with the external network (this is a built-in security feature/limitation of the macvlan NIC type).
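
(Aside: if host-to-container traffic is ever needed with this layout, the usual workaround, which LXC does not set up for you, is to give the host its own macvlan interface on the same parent and move the host's IP onto it instead of keeping it on bond0.1234 directly; a rough sketch only:)

# hypothetical workaround: the host talks to the containers via its own macvlan on the same parent
ip link add link bond0.1234 name mvlan-host type macvlan mode bridge
ip addr add 172.16.12.34/24 dev mvlan-host
ip link set mvlan-host up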

Good to know. I'm trying to ping from the host and from another server on the network, and I get no answer from either.

Have you done packet traces on the external interface, vlan interface and container interface to see where packets are going? Have you got any firewalls on your host?
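
(For the packet traces, something along these lines works, watching each hop in turn; interface and container names here are examples:)

# on the host: tagged frames arriving on the bond (-e prints the link-level/VLAN headers)
tcpdump -nei bond0 vlan 1234 and icmp

# on the host: the same traffic after untagging, on the vlan interface
tcpdump -ni bond0.1234 icmp and host 172.16.12.56

# inside the container (replace eth0 with whatever the NIC is called there)
lxc-attach -n c1 -- tcpdump -ni eth0 icmp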

There are no firewall rules.
Traceroute shows packets going to the host server.
An SSH test says “No route to host”.

Interesting.
I can reach the host server on vlan 1234 from any of our servers, both inside and outside vlan 1234.
I can reach the container at 172.16.12.56 from other servers, but only from inside vlan 1234; if I try from a server outside vlan 1234, I get no answer.

NB: with my initial setup (bond0 + bond0.1234 + bridge), I don't have this issue.

That's what I would expect: that the server only exists inside vlan 1234. Otherwise the vlan is not serving its purpose of isolating traffic.

As for reaching the host's vlan 1234 IP from ports outside vlan 1234, I suspect you're getting caught out by Linux's default behaviour of replying to ARP requests for any of the host's IPs on any interface.
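
(If that ARP behaviour turns out to be the culprit, the usual knobs are the arp_ignore / arp_announce sysctls; a sketch to test with before making anything permanent:)

# only answer ARP when the target IP is configured on the interface the request arrived on
sysctl -w net.ipv4.conf.all.arp_ignore=1
# when sending ARP, use an address belonging to the outgoing interface
sysctl -w net.ipv4.conf.all.arp_announce=2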

But at this point I’m starting to question whether you really want/need to use vlans, if you don’t want to isolate your traffic from the main network.

Well, I just wanted to focus on the management vlan we use for hosts/containers first.
Once that is working, the next step is to reproduce the setup to add several other vlans for other purposes.

I’m afraid I don’t understand.

If you connect a container to VLAN 1234, you should expect it to only be reachable from other hosts also in VLAN 1234.

Let me try to clarify.

We use several management vlans for servers and containers depending on their physical location (let's say vlan 1234 and vlan 5678).
In addition to that, we have other vlans providing some services we want to set up in containers.

Management vlans do talk to each other thanks to routing rules managed at the network level, while services vlans are isolated from any other vlan.

with veth:

  • from 1234 we can talk to servers and containers on 5678
  • from 5678 we can talk to servers and containers on 5678

with macvlan:

  • from 1234 we can not talk to servers and containers on 5678
  • from 5678 we can talk to servers and containers on 5678

Until now we have only talked about configuring a management vlan (1234) on both the host server and a container.
Once this is working, I think it will be pretty easy to duplicate the configuration for the services vlans.

So the only way I know that should work is if the default gateway IP on vlan 1234 is able to route packets back into vlan 5678 and handle the return packets. In that case the actual hosts on each vlan aren't communicating directly but are routing through a central router between the subnets.
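
(One way to confirm the traffic really is being routed between the vlans rather than bridged, with names and addresses purely illustrative, is to look at the source MAC of cross-vlan frames reaching the container: if a central router is doing the inter-vlan routing, it will be the router's MAC rather than the originating server's:)

# inside the container: source MAC on cross-vlan traffic should match the vlan 1234 gateway
lxc-attach -n c1 -- tcpdump -nei eth0 icmp

# the gateway's MAC as the container sees it, for comparison
lxc-attach -n c1 -- ip neigh show 172.16.12.1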

When you say “managed at the network level”, what specifically do you mean by that?