So far I have managed to get teaming and vlan-tagged interfaces working on hosts and containers on Debian 10 as follows:
Host config:
auto bond0
iface bond0 inet manual
    pre-up /usr/bin/teamd -d -o -g -t bond0 -f /etc/network/teamd-bond0.conf
    post-down /usr/bin/teamd -k -t bond0
    post-up ip link set bond0 up

auto bond0.1234
iface bond0.1234 inet manual
    post-up ip link set bond0.1234 up

auto lxcbr1234
iface lxcbr1234 inet manual
    bridge_ports bond0.1234
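The stanzas above reference /etc/network/teamd-bond0.conf but don't show its contents; a minimal sketch of that file might look like this (the LACP runner and the port names eno1/eno2 are assumptions, adjust to your runner mode and NICs):

```
{
    "device": "bond0",
    "runner": { "name": "lacp", "active": true, "fast_rate": true },
    "link_watch": { "name": "ethtool" },
    "ports": { "eno1": {}, "eno2": {} }
}
```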
I’m not sure I can do that; I need it for the host server.
In addition, I can have several containers on the same host that need to access the same vlans.
auto bond0
iface bond0 inet manual
    pre-up /usr/bin/teamd -d -o -g -t bond0 -f /etc/network/teamd-bond0.conf
    post-down /usr/bin/teamd -k -t bond0
    post-up ip link set bond0 up

auto lxcbr0
iface lxcbr0 inet manual
    bridge_ports bond0
    post-up ip link set lxcbr0 up

auto lxcbr0.1234
iface lxcbr0.1234 inet static
    address 172.16.12.34
    netmask 255.255.255.0
    gateway 172.16.12.1
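For reference, the container side in these tests is a veth attached to a bridge; the network section of the container config would look roughly like this (a sketch assuming LXC 3.x key names, with the per-vlan bridge lxcbr1234 from the first setup and a placeholder address):

```
# network section of /var/lib/lxc/<name>/config
lxc.net.0.type = veth
# attach the veth to the vlan-tagged bridge on the host
lxc.net.0.link = lxcbr1234
lxc.net.0.flags = up
lxc.net.0.ipv4.address = 172.16.12.56/24
lxc.net.0.ipv4.gateway = 172.16.12.1
```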
Then I tried the same config but with link = bond0: the container starts and is reachable from outside the host server, but it killed the connection to the host…
It looks like you’re right.
I just tried to start another container on the host by copying the first one, changing only the IP address, and it doesn’t start.
I think the best approach here is to create the vlan interface on the host manually (as you already are), and then use the macvlan NIC type to connect the container to that interface.
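In the container config, the macvlan variant would replace the bridge-attached veth with something like this (again a sketch assuming LXC 3.x key names; the address is a placeholder):

```
# network section of /var/lib/lxc/<name>/config
lxc.net.0.type = macvlan
# bridge mode lets containers on the same parent interface talk to each
# other (but, by design, not to the host itself)
lxc.net.0.macvlan.mode = bridge
# attach directly to the host's vlan-tagged interface, no lxcbr1234 needed
lxc.net.0.link = bond0.1234
lxc.net.0.flags = up
lxc.net.0.ipv4.address = 172.16.12.56/24
lxc.net.0.ipv4.gateway = 172.16.12.1
```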
I’m not sure I understand the benefit of using macvlan compared to my initial setup with bond0 + bond0.1234 + lxcbr1234.
Why change to macvlan if I still have to set up an intermediate vlan-tagged interface on the host?
It just removes the need for an interim bridge, and directly connects the container to the vlan interface (fewer ‘moving parts’ and better performance). But both approaches would work fine, so if it fits your needs then stick with the bridge, for sure.
Where are you trying to reach the container from? macvlan doesn’t allow the container to communicate with its own host, only with the external network (this is a built-in security feature/limitation of the macvlan NIC type).
Have you done packet traces on the external interface, vlan interface and container interface to see where packets are going? Have you got any firewalls on your host?
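To localise where the packets disappear, something along these lines can be run at each hop (a sketch; interface names are taken from the configs above, the container's interface name is assumed to be eth0, and tcpdump needs root):

```
# on the host: watch tagged traffic on the team device
tcpdump -ni bond0 vlan 1234
# on the host: watch the vlan interface (traffic here is already untagged)
tcpdump -ni bond0.1234 arp or icmp
# inside the container:
tcpdump -ni eth0 arp or icmp
```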
Interesting.
I can reach the host server on vlan 1234 from any of our servers, both inside and outside vlan 1234.
I can reach the container at 172.16.12.56 from other servers, but only from inside vlan 1234; if I try from a server outside vlan 1234, I get no answer.
NB: with my initial setup (bond0 + bond0.1234 + bridge) I don’t have this issue.
That’s what I would expect: the server only exists inside vlan 1234. Otherwise the vlan is not serving its purpose of isolating traffic.
As for reaching the host’s vlan1234 IP from other non-vlan 1234 ports, I suspect you’re getting caught out by Linux’s default behaviour of replying to ARP requests for the host’s IP on any interface.
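That default behaviour can be tightened with the arp_ignore/arp_announce sysctls; a sketch (the file name is arbitrary):

```
# /etc/sysctl.d/99-arp.conf
# Only answer ARP requests if the target IP is configured on the interface
# the request arrived on, instead of answering on any interface:
net.ipv4.conf.all.arp_ignore = 1
# When sending ARP requests, prefer a source address belonging to the
# outgoing interface's subnet:
net.ipv4.conf.all.arp_announce = 2
```

Apply with `sysctl --system` (or reboot) and retest from outside vlan 1234.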
But at this point I’m starting to question whether you really want/need to use vlans, if you don’t want to isolate your traffic from the main network.
Well, I just wanted to focus on the management vlan we use for hosts/containers.
Once working, the next step is to reproduce the setup to add several other vlans for other purpose.
We use several management vlans for servers and containers depending on their physical location (let’s say vlans 1234 and 5678).
In addition to that, we have other vlans providing services we want to set up in containers.
Management vlans do talk to each other thanks to routing rules managed at the network level, while services vlans are isolated from all other vlans.
with veth:
- from 1234 we can talk to servers and containers on 5678
- from 5678 we can talk to servers and containers on 5678
with macvlan:
- from 1234 we cannot talk to servers and containers on 5678
- from 5678 we can talk to servers and containers on 5678
So far we have only talked about configuring a management vlan (1234) on both the host server and a container.
Once this is working, I think it will be pretty easy to duplicate the configuration for the services vlans.
So the only way I know that should work is if the default gateway IP on vlan 1234 is able to route packets back into vlan 5678 and handle the return packets. In other words, the hosts on each vlan aren’t communicating directly, but routing through a central router between the subnets.
When you say “managed at the network level” what specifically do you mean by that?