The distributed virtual switch of the VM platform only allows a single MAC address in/out of each virtual port, and I want to run multiple containers on the host under the same subnet.
I can work around the limitation by setting up a Linux bridge and some ebtables NAT rules to connect the containers within the host. Then I learned about ipvlan and think it is a better solution for my use case.
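The workaround looks roughly like this; a rough sketch only, where the interface names and the MAC address are placeholders and the exact rule set depends on the environment:

```shell
# create a bridge and attach the host uplink (container veths get attached too)
ip link add br0 type bridge
ip link set eth0 master br0
ip link set br0 up

# rewrite the source MAC of frames leaving via the uplink to the host NIC's
# MAC, so the virtual switch only ever sees one MAC on the port;
# inbound traffic needs matching dnat rules per container IP (omitted here)
ebtables -t nat -A POSTROUTING -o eth0 -j snat \
    --to-src 00:16:3e:00:00:01 --snat-arp --snat-target ACCEPT
```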
I've tried creating an ipvlan L2 interface on the host first and passing it to the container as a physical interface; that worked. But managing such ipvlan L2 interfaces on the host is still a bit troublesome. If LXD supported ipvlan L2 mode, it would be very neat.
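For reference, the manual setup I used looks roughly like this (eth0, ipvl0, and the container name c1 are just example names):

```shell
# create an ipvlan sub-interface in L2 mode on the parent NIC
ip link add link eth0 name ipvl0 type ipvlan mode l2

# hand it to the container as a physical NIC; LXD moves it into the
# container's network namespace when the container starts
lxc config device add c1 eth0 nic nictype=physical parent=ipvl0
```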
The containers run on an LXD cluster, so it is not easy to route specific addresses to the corresponding cluster members. Moreover, there are slight differences between the containers' routing tables, so handling routing on the host machine does not scale well if L3 mode is used.
Having ipvlan L2 mode working keeps things clean; everything looks almost the same as having a Linux bridge or OVS on the host machine.
Thanks, that makes sense. It would allow you to define the IPs inside the container without LXD knowing about them, and if you use a DHCP client that supports client IDs, DHCP could potentially work too.
You mention that setting up routes is a challenge; however, in LXD's ipvlan l3s mode we enable proxy ARP/NDP on the specified parent interface (we add a static route to the host's loopback interface to activate proxy ARP/NDP for the container's IPs). This means the wider LAN does not need to route traffic at L3 to the host; instead, the host responds to ARP/NDP queries for the container's IPs in order to direct traffic to itself. So you would not necessarily need to configure static routes.
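To illustrate the mechanism, here is a hedged sketch of roughly what that amounts to at the kernel level (eth0, 192.0.2.10, and 2001:db8::10 are placeholders; the real implementation lives inside LXD):

```shell
# allow the host to answer ARP on the parent interface for IPs it can route
sysctl -w net.ipv4.conf.eth0.proxy_arp=1

# static route to loopback: gives the kernel a route for the container IP,
# which is what makes proxy ARP respond for that address on eth0
ip route add 192.0.2.10/32 dev lo

# the IPv6 side uses proxy_ndp plus an explicit neighbour proxy entry
sysctl -w net.ipv6.conf.eth0.proxy_ndp=1
ip -6 neigh add proxy 2001:db8::10 dev eth0
```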
One thing that l3s mode won't support is DHCP, even when using a client ID, so as long as you don't need DHCP, you may still be able to use l3s for your scenario.
Thanks a lot. The proxy ARP does help to solve the routing issue for inbound traffic to the containers.
However, mixing the containers' traffic (services traffic) with the host machine's own traffic (management traffic) is still not optimal. The management traffic needs to be private, while the services traffic probably needs to be quite open.
I have no idea how difficult it would be to get ipvlan L2 mode working. Any pointers on where I should look to see if it's a task I can manage?
Also, some modification to the config validation would be needed to introduce a new config key; I suggest mode, which would default to l3s if not specified but could be set to l3s, l3, or l2.
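If such a key were added, usage might look something like this (purely hypothetical, since the key does not exist yet):

```shell
# proposed: select the ipvlan operating mode per NIC device (default l3s);
# in l2 mode the IPs would be configured inside the container itself
lxc config device add c1 eth0 nic nictype=ipvlan parent=eth0 mode=l2
```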
Finally, the docs would need updating:
And an API extension added at the bottom, similar to: