Changing ipvlan.mode to l2

Here is the NIC configuration (LXD 3.18):

devices:
  eth1:
    nictype: ipvlan
    parent: eth1
    type: nic
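
For reference, a device like this can be added with a command along these lines (the instance name ubu1 matches the log path below):

# lxc config device add ubu1 eth1 nic nictype=ipvlan parent=eth1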

ipvlan.mode is l3s by default:

# cat /var/snap/lxd/common/lxd/logs/ubu1/lxc.conf | grep ipvlan
lxc.net.1.type = ipvlan
lxc.net.1.ipvlan.mode = l3s
lxc.net.1.ipvlan.isolation = bridge
 
# printf "lxc.net.1.ipvlan.mode = l2" | lxc config set ubu1 raw.lxc -
Error: Invalid config: Only interface-specific ipv4/ipv6 lxc.net. keys are allowed

Is there a way to change it to l2 mode?

Hi, this is not currently possible, although it would be possible to add support for it.

Please can I ask what scenario you are operating in that requires l2 ipvlan mode rather than l3s?

Thanks
Tom

The distributed virtual switch of the VM platform only allows one MAC address in/out of each virtual port, and I want to run multiple containers on the host within the same subnet.

I can work around the limitation by setting up a Linux bridge and some ebtables NAT rules to connect the containers within the host. Then I learned about ipvlan, and I think it is a better solution for my use case.

I’ve tried creating an ipvlan l2 interface on the host first and passing it to a container as a physical interface, and that worked (see the sketch below). But managing such ipvlan l2 interfaces on the host is still a bit troublesome; if LXD supported ipvlan l2 mode, it would be very neat.
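
For context, the host-side workaround is roughly the following (ipvl0 and c1 are example names):

# ip link add link eth1 name ipvl0 type ipvlan mode l2
# lxc config device add c1 eth0 nic nictype=physical parent=ipvl0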

Thanks

OK thanks, and what specifically is the need for l2 mode rather than l3s?

The containers are running on an LXD cluster, and it’s not easy to route specific addresses to the corresponding cluster members. Moreover, there are slight differences between the containers’ routing tables, so handling routing on the host machine does not scale well if l3 mode is used.

Having ipvlan l2 mode working keeps things clean; everything looks almost the same as having a Linux bridge or OVS on the host machine.

Thanks

Thanks, that makes sense. It would allow you to define the IPs inside the container without LXD knowing about them, and if you used a DHCP client that supports client IDs, DHCP could potentially work too.

You mention that setting up routes is a challenge. However, in LXD’s ipvlan l3s mode we enable proxy ARP/NDP on the specified parent interface (we add a static route to the host’s loopback interface to activate proxy ARP/NDP for the container’s IPs). This means the wider LAN does not need to route traffic at L3 to the host; instead, the host responds to ARP/NDP queries for the container’s IPs in order to direct traffic to itself. So you would not necessarily need to configure static routes.
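
To illustrate, the host-side effect described above is roughly equivalent to something like the following (eth1 as the parent, 192.0.2.10 as a placeholder container IP; the IPv6 side uses proxy_ndp analogously):

# sysctl net.ipv4.conf.eth1.proxy_arp=1
# ip -4 route add 192.0.2.10/32 dev lo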

One thing that l3s mode won’t support is DHCP, even when using a client ID, so as long as you don’t need DHCP, you may still be able to use l3s for your scenario.

Thanks a lot. Proxy ARP does help to solve the routing issue for inbound traffic to the containers.

However, mixing the containers’ traffic (services traffic) with the host machine’s own traffic (management traffic) is still not optimal. The management traffic needs to be private, while the services traffic probably needs to be quite open.

I have no idea how difficult it would be to get ipvlan l2 mode working. Any pointers on where I should look to see whether it’s a task I can manage?

Thanks again

It would not be particularly difficult to add l2 support to LXD as the underlying liblxc supports it.

This is the part where the liblxc config is generated:

This is the l3s part here: https://github.com/lxc/lxd/blob/master/lxd/device/nic_ipvlan.go#L154

If you were using l2 instead, then you would not want this bit: https://github.com/lxc/lxd/blob/master/lxd/device/nic_ipvlan.go#L156

Also, we would not need to use proxy ARP/NDP, so this function would not need to be called: https://github.com/lxc/lxd/blob/master/lxd/device/nic_ipvlan.go#L189-L215
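
Putting that together, the generated liblxc config for an l2 device would presumably look something like this (compare with the l3s output earlier in the thread):

lxc.net.1.type = ipvlan
lxc.net.1.ipvlan.mode = l2
lxc.net.1.ipvlan.isolation = bridge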

Also, some modification to the config validation would be needed to introduce a new config key; I suggest mode, which would default to l3s if not specified but could be set to l3s, l3 or l2.
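
Hypothetically, the device config would then look something like this (mode being the proposed new key):

devices:
  eth1:
    nictype: ipvlan
    mode: l2
    parent: eth1
    type: nic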

Finally, the docs would need updating:

And an API extension added at the bottom, similar to:


That’s cool, great, thanks!