Sharing a VLAN with OCI and System Containers

My plan is to have an Incus setup with multiple VLANs, one per project. Within each project I expect to use both OCI containers and LXC containers/VMs, so I want a given VLAN (say, VLAN 10) to be accessible to both types. The idea was to set up an interface on the host with an IP address within VLAN 10 (say, eth0.10 with 192.168.10.5), to which OCI containers can either forward or proxy, and then connect an Incus network of the “physical” interface type to this eth0.10 host interface. I can’t use a “macvlan” interface type, as that doesn’t allow a system container/VM to communicate with the host interface.
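
Concretely, the first attempt looked roughly like this, a sketch using the example names from above (the VLAN interface could equally be defined in the host's network configuration instead of ad hoc with iproute2):

# create the VLAN 10 sub-interface on the host and give it an address
ip link add link eth0 name eth0.10 type vlan id 10
ip addr add 192.168.10.5/24 dev eth0.10
ip link set eth0.10 up

# expose it to Incus as a "physical" network
incus network create vlan10 --type=physical parent=eth0.10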

This works until I start an LXC container connected to the VLAN 10 Incus network, at which point the host drops the eth0.10 interface completely.

The only way to get it back is to stop the LXC container and then ifdown/ifup the host’s eth0.10 interface, at which point it reappears (until I start the LXC container again).
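
That is, roughly (with c1 standing in for the LXC container’s name):

incus stop c1
ifdown eth0.10 && ifup eth0.10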

Is this expected behaviour?

For closure, and for the benefit of anyone landing here: I found the way to do this in this other answer by @stgraber.

I’m a Debian user and use the old-fashioned /etc/network/interfaces for my networking, so I have:

auto lo
iface lo inet loopback

auto enp0s31f6
iface enp0s31f6 inet manual

# VLAN-aware bridge on the physical NIC, carrying VLANs 1 and 10
auto br0
iface br0 inet static
  bridge-vlan-aware yes
  bridge-ports enp0s31f6
  bridge-vids 1 10
  address 172.29.1.20/24
  gateway 172.29.1.1

# VLAN 10 sub-interface of the physical NIC
auto enp0s31f6.10
iface enp0s31f6.10 inet manual

# plain bridge carrying only VLAN 10 traffic; instances attach here
auto br10
iface br10 inet static
  bridge-ports enp0s31f6.10
  address 172.29.10.5/24

VLAN 1 is the box’s management network and VLAN 10 carries the application data.
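
If it helps anyone, the bridge layout can be sanity-checked with iproute2 after bringing the interfaces up (ifreload -a is ifupdown2; classic ifupdown would be ifup -a or a networking restart):

ifreload -a
bridge link show   # ports attached to br0 and br10
bridge vlan show   # VLAN filtering on the VLAN-aware br0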

In the container’s configuration YAML I have:

devices:
  ...
  eth0:
    nictype: bridged
    parent: br10
    type: nic
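
The same device can also be added from the CLI instead of editing the YAML (c1 being a placeholder instance name):

incus config device add c1 eth0 nic nictype=bridged parent=br10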

Oddly enough, I couldn’t get this to work by ditching the br10 bridge completely, changing parent: to br0, and adding the vlan: '10' tag (sketched below). That’s not a major issue for my setup, though, as I’ve only a few VLANs.
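
For reference, the variant that didn’t work for me was roughly:

devices:
  eth0:
    nictype: bridged
    parent: br0
    vlan: '10'
    type: nic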

What’s more, I can apply the same configuration to an OCI container’s YAML and bring it up on the VLAN without resorting to forwards or proxies.
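
For instance, assuming the OCI remote was already added (incus remote add docker https://docker.io --protocol=oci) and that eth0 comes from the default profile, something like this does the job, with web1 as a placeholder name:

incus launch docker:nginx web1
incus config device override web1 eth0 nictype=bridged parent=br10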