Greets - I’ve had a positive experience working with LXC/LXD in test cases, and now have a real-world application for it. One of my DNS setups is very lightweight, but I want to separate the instances onto individual subnets. In the bare-metal world this would require a separate server per instance (in many cases one might choose a stack of Raspberry Pis or similar), but containerizing is the path I’d prefer, since I have spare blades in my old stack that can handle it. I’d like to scale to some 30 different subnets, which in my case works out to about 8 containers, since some subnets can share the same DNS server (some are very unique in what they allow/block/resolve, some are generic, and some subnets may point straight out to public Google or similar). But this post isn’t about DNS - it’s about how to most easily get the host<->container setup going given multiple VLANs.
So let’s assume LXD 3.18 on an Ubuntu 16.04 host, with two physical interfaces: one as an 802.1Q trunk for the containers, and one as an access port on the management VLAN with a static IP in that subnet.
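For reference, the host side I have in mind looks roughly like this (interface names eth0/eth1 and the management addresses are just placeholders, not my real values):

```
# /etc/network/interfaces (host) - sketch only
auto eth0
iface eth0 inet static        # access port on the management VLAN
    address 172.16.255.10     # placeholder management IP
    netmask 255.255.255.0
    gateway 172.16.255.1

auto eth1
iface eth1 inet manual        # 802.1Q trunk for the containers, no host IP
```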
I don’t want NATted subnets between the containers and a single interface, and putting in static routes for what would effectively be /32s behind a bridge seems excessive, because only one container in the host group will ever attach to any particular VLAN in my deployment. This is where my reading of other posts and research has gotten confusing - most descriptions assume multiple containers attaching to a single interface/subnet, whereas in my case each container will uniquely bind to a single interface (or subinterface, to be exact).
Is there a configuration where I would have, for example, 4 subinterfaces defined in the host’s /etc/network/interfaces (or VLAN config if you prefer) and 4 network profiles, with each profile bound directly to one subinterface, and the container given a single static IP in that VLAN? Each profile would thus be unique to one container.
If for instance I had vlans on my real logical network as:
vlan 201: 172.16.0.0/28 container at 172.16.0.2,
vlan 202: 172.16.0.16/28 container at 172.16.0.18,
vlan 203: 172.16.0.32/28 container at 172.16.0.34,
vlan 204: 172.16.0.48/28 container at 172.16.0.50,
and I wanted the host subinterface .201 to pass traffic un-NATted (or NATted 1:1), with all ports fully routed through…what would I look at doing?
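To make the question concrete, the subinterfaces I’m picturing on the host would be something like this (assuming the trunk NIC is eth1 and the vlan package is installed; no IPs on the host side, since the container would own the address in each subnet):

```
# /etc/network/interfaces - VLAN subinterfaces on the trunk, sketch only
auto eth1.201
iface eth1.201 inet manual
    vlan-raw-device eth1

auto eth1.202
iface eth1.202 inet manual
    vlan-raw-device eth1

# ...and likewise eth1.203 and eth1.204
```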
I’ve seen posts on Open vSwitch and netplan that seemed to make the situation harder, since they assume a single interface has to carry both the containers and the host. Since I have multiple physical NICs in the blade, I can dedicate the second one to host management.
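And on the LXD side, roughly what I’m imagining per VLAN - one profile that hands one subinterface straight to one container (the profile/container names and image are made up, and I’m not sure whether nictype=physical or macvlan is the right choice here):

```
# one profile per VLAN, each bound to one host subinterface - sketch only
lxc profile create vlan201
lxc profile device add vlan201 eth0 nic nictype=physical parent=eth1.201

# launch with the default profile (for the root disk) plus the VLAN profile
lxc launch ubuntu:16.04 dns201 -p default -p vlan201

# the container would then configure 172.16.0.2/28 statically on its eth0
```

Is something along these lines sane, or is there a cleaner way to express “this one container owns this one VLAN” in LXD?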