802.1Q trunk on host, bind single subinterface address in container?

Greets - I’ve been having a positive time working with LXC/LXD in test cases, and now have a real-world application for it. One of my DNS setups is very lightweight, but I want to individualize the instances to separate subnets. In the bare-metal world this would require a separate server for each instance (in many cases one might choose a stack of Raspberry Pis or similar), but containerizing is the path I’d prefer, since I have available blades in my old stack that can handle this. I’d like to scale to some 30 different subnets, which in my case works out to about 8 containers, since some subnets can share the same DNS server (some are very unique, due to what they allow/block/resolve; some are generic; some subnets may point straight out to public Google or similar). But this post isn’t about DNS - it’s about how to most easily get the host<->container setup going given multiple VLANs.

So let’s assume LXD 3.18 on Ubuntu 16.04 as the host, with 2 physical interfaces: one as an 802.1Q trunk for the containers, and one as an access port on the management VLAN with a static IP in that subnet.
I don’t want NATted subnets between the containers and a single interface, and putting in static routing for what would effectively be /32 routes across a bridge in the middle seems excessive, because only one container in the host group will attach to any particular VLAN in my deployment. This is where my reading of other posts and research has confused me - most descriptions assume multiple containers attaching to a single interface/subnet, whereas in my case each container will uniquely bind to a single interface (or subinterface, to be exact).

Is there a configuration where I would have, for example, 4 subinterfaces defined in the host’s /etc/network/interfaces (or vlan config if you prefer) and 4 network profiles, each profile bound directly to one subinterface, with the container given a single static IP in that VLAN? Each profile would thus be unique to a container.

If for instance I had vlans on my real logical network as:
vlan 201: container at,
vlan 202: container at,
vlan 203: container at,
vlan 204: container at,

and I wanted the host subinterface .201 to pass traffic to its container effectively un-NATted (or 1:1 NAT), with all ports fully routed through…what would I look at doing?
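To make it concrete, here’s a minimal sketch of the kind of setup I’m imagining - the interface name (enp3s0f1 as the trunk NIC), the profile and container names are my own placeholders, and I’m assuming LXD’s `physical` NIC type, which hands a host interface to exactly one container:

```text
# Host /etc/network/interfaces - one subinterface per VLAN,
# no address on the host side (the container will own it)
auto enp3s0f1.201
iface enp3s0f1.201 inet manual
    vlan-raw-device enp3s0f1

# One LXD profile per VLAN, passing the subinterface straight in;
# with nictype=physical the interface moves into the container's
# namespace while it runs, so only one container can use it
lxc profile create vlan201
lxc profile device add vlan201 eth0 nic nictype=physical parent=enp3s0f1.201
lxc launch ubuntu:16.04 dns201 -p default -p vlan201
# then configure the container's static IP inside dns201 as usual
```

Is that roughly the right shape, or is there a cleaner way?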

I’ve seen posts on Open vSwitch and netplan that seemed to make the situation more difficult, since they assume a single interface has to carry both the containers and the host. Since I have multiple physical NICs in the blade, I can dedicate the second one to host management.

Kindest regards,

Why not just route via the host to the subnets? The host’s outside address would be the next hop, and the containers would live on bridges (and small subnets) behind that IP.
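Rough sketch of what I mean, with illustrative addresses (192.0.2.10 as the host’s outside IP, 10.0.201.0/29 as one container subnet):

```text
# On the upstream router: point the container subnet at the host
ip route add 10.0.201.0/29 via 192.0.2.10

# On the host: enable forwarding and create a small non-NAT bridge,
# then attach the container to it
sysctl -w net.ipv4.ip_forward=1
lxc network create br201 ipv4.address=10.0.201.1/29 ipv4.nat=false ipv6.address=none
lxc network attach br201 mycontainer eth0
```

One static route per subnet on the upstream router and you’re done - no trunking or tagging on the host at all.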

The way you are doing it with VLANs is possible, but it means using Open vSwitch: setting up a trunk, tagging VLANs, creating multiple bridges on the host, and plumbing them into the trunk. Some of my past posts have the config for this - if you search, you could probably adapt what I’ve set up to do what you want. It always seems more painful than it’s worth doing it that way, though.

There may be some newer ways to do this in the future - I think there’s talk of host-IP-routed containers coming soon; not sure if that could be used here?
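If I’ve understood what’s coming, usage would look roughly like this (container name, parent and address are illustrative) - a single /32 handed to the container over a veth pair, with the host-side route added for you:

```text
lxc config device add dns201 eth0 nic nictype=routed \
    parent=enp3s0f1.201 ipv4.address=10.0.201.2
```

That would fit the “one container uniquely per VLAN” case without any bridge at all, but treat the syntax as speculative until it actually lands.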


Another way that springs to mind: you could route down subinterfaces, with multiple point-to-point links terminating at the external interface. These are easy to set up with FRR, using interfaces ens.X and ens.Y where X and Y are the VLAN numbers (e.g. ens.100 for VLAN 100). That also assumes routing, though, not a VLAN bridged into the container bridge.
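Something like this, say - addresses are illustrative, a /30 point-to-point on each subinterface with a static route to the container subnet behind it:

```text
# /etc/frr/frr.conf fragment - one stanza per VLAN subinterface
interface ens3.201
 ip address 10.255.201.1/30
!
# route the container subnet via the far end of the point-to-point
ip route 10.0.201.0/24 10.255.201.2
```

Repeat per VLAN; FRR then handles redistribution if you run a routing protocol upstream.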