Slow performance with nested bridged interface

On my HP ProLiant MicroServer Gen8 box I have Ubuntu 18.04 running as the host OS. Initially I was planning to set up one production VM (vm-prod) with multiple services inside LXD containers (vm-prod-lxd-01, vm-prod-lxd-02, etc.). Another VM (vm-dev) would have the same structure (vm-dev-lxd-01, 02, etc.) but would be used as a staging environment to test things before they were deployed into new prod containers. I also planned to spin up some more VMs to test out completely new architectures.

When I deployed the production VM everything went fine and I got acceptable performance. But when I set up the second (dev) VM, the performance hit was so severe that even the ssh sessions in the terminal across all machines were lagging. I suspect it might be related to my particular network setup. Since all of the systems (host, VM, container) are Ubuntu 18.04, I paste the netplan settings below:

HOST OS

network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: no
      dhcp6: no
  bridges:
    br0:
      interfaces: 
        - eno1
      dhcp4: no
      dhcp6: no
      addresses: [192.168.1.100/24]
      gateway4: 192.168.1.200
      nameservers:
        addresses: [192.168.1.200]

VM:

network:
  version: 2
  renderer: networkd
  ethernets:
    ens3:
      dhcp4: no
      dhcp6: no
  bridges:
    br0:
      interfaces: 
        - ens3
      dhcp4: no
      dhcp6: no
      addresses: [192.168.1.110/24]
      gateway4: 192.168.1.200
      nameservers:
        addresses: [192.168.1.200]

container(s):

network:
    ethernets:
        eth0:
            addresses:
            - 192.168.1.120/24
            dhcp4: false
            gateway4: 192.168.1.200
            nameservers:
                addresses:
                - 192.168.1.200
    version: 2

I’ve chosen to use bridged networking so that all machines have separate IPs on the same subnet.
Almost all services live inside the prod VM’s LXD containers, and with the exception of one XMPP server they are webapps.

The question is:

  1. Was the choice of nesting bridged interfaces a poor one in terms of I/O? If not, what may cause said performance issues? What tool should I use to measure it?
  2. If a bridged interface is not good performance-wise, what type of configuration should I use instead? iptables with port forwarding? LXD port forwarding?
  3. The server has two NICs - is using another bridged interface for the other VMs a solution?

In short: you configured a huge diversion loop into a Gordian knot. That’s why your gen8-box is on its knees.

The long way
There is no need for bridging in this case. Bridging works on Layer 2 (OSI: OSI model - Wikipedia), while the IP layer sits on OSI Layer 3. The main use case for bridges is to bridge from one network to the other from a physical hardware point of view (POV), i.e. if you have differing hardware topology. Let’s say you have one host with two interfaces: the first is an Ethernet interface, the second one is 10BASE-2. The point is that Layer 2 has no idea of Layer 3 and vice versa. If you want to deal with IP addresses you can use CIDR (Classless Inter-Domain Routing - Wikipedia) for subnetting (a useful tool for this is gip; just apt install -y gip).
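For example, a quick look at how the single 192.168.1.0/24 could be carved into smaller per-VM ranges (using ipcalc here instead of gip, purely as an illustration; the split itself is an assumption):

# split the /24 into four /26 blocks, e.g. one per VM or container group
sudo apt install -y ipcalc
ipcalc 192.168.1.0/26      # 192.168.1.0   - 192.168.1.63
ipcalc 192.168.1.64/26     # 192.168.1.64  - 192.168.1.127
ipcalc 192.168.1.128/26    # 192.168.1.128 - 192.168.1.191
ipcalc 192.168.1.192/26    # 192.168.1.192 - 192.168.1.255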

In your use case you have a physical eno1 Ethernet interface. From an ingress POV you only have ONE interface, and on this you have already set up a bridge. Why? Furthermore, you gave it the same name as the nested one. Why? Generally, always try to avoid naming two things the same.

Change perspective
On your gen8-box you have to set up an IP range on that interface and share it with your VMs. So your box is the router for your VMs. The VMs themselves must point to the gen8-box’s interface (gateway == default route). Within each VM you then have to set up an IP address on the VM’s ens3. This is the public one which talks to the world through your host box and, vice versa, is reachable from outside. Within each VM you can now install LXD the common way (managed interface). Let LXD create a bridge for your containers, which enables each container to talk over lxdbr0.
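Not the poster’s exact commands, just a minimal sketch of that managed-bridge route inside one VM (assuming the LXD 3.x that ships with Ubuntu 18.04; the container name is only an example):

# inside each VM: run the interactive init and accept the default offer to
# create a new local network bridge (lxdbr0) with a private, NATed subnet
sudo lxd init

# or create and attach the managed bridge explicitly
lxc network create lxdbr0
lxc network attach-profile lxdbr0 default eth0

# new containers then get their NIC on lxdbr0 via the default profile
lxc launch ubuntu:18.04 vm-prod-lxd-01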

If you want to set up an unmanaged bridge, you have to configure IT in your VM’s /etc/network/interfaces (or its netplan equivalent on 18.04) AND NOT at the host level. For that bridge you have to declare a public IP range which contains the IPs of the LXD containers. That way these get an official IP address and are reachable from outside directly. After that, create your containers, step into every one and configure each one’s network manually as well, as sketched below.
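A sketch of the LXD side of that unmanaged setup, run inside the VM where the br0 from the question already exists (the container name is just an example, and it assumes the default profile has no eth0 device yet):

# make new containers use the VM's br0 instead of a managed lxdbr0
lxc profile device add default eth0 nic nictype=bridged parent=br0 name=eth0

# or attach a single existing container to it
lxc config device add vm-prod-lxd-01 eth0 nic nictype=bridged parent=br0 name=eth0

# each container then needs its static 192.168.1.x address configured from
# inside the container, as described above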

If you search within this forum you will find exactly how to do it.

Hope it helps.

If I understand you correctly, you have
1 physical host
1 VM (let’s say KVM or whatever)
1 LXD server with N containers
1 VM (idem)
1 LXD server with N containers
and you bridged all these containers together?
If this is it, I’m afraid that you can’t do that.
If you look at the MAC addresses of the containers, for example with
lxc list -c n,volatile.eth0.hwaddr:MAC
[I have not come up with this stuff by myself, it’s straight from the LXD code; MAC address is not a standard column that you can get with lxc list -c]
You’ll see that the algorithm used by LXD to generate MAC addresses is quite simple: take the prefix assigned to it (LXD? LXC? I don’t know), that is 00:16:3e, and add a random 3-byte value that doesn’t already exist.
Since you are bridging the two LXD networks together, you are risking MAC address conflicts. It’s deadly. You can’t.
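A quick way to check for such a collision from the host, assuming remotes for both VMs have been added with lxc remote add (the remote names vm-prod and vm-dev are made up for this sketch):

# collect the container MACs from both LXD servers
lxc list vm-prod: -c volatile.eth0.hwaddr:MAC --format csv > macs.txt
lxc list vm-dev: -c volatile.eth0.hwaddr:MAC --format csv >> macs.txt

# any line printed here is a MAC address in use on both bridged networks
sort macs.txt | uniq -d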
Anyway, it’s not a good design IMO to expose test computers and networks to the general public, and certainly not on the same network as production. You risk real users (customers…) connecting to test computers if a mistake happens at some level. This is really not a good™ thing to risk.

I totally agree. But in some circumstances he could indeed do that. He might solve his problem by using the second NIC for a perimeter network [https://en.wikipedia.org/wiki/DMZ_(computing)], which would enable his test environment.
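A rough host-side sketch of that idea, assuming the second NIC is named eno2 and picking 192.168.2.0/24 as the perimeter range (both are assumptions, not taken from the thread):

# on the host: a second bridge, on the second NIC, for the dev/DMZ side
cat <<'EOF' | sudo tee /etc/netplan/61-br1.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eno2:
      dhcp4: no
  bridges:
    br1:
      interfaces: [eno2]
      addresses: [192.168.2.100/24]
EOF
sudo netplan apply

# then attach vm-dev's virtual NIC to br1 instead of br0, and keep traffic
# between 192.168.2.0/24 and the production 192.168.1.0/24 behind firewall
# rules on the host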