Bridging two VLANs in a container breaks networking

Hey everyone,

I had something peculiar happen today when I tried to bridge two VLANs together, first in an LXC container and later in a VM. As I understand it, L2 bridging between VLANs should be perfectly fine as long as there isn’t a router or a competing broadcast service like DHCP on both sides to cause issues. I did it with my VLAN 40, which is my standard devices network, and VLAN 310, which only exists on the switches and had nothing else on it. When I bridged these two, it looked as though I had caused a broadcast storm, but that shouldn’t have been the case. (It may have been caused by the fact that my router is virtualized on the same host I did this on, but it still shouldn’t have had this effect.)

My goal was to create a “vwire”-style transparent L2 firewall so I could monitor and block traffic at layer 2, replicating the functionality of Palo Alto’s vwire. The issue is that I treat my switching as a fabric, and as far as I knew this wouldn’t break anything.
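
For reference, the bridge inside the VyOS guest was configured roughly like this (a minimal sketch; eth40 and eth310 are the two VLAN-facing NICs defined in the instance config further down):

set interfaces bridge br0 member interface eth40
set interfaces bridge br0 member interface eth310
commit

With no address on br0, the guest just forwards frames between the two VLANs, which is the transparent behaviour I was after.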

My networking setup on the host is done via VLAN filtering in systemd-networkd:

10-enx08bfb85648e8.network:

[Match]
Name=enx08bfb85648e8
Type=ether

[Link]
MTUBytes=1500

[Network]
Bridge=br10

[BridgeVLAN]
VLAN=2-2000

20-br10.netdev:

[NetDev]
Name=br10
Kind=bridge

[Bridge]
DefaultPVID=1
VLANFiltering=true
STP=false

20-br10.network:

[Match]
Name=br10

[Link]
MACAddress=d8:9e:f3:79:74:5e
MTUBytes=1500

[Network]
VLAN=direct
Address=10.0.0.10/24
Gateway=10.0.0.1
DNS=10.0.10.10
DNS=10.0.10.11
Domains=greene.l iot.l dns.l

[BridgeVLAN]
VLAN=2-2000
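
Once networkd applies all of this, the VLAN filtering can be sanity-checked from the host with iproute2’s bridge tool:

bridge vlan show dev enx08bfb85648e8

This should list VLANs 2-2000 on the uplink if the [BridgeVLAN] sections took effect.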

VyWireTest (the instance where I bridged the networks):

architecture: x86_64
config:
  image.description: VyOS 1.5-rolling-202406161315
  image.os: VyOS
  image.release: 1.5-rolling-202406161315
  limits.cpu: "3"
  limits.memory: 2GiB
  security.secureboot: "false"
  volatile.base_image: 906205c4c5dee3286398daa92f9db1bdc596f1b918e9d13126d01928af2a04df
  volatile.cloud-init.instance-id: 9a252a7e-bd08-417f-891b-80f0fc301de2
  volatile.eth1.host_name: tap29cad870
  volatile.eth1.hwaddr: 00:16:3e:84:ee:b5
  volatile.eth40.host_name: tapd6bd0df3
  volatile.eth40.hwaddr: 00:16:3e:b7:30:53
  volatile.eth310.host_name: tapdc2525ad
  volatile.eth310.hwaddr: 00:16:3e:98:96:61
  volatile.last_state.power: RUNNING
  volatile.uuid: 9c5185f3-e364-4020-8094-9d2fef51cbfb
  volatile.uuid.generation: 9c5185f3-e364-4020-8094-9d2fef51cbfb
  volatile.vsock_id: "3679088025"
devices:
  config:
    source: cloud-init:config
    type: disk
  eth1:
    mtu: "1500"
    nictype: bridged
    parent: br10
    type: nic
  eth40:
    mtu: "1500"
    nictype: bridged
    parent: br10
    type: nic
    vlan: "40"
  eth310:
    mtu: "1500"
    nictype: bridged
    parent: br10
    type: nic
    vlan: "310"
  root:
    path: /
    pool: default
    size: 2GiB
    type: disk
ephemeral: false
profiles:
- defaultVyOS
- vlan1
- vlan40
- vlan310
stateful: false
description: ""

I figured out my issue. Spanning tree was the culprit in the end: presumably the switch’s BPDUs were leaking between the two VLANs through my bridge, so it shut down the VLANs on the port the VM host was connected to. Fixed by setting spanning-tree bpdufilter enable on the Cisco switch port.
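
For anyone else who runs into this, the switch-side interface config ends up looking something like the following (the port number here is just an example):

interface GigabitEthernet1/0/1
 spanning-tree bpdufilter enable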

What’s this? I can’t find any reference to it in the man pages or on the internet with a quick search.