LXD networking with netplan and vlans

networking

(John Ross) #1

My situation:
On my LXD server I have a single physical NIC connected to a trunk line (vlans 100, 110, 120, 130 – nothing untagged). I would like vlan 100 (management) to connect to the host (and ideally not be accessible from the LXD containers). I would also like to make various combinations of vlans 110, 120, and 130 available to LXD containers, where each container can reach all the other containers on the vlans they share (access across vlans would be provided by a router with firewall rules that restrict/control/secure traffic between vlans).

In the LXD server's netplan I have (LXD 3.13; kernel 5.0.0; Ubuntu 19.04):

ethernets:
  lan:
    match:
      macaddress: 00:e0:4c:68:99:fd
    set-name: lan
    dhcp4: no

vlans:
  vmngt:
    id: 100
    link: lan
    addresses: [ 192.168.100.0/24 ]

  vdata:
    id: 110
    link: lan
    addresses: [ 192.168.110.0/24 ]

bridges:
  brmngt:
    interfaces: [ vmngt ]
    addresses: [ 192.168.100.2/24 ]
    gateway4: 192.168.100.254
    nameservers:
      addresses: [ 192.168.110.20 ]
      search: [ XXXX.net ]
    parameters:
      stp: false
      forward-delay: 0

  brdata:
    interfaces: [ vdata ]
    addresses: [ 192.168.110.2/24 ]
    gateway4: 192.168.110.254
    nameservers:
      addresses: [ 192.168.110.20 ]
      search: [ XXXX.net ]
    parameters:
      stp: false
      forward-delay: 0
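Once this is applied, the tagged sub-interfaces and bridge memberships can be sanity-checked with standard iproute2 tools (a quick sketch, using the interface names above):

```shell
sudo netplan apply

# The vlan id carried by the tagged sub-interface (look for "vlan ... id 110")
ip -d link show vdata

# Confirm vdata is enslaved to brdata (look for "master brdata")
bridge link show
```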

I have a profile to connect to the “data” vlan:

config: {}
description: Bridge to data vlan
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: brdata
    type: nic
name: brdata
used_by:
- /1.0/containers/test
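For reference, a profile like this can also be built from the CLI; a hedged sketch using the names from the post:

```shell
# Create the profile and add the bridged NIC device
lxc profile create brdata
lxc profile device add brdata eth0 nic nictype=bridged parent=brdata name=eth0

# Apply it to the test container
lxc profile add test brdata
```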

Within that container I set up this netplan:

network:
  version: 2
  ethernets:
    eth0:
      dhcp4: no
      addresses: [ 192.168.110.199/24 ]
      gateway4: 192.168.110.254
      nameservers:
        addresses: [ 192.168.110.20 ]
        search: [ XXXX.net ]

From the container I can ping the LXD server, but not other containers or other hosts on the network.

How should I set this up so that it works correctly (i.e., so I can connect different containers to different combinations of bridges, giving them access to different combinations of vlans on my LAN)?

Thank you


(Idef1x) #2

I have been struggling with that as well and frankly gave up. I am now using Open vSwitch (with fake bridges) for that instead, and got rid of netplan (using ifupdown); that was pretty easy to set up.


(AS) #3

Not sure if this is helpful as I’m running Fedora, but I got this working using NetworkManager.

My host is connected to a trunk port with a native vlan set and additional vlans trunked. Here I’m adding vlan 102:

nmcli con add type vlan con-name VLAN102 dev enp3s0 id 102
nmcli connection modify VLAN102 ipv4.method disabled # no IPv4 config on the host side of this interface
nmcli con up VLAN102

This creates a vlan-tagged sub-interface on enp3s0:

ip addr sh enp3s0.102
7: enp3s0.102@enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 70:85:c2:c8:5f:c9 brd ff:ff:ff:ff:ff:ff

and it can be seen in the lxc network list as a vlan type:

lxc network list
+------------+----------+---------+-------------+---------+
|    NAME    |   TYPE   | MANAGED | DESCRIPTION | USED BY |
+------------+----------+---------+-------------+---------+
| enp3s0     | physical | NO      |             | 0       |
+------------+----------+---------+-------------+---------+
| enp3s0.102 | vlan     | NO      |             | 1       |
+------------+----------+---------+-------------+---------+
| no-dhcp    | bridge   | YES     |             | 0       |
+------------+----------+---------+-------------+---------+
| virbr0     | bridge   | NO      |             | 0       |
+------------+----------+---------+-------------+---------+

Finally, attach the network to your default profile, or whichever profile you deploy containers from:

lxc network attach-profile enp3s0.102 default eth0

Profile should look like this:

lxc profile show default
config: {}
description: ""
devices:
  eth0:
    nictype: macvlan
    parent: enp3s0.102
    type: nic
  root:
    path: /
    pool: lxd-storage-pool
    type: disk
name: default
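For a one-off container, the same NIC can be attached directly instead of via the profile (a hedged equivalent, using the names from this post):

```shell
# Attach the tagged interface to a single container rather than the whole profile
lxc config device add zoneminder eth0 nic nictype=macvlan parent=enp3s0.102
```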

Deploy a container:

lxc list
+------------+---------+----------------------+------+------------+-----------+
|    NAME    |  STATE  |         IPV4         | IPV6 |    TYPE    | SNAPSHOTS |
+------------+---------+----------------------+------+------------+-----------+
| zoneminder | RUNNING | 172.16.102.11 (eth0) |      | PERSISTENT |           |
+------------+---------+----------------------+------+------------+-----------+

From this container I can now ping the host, the gateway (it gets DHCP from the gateway), or whatever else I choose to allow or restrict.

I’ve now got a funny issue where the container retrieves a new lease each time it launches, rather than the static DHCP lease I’ve configured on the router. I ended up setting the address manually in the container, but it annoys me that its behaviour isn’t consistent.


(John Ross) #4

Thank you Idef1x. I installed Open vSwitch. I can still use netplan to set up my mngt vlan (one dedicated NIC on an access line) and use OVS with fake bridges to distribute the other vlans to my LXD containers.

(Note to others, since it’s easy to miss in the documentation, fake bridges are created with:
ovs-vsctl add-br <fake-bridge> <parent-bridge> <vlan-id>
ex: ovs-vsctl add-br br0.110 br0 110 creates a fake bridge for vlan 110 – untagged)
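Spelled out a little further (the trunk NIC name eth1 and the profile name below are assumptions, not from the post):

```shell
# Parent OVS bridge carrying the full trunk
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth1            # physical trunk NIC (name assumed)

# One fake bridge per vlan; each presents its vlan untagged to attached ports
ovs-vsctl add-br br0.110 br0 110
ovs-vsctl add-br br0.120 br0 120
ovs-vsctl add-br br0.130 br0 130

# Example: a profile whose containers land on vlan 110
lxc profile create vlan110
lxc profile device add vlan110 eth0 nic nictype=bridged parent=br0.110 name=eth0
```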

Thank you again


(Jon Clayton) #5

@JLR83

You can actually make the management network inaccessible to the containers.

By default, all interfaces can route to each other because they live in the default routing table.

You have to make use of Linux VRFs and/or multiple routing tables to get the L3 isolation you need for the management interface.
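As a minimal sketch of that idea in plain iproute2 (interface name, VRF name, and table number are examples):

```shell
# Create a VRF device bound to routing table 10 and bring it up
ip link add mgmt type vrf table 10
ip link set dev mgmt up

# Move the management interface into the VRF
ip link set dev vlan_193 master mgmt

# Management routes now live only in table 10, invisible to the default table
ip route add default via 10.10.193.1 table 10
ip route show vrf mgmt
```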

I would first install something to abstract away some of the more tedious Linux networking. I usually use FRR v7 (Free Range Routing); you can install it via apt repos now: https://deb.frrouting.org/

This lets you manage your interface IP addresses through a Cisco-like shell, “vtysh”.

You can manipulate the kernel routing tables and create routes in a “VRF”; it uses staticd and zebra, if I remember correctly.

There may be easier ways to do this in FRR now, but here is what I did a few months ago in /etc/network/interfaces to create a VRF for my management interface (the more manual way).

### management - vlan193

allow-bridge0 vlan_193
iface vlan_193 inet static
  ovs_type OVSIntPort
  ovs_bridge bridge0
  ovs_options vlan_mode=access tag=193
  address 10.10.193.5
  netmask 255.255.255.0
  dns-nameservers 8.8.8.8 1.1.1.1
  post-up ip link add mgmt type vrf table 10
  post-up ip link set dev mgmt up
  post-up ip rule add iif mgmt table 10
  post-up ip rule add oif mgmt table 10
  post-up ip link set dev vlan_193 master mgmt
  post-up ip route add default via 10.10.193.1 table 10
 
### Bonded interfaces
 
auto enp2s0f0
iface enp2s0f0 inet manual
 
auto enp2s0f1
iface enp2s0f1 inet manual

## Bond and trunk specific vlans
 
allow-bridge0 bond0
iface bond0 inet manual
  ovs_bridge bridge0
  ovs_type OVSBond
  ovs_bonds enp2s0f0 enp2s0f1
  ovs_options bond_mode=balance-tcp lacp=active other_config:lacp-time=slow other_config:lacp-fallback-ab=true tag=1 vlan_mode=native-untagged trunks=1,80,88,193

### Bind all the l3 ports together in the ovs bridge
 
auto bridge0
allow-ovs bridge0
iface bridge0 inet manual
  ovs_type OVSBridge
  ovs_ports bond0 vlan_88 vlan_193
  up /etc/network/if-up.d/vlans
 
 
allow-bridge0 vlan_88
iface vlan_88 inet static
  ovs_type OVSIntPort
  ovs_bridge bridge0
  ovs_options vlan_mode=access tag=88
  address 10.10.88.10
  netmask 255.255.255.0
  gateway 10.10.88.1
  up ip route add 10.10.0.0/16 via 10.10.88.1


### Routed lxdbridge
 
 
auto lxdbr99
allow-ovs lxdbr99
iface lxdbr99 inet static
  ovs_type OVSBridge
  address 10.10.99.1
  netmask 255.255.255.0

root@m11:/home/jon# cat /etc/network/if-up.d/vlans

ovs-vsctl add-br lxdbr80 bridge0 80
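A slightly more defensive version of that hook script might use --may-exist, so re-running it on every ifup is harmless (a sketch, keeping the names from the config above):

```shell
#!/bin/sh
# Recreate the fake bridge for vlan 80; --may-exist makes this idempotent
ovs-vsctl --may-exist add-br lxdbr80 bridge0 80
```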