Virtual/Cloud based network

I’d like to create a bridge/virtual network where specific containers on multiple hosts are all part of the same virtual subnet. It looks like Open vSwitch is a good option for this. Does anyone have any examples or tutorials they’d like to share to reduce my ramp-up time?

Thanks in advance


Sounds like OVN (https://www.ovn.org) from Open vSwitch, which LXD has recently added support for, would be appropriate here.

It allows multiple virtual networks to be set up (across multiple machines) and manages the tunneling of traffic between machines (using Geneve tunnels). It then uses OVS on each host to connect to the instances via veth pairs (like a normal native bridge does).

A simple single-node setup can be used to experiment with pretty quickly; we have a basic guide as part of the ovn network type docs here: How to set up OVN with LXD - LXD documentation

Each network is given a virtual router, and this provides NAT, DNS relay to specified DNS servers, internal DNS, DHCP v4/v6 and IPv6 router advertisements.

With LXD ovn networks we have the concept of an uplink network (specified using the network property), which is used to provide external network access for the virtual router (with or without NAT). We support using the existing lxdbr0 network as a parent network for single-node OVN networks (albeit with a small modification to assign certain IPs from lxdbr0 for OVN use).
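The “small modification” to lxdbr0 mentioned above can be sketched roughly like this, using LXD’s ovn.ranges settings to reserve addresses for OVN’s virtual routers (the address ranges shown are placeholders for whatever fits your lxdbr0 subnet):

```shell
# Reserve a slice of lxdbr0's subnet for OVN virtual router use
# (example ranges; adjust to match your lxdbr0 addressing).
lxc network set lxdbr0 ipv4.ovn.ranges=10.158.174.100-10.158.174.110
lxc network set lxdbr0 ipv6.ovn.ranges=fd42:1234:5678::100-fd42:1234:5678::110
```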

For larger deployments we also support assigning a physical NIC for OVN parent network usage using the physical network type: Physical network - LXD documentation
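A hypothetical physical uplink might look like the following (the interface name, addresses, and network names are examples, not values from this thread):

```shell
# Create a physical uplink network from a spare NIC (enp5s0 is an example),
# telling LXD the gateway, the IPs OVN routers may use, and a DNS server.
lxc network create UPLINK --type=physical parent=enp5s0 \
    ipv4.gateway=192.0.2.1/24 \
    ipv4.ovn.ranges=192.0.2.10-192.0.2.19 \
    dns.nameservers=192.0.2.53

# Then create an OVN network that uses it as its uplink.
lxc network create ovn0 --type=ovn network=UPLINK
```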

OVN uses two specialised database services that all the compute nodes connect to: the “northbound” database (which LXD uses to define the logical network config) and the “southbound” database (which OVN populates from the northbound database to provide the Open vSwitch flow config for each compute node).

These databases can be run on a separate node/instance than the compute nodes, and can be independently clustered for HA.

Assuming a single database node with multiple compute nodes, a basic setup would look like this:

On the database node:

# Install central database services.
sudo apt install ovn-central -y

# Expose Northbound and Southbound databases onto network (note: no security).
sudo ovn-nbctl set-connection ptcp:6643:0.0.0.0
sudo ovn-sbctl set-connection ptcp:6642:0.0.0.0
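As an optional sanity check, you can confirm both databases are listening on the ports set above (this uses the standard ss tool, not anything LXD-specific):

```shell
# Should show LISTEN entries for the northbound (6643) and southbound (6642)
# database ports exposed by the set-connection commands above.
ss -ltn | grep -E ':(6643|6642)'
```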

On each compute node:

# Install OVN compute node services.
sudo apt install ovn-host -y

# Connect openvswitch to OVN southbound database.
sudo ovs-vsctl set open_vswitch . \
    external_ids:ovn-remote="tcp:<database node IP>:6642" \
    external_ids:ovn-encap-type=geneve \
    external_ids:ovn-encap-ip=<compute node local IP used for tunneling>

# Inform LXD where the northbound database is.
lxc config set network.ovn.northbound_connection=tcp:<database node IP>:6643

# Check the `br-int` OVN integration bridge exists.
sudo ovs-vsctl show

Then you can configure LXD OVN networks.
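For example, a minimal sketch of that last step might look like this (network and instance names are placeholders, and it assumes an uplink such as lxdbr0 has already been prepared for OVN use):

```shell
# Create an OVN network using lxdbr0 as its uplink, then attach an instance.
lxc network create ovn0 --type=ovn network=lxdbr0
lxc launch ubuntu:22.04 c1 --network ovn0

# The instance should get an address from ovn0's subnet.
lxc list c1
```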


Tom, thanks so much for taking the time to provide these details related to LXD and OVN.

If configuring for IPv6, are there any differences to know about?

For instance, your example above uses IPv4:

# Expose Northbound and Southbound databases onto network (note: no security).
ovn-nbctl set-connection ptcp:6643:0.0.0.0

thanks
brian mullan

I believe you can also specify IPv6 addresses (you may need to wrap them in square brackets, check the ovn-nbctl manpage).
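Following that suggestion, the IPv6 equivalent would presumably look something like this (the bracketed-address form is an assumption to verify against the ovn-nbctl manpage):

```shell
# Listen on all IPv6 addresses instead of 0.0.0.0 (note the square brackets).
sudo ovn-nbctl set-connection ptcp:6643:[::]
sudo ovn-sbctl set-connection ptcp:6642:[::]
```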

As for the actual virtual networks, OVN will happily encapsulate IPv6 inside an IPv4 Geneve tunnel. LXD will detect the MTU of the interface associated with the ovn-encap-ip setting and set the MTU of the virtual network appropriately to accommodate the overhead of the Geneve tunnels (the overhead differs depending on whether you use an IPv4 or IPv6 address for the ovn-encap-ip tunnel address).
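The MTU arithmetic can be sketched as follows. The per-layer byte counts are an assumed breakdown (inner Ethernet, Geneve header plus OVN metadata option, UDP, outer IP), so check LXD’s docs or source for the authoritative values:

```shell
# Geneve encapsulation overhead, layer by layer (assumed breakdown):
# inner Ethernet (14) + Geneve header and OVN option (16) + UDP (8)
# + outer IP header (20 for IPv4, 40 for IPv6).
parent_mtu=1500
overhead_v4=$((14 + 16 + 8 + 20))   # 58 bytes with an IPv4 tunnel address
overhead_v6=$((14 + 16 + 8 + 40))   # 78 bytes with an IPv6 tunnel address

echo "OVN network MTU (IPv4 encap): $((parent_mtu - overhead_v4))"   # 1442
echo "OVN network MTU (IPv6 encap): $((parent_mtu - overhead_v6))"   # 1422
```

This is why an OVN network on a 1500-MTU parent ends up with a slightly smaller MTU, and why the IPv6-encap case loses 20 more bytes than the IPv4 one.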

Hey there.

Why should the physical network type be used as an OVN network parent in larger deployments? I don’t know much about the advantages of physical networks over bridged networks. Can you explain? Thank you. :grinning:

Because LXD OVN networks in an LXD cluster expect that the uplink network (provided by either a bridge or a physical interface) is connected to the same L2 segment on each LXD server.

At the end of the day, OVN always uses an OVS bridge to connect to the uplink network.
If you use a physical interface as your uplink network then that is connected to a dynamically created OVS bridge anyway.

But what I was referring to in that statement is that “larger” deployments imply an LXD cluster, and in those cases using private managed bridges as the uplink will not work properly, as it would mean that OVN’s uplink network was not connected to the same L2 segment on each LXD server in the cluster.


You’ve made that clear. :+1: :+1: :+1:
