OVN high availability cluster tutorial

This tutorial describes how to set up a 3-node OVN and LXD high availability cluster.

I’m doing this with 3 LXD VMs connected to a private bridge with the subnet 10.98.30.0/24, so let’s create them first:

lxc init images:ubuntu/focal v1 --vm
lxc init images:ubuntu/focal v2 --vm
lxc init images:ubuntu/focal v3 --vm

We want to ensure they have statically assigned IPs:

lxc config device override v1 eth0 ipv4.address=10.98.30.2
lxc config device override v2 eth0 ipv4.address=10.98.30.3
lxc config device override v3 eth0 ipv4.address=10.98.30.4
lxc start v1 v2 v3

Install ovn-central in the first VM:

lxc shell v1
sudo apt install ovn-central -y

Update /etc/default/ovn-central with:

OVN_CTL_OPTS=" \
  --db-nb-addr=10.98.30.2 \
  --db-sb-addr=10.98.30.2 \
  --db-nb-cluster-local-addr=10.98.30.2 \
  --db-sb-cluster-local-addr=10.98.30.2 \
  --db-nb-create-insecure-remote=yes \
  --db-sb-create-insecure-remote=yes \
  --ovn-northd-nb-db=tcp:10.98.30.2:6641,tcp:10.98.30.3:6641,tcp:10.98.30.4:6641 \
  --ovn-northd-sb-db=tcp:10.98.30.2:6642,tcp:10.98.30.3:6642,tcp:10.98.30.4:6642"

Clear the existing config, restart ovn-central (which exposes the DBs to the network as configured above) and check that the northbound DB responds:

rm -rvf /var/lib/ovn
systemctl restart ovn-central
ovn-nbctl show
exit

Now install ovn-central on the 2nd and 3rd VMs:

lxc shell v{n}
sudo apt install ovn-central -y

On each VM, update /etc/default/ovn-central with the .n parts changed to that VM’s IP, and add the db-nb-cluster-remote-addr and db-sb-cluster-remote-addr settings so that OVN can connect to the first VM:

OVN_CTL_OPTS=" \
  --db-nb-addr=10.98.30.n \
  --db-sb-addr=10.98.30.n \
  --db-nb-cluster-local-addr=10.98.30.n \
  --db-sb-cluster-local-addr=10.98.30.n \
  --db-nb-create-insecure-remote=yes \
  --db-sb-create-insecure-remote=yes \
  --ovn-northd-nb-db=tcp:10.98.30.2:6641,tcp:10.98.30.3:6641,tcp:10.98.30.4:6641 \
  --ovn-northd-sb-db=tcp:10.98.30.2:6642,tcp:10.98.30.3:6642,tcp:10.98.30.4:6642 \
  --db-nb-cluster-remote-addr=10.98.30.2 \
  --db-sb-cluster-remote-addr=10.98.30.2"

Clear the existing config and restart ovn-central:

rm -rvf /var/lib/ovn
systemctl restart ovn-central

Look in /var/log/ovn/ovn-northd.log for a line like "This ovn-northd instance is now on standby."
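You can also query the Raft cluster status directly on any of the VMs. The control socket paths below are the ones used by the Ubuntu ovn-central packages, so adjust them if yours differ; once all three members have joined you should see three servers listed:

sudo ovn-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
sudo ovn-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound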

On each VM, check that the databases are working using the following (you should get empty output but no errors):

OVN_NB_DB=tcp:10.98.30.2:6641,tcp:10.98.30.3:6641,tcp:10.98.30.4:6641 ovn-nbctl show
OVN_SB_DB=tcp:10.98.30.2:6642,tcp:10.98.30.3:6642,tcp:10.98.30.4:6642 ovn-sbctl show
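If you don’t want to keep prefixing commands with these variables, you can optionally export them in your shell on each VM (same values as above):

export OVN_NB_DB=tcp:10.98.30.2:6641,tcp:10.98.30.3:6641,tcp:10.98.30.4:6641
export OVN_SB_DB=tcp:10.98.30.2:6642,tcp:10.98.30.3:6642,tcp:10.98.30.4:6642
ovn-nbctl show
ovn-sbctl show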

Now install ovn-host on each VM and configure OVS to connect to OVN:

sudo apt install ovn-host -y
sudo ovs-vsctl set open_vswitch . \
    external_ids:ovn-encap-type=geneve \
    external_ids:ovn-remote="unix:/var/run/ovn/ovnsb_db.sock" \
    external_ids:ovn-encap-ip=$(ip r get 10.98.30.1 | grep -v cache | awk '{print $5}')
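To double-check what was set, you can read the values back (exact output will vary per VM):

sudo ovs-vsctl get open_vswitch . external_ids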

Check that the br-int interface exists in the output of ip l on each VM.
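For example, something like the following should show the br-int bridge and (once all three VMs are configured) the auto-created geneve tunnel ports to the other chassis; the exact port names will differ:

ip l show br-int
sudo ovs-vsctl show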

Now install LXD on each VM and set up a LXD cluster as normal:

sudo apt install snapd -y
sudo snap install lxd
sudo lxd init

Once the LXD cluster is set up, tell LXD how to connect to OVN:

lxc config set network.ovn.northbound_connection=tcp:10.98.30.2:6641,tcp:10.98.30.3:6641,tcp:10.98.30.4:6641
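You can confirm the setting took effect with:

lxc config get network.ovn.northbound_connection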

Now let’s create a bridged network for use as an OVN uplink network (you can also use a dedicated spare physical interface for uplinking onto an external network; see the LXD networks documentation linked at the end of this post):

lxc network create lxdbr0 --target=v1
lxc network create lxdbr0 --target=v2
lxc network create lxdbr0 --target=v3
lxc network create lxdbr0 \
    ipv4.address=10.179.176.1/24 \
    ipv4.nat=true \
    ipv4.dhcp.ranges=10.179.176.5-10.179.176.10 \
    ipv4.ovn.ranges=10.179.176.11-10.179.176.20

Setting ipv4.dhcp.ranges is required when ipv4.ovn.ranges is specified; the ipv4.ovn.ranges addresses are used for each OVN network’s router IP on the uplink network.
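Optionally, confirm the uplink bridge was created on all three members before moving on (it should no longer show as pending):

lxc network list
lxc network show lxdbr0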

Now let’s create an OVN network using the lxdbr0 bridge as an uplink:

lxc network create ovn0 --type=ovn network=lxdbr0
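Optionally, you can inspect the new network and see which uplink IP was allocated to its virtual router (the volatile.network.ipv4.address key, which also comes up later in this thread):

lxc network show ovn0
lxc network get ovn0 volatile.network.ipv4.address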

Finally, let’s check that we can create an instance that connects to our OVN network:

lxc shell v3
lxc init images:ubuntu/focal c1
lxc config device add c1 eth0 nic network=ovn0
lxc start c1
lxc ls
+------+---------+-------------------+-----------------------------------------------+-----------+-----------+----------+
| NAME |  STATE  |       IPV4        |                     IPV6                      |   TYPE    | SNAPSHOTS | LOCATION |
+------+---------+-------------------+-----------------------------------------------+-----------+-----------+----------+
| c1   | RUNNING | 10.34.84.2 (eth0) | fd42:a21f:aa90:7cd7:216:3eff:fe7d:3b8a (eth0) | CONTAINER | 0         | v1       |
+------+---------+-------------------+-----------------------------------------------+-----------+-----------+----------+
lxc exec c1 -- ping 8.8.8.8
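As an extra (optional) check of cross-host traffic, you could create a second instance targeted at a different cluster member and ping between them over the OVN network; c2 and the target member here are just examples, and the address to ping is whatever lxc ls reports for c1:

lxc init images:ubuntu/focal c2 --target=v2
lxc config device add c2 eth0 nic network=ovn0
lxc start c2
lxc exec c2 -- ping 10.34.84.2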

For more info about OVN network settings see:

https://linuxcontainers.org/lxd/docs/master/networks#network-ovn


What is the relationship between this clustering style and fan networking? If I recall correctly, there is a prompt in lxd init regarding setting up a fan network overlay.

Fan is the only clustering method I have used, but I’m very interested in this one.

The fan networking is certainly the easiest way to get networking working across a cluster. The main differences between fan and OVN are:

  • fan subnets are host-specific; moving an instance between servers in a cluster will lead to a different IP address. With OVN you get virtual L2 networks across your cluster, so you can move things around without changing addresses.
  • fan networks are system-wide and must not overlap, so you can’t delegate their creation to untrusted/restricted users of your cluster, and running multiple fan networks on the same cluster requires you to manage non-conflicting underlay subnets. With OVN, your networks never show up on the host system, so you can reuse the same subnet many times without ever getting a conflict. The underlay is a set of auto-generated geneve tunnels between your servers, so there is no need to think about underlay subnets. This means untrusted/restricted users can create their own networks in their own project without being able to impact anyone else.

Another difference is that OVN allows for distributed firewalling (through flow rules), which integrates with LXD’s new network ACL feature. This allows very fine-grained firewalling, even within a network, including label-based source/destination rules so you don’t need to hardcode addresses everywhere. Traditional Linux networking (including the fan overlay) is quite a bit more limited in that regard and can get very confusing when dealing with cross-host traffic.
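As a rough sketch of the ACL workflow (the ACL name here is made up, and ovn0 is the network from the tutorial above; see the network ACL documentation for the full rule syntax):

lxc network acl create web
lxc network acl rule add web ingress action=allow protocol=tcp destination_port=80,443
lxc network set ovn0 security.acls=web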

The obvious downside is that OVN requires you to have ovn and openvswitch installed and configured on your cluster nodes, similar in a way to what we have on the storage side with ceph. But once it’s in place, it’s very flexible and we have a number of extra features coming soon which will make it an even better option for many users.


Nice post!
How do I access instances from an external network?
Example: if I have an instance running an httpd service, how do I expose ports 80/443 externally?

If you are using internal network subnets (i.e. not routable from the uplink network), then you can use network forwards:

https://linuxcontainers.org/lxd/docs/master/network-forwards/

Take a look at https://github.com/lxc/lxc-ci/blob/master/bin/test-lxd-network-ovn#L568-L755 for usage examples.
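As a rough sketch (the listen address 10.179.176.21 is just an example of an unused address on the lxdbr0 uplink subnet from the tutorial, and 10.34.84.2 is the instance’s IP on ovn0; adjust both to your own setup):

lxc network forward create ovn0 10.179.176.21
lxc network forward port add ovn0 10.179.176.21 tcp 80,443 10.34.84.2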


Thanks, Thomas. :+1:

Hi @tomp ,

thanks for the quick tutorial. I just got it running on my three nodes. OVN is clustered, as well as the gateway, and it seems to work as intended.

However, I have a gap in understanding the connection to the outside world. How does LXD or OVN know how to route traffic to the outside world from the lxdbr0 interface over my physical network interface eth0?

Thanks in advance!

The LXD host will use the default route from its routing table, and by default will SNAT outbound traffic to the LXD host’s IP on the external network, so outbound traffic will appear to come from the LXD host’s IP.
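If you want to see that on the host, you can check the host’s default route and the NAT setting on the uplink network (names as per the tutorial above):

ip route show default
lxc network get lxdbr0 ipv4.nat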

Alright, thank you!
I still have problems getting network forwards to work, but maybe it’s better to create a separate topic for that.

Hi @tomp
Does this OVN high availability cluster have to be created in VMs?
Can I create an OVN high availability cluster on host servers?

It can be created in either.

Hi @tomp
I followed the tutorial and managed to get it up and running on my machine.
However, trying to replicate the same setup on EC2 instances instead of LXD VMs, it doesn’t work. I got to the point where I have the networks configured and instances running on separate LXD cluster nodes, but they can’t ping each other.
I’m also unable to reach the internet from the instances.
Any help with debugging it?

Please can you create a new post showing the configuration steps you’ve used and I’ll try to help. Thanks

Ha, I repeated the process so I could record all the steps, and this time it worked :slight_smile:

Still, I have a few questions regarding the IPv4 ranges.
What is the purpose of ipv4.dhcp.ranges?
Can I just use the following config and be able to have over 6k separate subnets for my instances?

lxc network create lxdbr3 \
  ipv4.address=10.0.0.1/16 \
  ipv4.nat=true \
  ipv4.dhcp.ranges=10.0.0.2-10.0.0.2 \
  ipv4.ovn.ranges=10.0.0.3-10.0.255.255

The ipv4.ovn.ranges setting allocates which IPs will be used by each OVN network’s virtual router external port on the uplink network. Yes, it effectively limits the number of OVN networks you can create connected to that uplink network.

Note, however, that it has no relation to the subnet used inside the OVN networks (you may have multiple OVN networks that use the same internal subnet and still be isolated from each other). External traffic coming from an OVN network onto the uplink network is SNATed to the IP allocated to that OVN network.

You can see which uplink IP has been allocated to an OVN network using:

lxc network get <ovn network> volatile.network.ipv4.address

The purpose of ipv4.dhcp.ranges is to restrict which IPs are given out to instances connected directly to the uplink network (not OVN), to ensure they do not overlap with the IPs being used by OVN networks’ external router ports.

That sounds great! It doesn’t seem to work though :slight_smile: at least not automatically in this case?

ubuntu@ip-172-31-3-38:~$ for n in $(lxc cluster list --format=csv | cut -f1 -d,); do \
  lxc network create lxdbr0 --target=$n; \
done
Network lxdbr0 pending on member ip-172-31-3-38
Network lxdbr0 pending on member ip-172-31-4-95
Network lxdbr0 pending on member ip-172-31-7-158
ubuntu@ip-172-31-3-38:~$ lxc network create lxdbr0 \
  ipv4.address=10.0.0.1/8 \
  ipv4.nat=true \
  ipv4.dhcp.ranges=10.0.0.2-10.0.0.2 \
  ipv4.ovn.ranges=10.0.0.3-10.255.255.254 
Network lxdbr0 created
ubuntu@ip-172-31-3-38:~$ lxc network create ovn0 --type=ovn network=lxdbr0
Error: Failed generating auto config: Failed to automatically find an unused IPv4 subnet, manual configuration required

That error isn’t from LXD trying to pick an external IP for the router on the uplink network; it’s from selecting a subnet for the internal OVN network.

Although the subnet is isolated by default, LXD still tries to pick an apparently unused subnet (in case you ever want to disable SNAT on that network and route into it from the uplink network).

The function that does this, randomSubnetV4(), tries to pick a /24 in the 10.0.0.0/8 block, so as you’ve allocated that entire block to the uplink network, you cannot use the auto subnet generator here.
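As a workaround you can specify the internal subnet manually when creating the OVN network; the subnet below is just an example, pick whatever suits your environment:

lxc network create ovn0 --type=ovn network=lxdbr0 \
  ipv4.address=192.168.83.1/24 \
  ipv4.nat=true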

@stgraber is there a historical reason why randomSubnetV4() only picks from the 10.0.0.0/8 subnet?

Is there a particular reason you’ve made the uplink network so large?