OVN HA Cluster - Uplink-allocated NAT address does not respond on all OVN-created networks

Hi, I am new to LXD and OVN. I have deployed a 4-node cluster with Ceph and OVN to use as a lab for learning and testing. I deployed OVN as per the instructions at https://linuxcontainers.org/lxd/docs/master/howto/network_ovn_setup/

I have configured the first node's settings in `/etc/default/ovn-central` as follows:

OVN_CTL_OPTS=" --db-nb-addr= --db-nb-create-insecure-remote=yes --db-sb-addr= --db-sb-create-insecure-remote=yes --db-nb-cluster-local-addr= --db-sb-cluster-local-addr= --ovn-northd-nb-db=tcp:,tcp:,tcp: --ovn-northd-sb-db=tcp:,tcp:,tcp:"
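The address values above were stripped when pasting. As a sketch of what the bootstrap node's file looks like in this layout, assuming hypothetical management addresses 10.0.0.1-10.0.0.3 for the three OVN central nodes (substitute your own IPs):

```shell
# /etc/default/ovn-central on the first (bootstrap) node.
# 10.0.0.1 is this node; 6641/6642 are the standard NB/SB database ports.
OVN_CTL_OPTS=" \
  --db-nb-addr=10.0.0.1 \
  --db-nb-create-insecure-remote=yes \
  --db-sb-addr=10.0.0.1 \
  --db-sb-create-insecure-remote=yes \
  --db-nb-cluster-local-addr=10.0.0.1 \
  --db-sb-cluster-local-addr=10.0.0.1 \
  --ovn-northd-nb-db=tcp:10.0.0.1:6641,tcp:10.0.0.2:6641,tcp:10.0.0.3:6641 \
  --ovn-northd-sb-db=tcp:10.0.0.1:6642,tcp:10.0.0.2:6642,tcp:10.0.0.3:6642"
```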

The remaining 2 nodes in the OVN cluster have the following `/etc/default/ovn-central` settings:

OVN_CTL_OPTS=" --db-nb-addr= --db-nb-cluster-remote-addr= --db-nb-create-insecure-remote=yes --db-sb-addr= --db-sb-cluster-remote-addr= --db-sb-create-insecure-remote=yes --db-nb-cluster-local-addr= --db-sb-cluster-local-addr= --ovn-northd-nb-db=tcp:,tcp:,tcp: --ovn-northd-sb-db=tcp:,tcp:,tcp:"
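Again with the values stripped on paste; a sketch of one joining node's file, using the same hypothetical 10.0.0.x addresses (here 10.0.0.2 joining the cluster bootstrapped on 10.0.0.1):

```shell
# /etc/default/ovn-central on each joining node.
# cluster-remote-addr points at the bootstrap node; local-addr is this node.
OVN_CTL_OPTS=" \
  --db-nb-addr=10.0.0.2 \
  --db-nb-cluster-remote-addr=10.0.0.1 \
  --db-nb-create-insecure-remote=yes \
  --db-sb-addr=10.0.0.2 \
  --db-sb-cluster-remote-addr=10.0.0.1 \
  --db-sb-create-insecure-remote=yes \
  --db-nb-cluster-local-addr=10.0.0.2 \
  --db-sb-cluster-local-addr=10.0.0.2 \
  --ovn-northd-nb-db=tcp:10.0.0.1:6641,tcp:10.0.0.2:6641,tcp:10.0.0.3:6641 \
  --ovn-northd-sb-db=tcp:10.0.0.1:6642,tcp:10.0.0.2:6642,tcp:10.0.0.3:6642"
```

Note the southbound option is `--db-sb-cluster-remote-addr` (not `-address`), mirroring the northbound one.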

I am trying to use the OVN routing functionality with NAT to allow outbound internet access from each OVN network. I have created a number of OVN networks, and I am able to attach containers and VMs to these networks and ping between the containers and VMs on a given network. However, I am encountering an issue on the logical router side. When an OVN network is created, an IP address is allocated from the pool defined when creating the "UPLINK" network.
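For context, this is roughly how the networks were created (a sketch; the parent interface, subnet, and range values are hypothetical placeholders, and on a cluster the physical network needs a per-member `--target` pass before the final create):

```shell
# Uplink (physical) network; its ipv4.ovn.ranges pool provides the external
# addresses handed to each OVN network's logical router. Values are placeholders.
lxc network create lab-uplink-internet --type=physical \
    parent=eth1 \
    ipv4.gateway=192.0.2.1/24 \
    ipv4.ovn.ranges=192.0.2.100-192.0.2.254 \
    dns.nameservers=192.0.2.1

# OVN network whose router takes an external IP from that pool:
lxc network create ovn-net01 --type=ovn network=lab-uplink-internet
```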

root@lxd-node05:/var/run/ovn# lxc network show ovn-net01
config:
  bridge.mtu: "1442"
  ipv4.nat: "true"
  ipv6.address: fd42:bf79:e3ba:9532::1/64
  ipv6.nat: "true"
  network: lab-uplink-internet
description: ""
name: ovn-net01
type: ovn
used_by: []
managed: true
status: Created
locations:
- lxd-node01
- lxd-node02
- lxd-node03
- lxd-node05

I am unable to ping the assigned address from the default gateway for the subnet. However, this is not the case on all of the OVN networks created: one of the networks does respond to pings on its assigned NAT address.
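For anyone wanting to reproduce the check: the router's allocated uplink address is, as I understand it, recorded in the network's `volatile.network.ipv4.address` key, so the test looks roughly like this (run from a host on the uplink subnet):

```shell
# Show the external (uplink) address allocated to the OVN network's router:
lxc network get ovn-net01 volatile.network.ipv4.address

# Ping that address from the uplink side:
ping -c 3 "$(lxc network get ovn-net01 volatile.network.ipv4.address)"
```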

Checking OVN on the 3 cluster nodes, I encounter an error when running the command `ovn-nbctl show`:

root@lxd-node05:/var/run/ovn# ovn-nbctl show
ovn-nbctl: unix:/var/run/ovn/ovnnb_db.sock: database connection failed ()
The error is not consistent across the cluster: it moves between the 3 nodes in the OVN HA configuration, and at any given time two out of the three nodes will display it. Checking the directory highlighted in the error on each node, the referenced socket file does exist.
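In case it helps with diagnosis, these are the checks I have been running to compare the raft state on each node, plus querying the northbound database over TCP to bypass the local socket (the 10.0.0.x addresses again stand in for the real node IPs):

```shell
# Raft cluster status of the NB and SB databases on the local node:
ovn-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
ovn-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound

# Query the NB database over TCP against any cluster member, to compare
# with what the failing local unix socket reports:
ovn-nbctl --db=tcp:10.0.0.1:6641,tcp:10.0.0.2:6641,tcp:10.0.0.3:6641 show
```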

Any direction the community can provide would be greatly appreciated.