OVN - ovs-vsctl and open_vswitch using wrong IP address

Hello,

I am setting up an Incus cluster on OVN. When I run tail -f /var/log/ovn/ovsdb-server-nb.log on each node, I see:

2024-06-29T17:41:29.871Z|01271|socket_util|ERR|6643:xx.xx.xx.xx: bind: Cannot assign requested address

This made me realize it’s still using an old IP address from when I was first configuring Open vSwitch. I reran the command with the correct IP addresses:

sudo ovs-vsctl set open_vswitch . \
    external_ids:ovn-remote=tcp:<server_1>:6642,tcp:<server_2>:6642,tcp:<server_3>:6642 \
    external_ids:ovn-encap-type=geneve \
    external_ids:ovn-encap-ip=<local>

but I still get the above error. How can I remove the old IP addresses? Thank you very much.

Also, as a side question, can someone please clarify what we should use for ipv4.ovn.ranges=<IP_range> and ipv6.ovn.ranges=<IP_range>?

The documentation says: “Use suitable IP ranges based on the assigned IPs.” But what does that mean, exactly? Do I use the range of IPs available from my DHCP server?

When I run the following commands:

ovs-appctl -t /run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound

and

ovs-appctl -t /run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound

I also see references to old IP addresses that are no longer in use since I set up unmanaged bridges. It seems I need to run something like the following, but I’m struggling with the syntax:

ovn-nbctl del-connection ptcp:6641:x.x.x.x
ovn-sbctl del-connection ptcp:6642:x.x.x.x

Am I getting any closer?

@stgraber, any guidance on this? :slight_smile:

@stgraber, I just went ahead and started from a clean slate. But I am getting another error when configuring Incus node 1 to communicate with the OVN DB cluster.

I ran this and got an error on node 1. I’ve also provided logs below. Any ideas?

Node 1

root@rpicluster01:~# incus config set network.ovn.northbound_connection tcp:10.0.1.179:6641,tcp:10.0.1.180:6641,tcp:10.0.1.173:6641
Error: failed to notify peer 10.0.1.180:8443: Failed to connect to OVS: failed to connect to unix:///run/openvswitch/db.sock: listdbs failure - unexpected EOF
root@rpicluster01:~# tail -f /var/log/ovn/ovsdb-server-sb.log
2024-07-02T19:26:21.320Z|00006|reconnect|INFO|tcp:10.0.1.173:6644: connected
2024-07-02T19:26:21.320Z|00007|reconnect|INFO|tcp:10.0.1.180:6644: connected
2024-07-02T19:26:21.511Z|00008|raft|INFO|server cca2 is leader for term 2
2024-07-02T19:26:21.511Z|00009|raft|INFO|rejecting append_request because previous entry 2,13 not in local log (mismatch past end of log)
2024-07-02T19:26:22.633Z|00010|raft|INFO|tcp:10.0.1.180:43382: learned server ID 8fed
2024-07-02T19:26:22.633Z|00011|raft|INFO|tcp:10.0.1.180:43382: learned remote address tcp:10.0.1.180:6644
2024-07-02T19:26:30.130Z|00012|raft|INFO|tcp:10.0.1.173:40534: learned server ID cca2
2024-07-02T19:26:30.130Z|00013|raft|INFO|tcp:10.0.1.173:40534: learned remote address tcp:10.0.1.173:6644
2024-07-02T19:26:32.247Z|00014|memory|INFO|8064 kB peak resident set size after 10.0 seconds
2024-07-02T19:26:32.247Z|00015|memory|INFO|atoms:606 cells:413 monitors:0 n-weak-refs:14 raft-connections:4 raft-log:12 txn-history:8 txn-history-atoms:472
^C
root@rpicluster01:~# tail -f /var/log/ovn/ovsdb-server-nb.log
2024-07-02T19:26:21.307Z|00006|reconnect|INFO|tcp:10.0.1.173:6643: connected
2024-07-02T19:26:21.307Z|00007|reconnect|INFO|tcp:10.0.1.180:6643: connected
2024-07-02T19:26:21.384Z|00008|raft|INFO|server 6175 is leader for term 3
2024-07-02T19:26:21.384Z|00009|raft|INFO|rejecting append_request because previous entry 3,5 not in local log (mismatch past end of log)
2024-07-02T19:26:22.633Z|00010|raft|INFO|tcp:10.0.1.180:41168: learned server ID 6175
2024-07-02T19:26:22.633Z|00011|raft|INFO|tcp:10.0.1.180:41168: learned remote address tcp:10.0.1.180:6643
2024-07-02T19:26:30.124Z|00012|raft|INFO|tcp:10.0.1.173:60044: learned server ID 2cd5
2024-07-02T19:26:30.124Z|00013|raft|INFO|tcp:10.0.1.173:60044: learned remote address tcp:10.0.1.173:6643
2024-07-02T19:26:32.233Z|00014|memory|INFO|8064 kB peak resident set size after 10.0 seconds
2024-07-02T19:26:32.233Z|00015|memory|INFO|atoms:35 cells:34 monitors:0 n-weak-refs:0 raft-connections:4 raft-log:4 txn-history:1 txn-history-atoms:18

Node 2

root@rpicluster02:~# tail -f /var/log/ovn/ovsdb-server-sb.log
2024-07-02T19:26:12.297Z|00031|reconnect|INFO|tcp:10.0.1.173:6644: connecting...
2024-07-02T19:26:12.298Z|00032|reconnect|INFO|tcp:10.0.1.173:6644: connected
2024-07-02T19:26:14.164Z|00033|memory|INFO|13624 kB peak resident set size after 10.0 seconds
2024-07-02T19:26:14.165Z|00034|memory|INFO|atoms:547 cells:390 monitors:0 n-weak-refs:13 raft-connections:3 raft-log:11 txn-history:7 txn-history-atoms:413
2024-07-02T19:26:15.164Z|00035|reconnect|INFO|tcp:10.0.1.179:6644: connecting...
2024-07-02T19:26:15.559Z|00036|reconnect|INFO|tcp:10.0.1.179:6644: connection attempt failed (No route to host)
2024-07-02T19:26:15.559Z|00037|reconnect|INFO|tcp:10.0.1.179:6644: continuing to reconnect in the background but suppressing further logging
2024-07-02T19:26:22.248Z|00038|raft|INFO|tcp:10.0.1.179:33598: learned server ID 56a8
2024-07-02T19:26:22.248Z|00039|raft|INFO|tcp:10.0.1.179:33598: learned remote address tcp:10.0.1.179:6644
2024-07-02T19:26:23.560Z|00040|reconnect|INFO|tcp:10.0.1.179:6644: connected
^C
root@rpicluster02:~# tail -f /var/log/ovn/ovsdb-server-nb.log
2024-07-02T19:26:12.297Z|00029|reconnect|INFO|tcp:10.0.1.173:6643: connecting...
2024-07-02T19:26:12.298Z|00030|reconnect|INFO|tcp:10.0.1.173:6643: connected
2024-07-02T19:26:14.162Z|00031|memory|INFO|13624 kB peak resident set size after 10.0 seconds
2024-07-02T19:26:14.162Z|00032|memory|INFO|atoms:35 cells:34 monitors:0 n-weak-refs:0 raft-connections:3 raft-log:4 txn-history:1 txn-history-atoms:18
2024-07-02T19:26:15.163Z|00033|reconnect|INFO|tcp:10.0.1.179:6643: connecting...
2024-07-02T19:26:15.559Z|00034|reconnect|INFO|tcp:10.0.1.179:6643: connection attempt failed (No route to host)
2024-07-02T19:26:15.559Z|00035|reconnect|INFO|tcp:10.0.1.179:6643: continuing to reconnect in the background but suppressing further logging
2024-07-02T19:26:22.234Z|00036|raft|INFO|tcp:10.0.1.179:36744: learned server ID fe51
2024-07-02T19:26:22.234Z|00037|raft|INFO|tcp:10.0.1.179:36744: learned remote address tcp:10.0.1.179:6643
2024-07-02T19:26:23.560Z|00038|reconnect|INFO|tcp:10.0.1.179:6643: connected
^C

root@rpicluster01:~# ovs-vsctl show
b2c18607-68fe-4c1e-89dd-5f592330e32b
    Bridge br-int
        fail_mode: secure
        datapath_type: system
        Port br-int
            Interface br-int
                type: internal
        Port ovn-304065-0
            Interface ovn-304065-0
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.0.1.170"}
        Port ovn-a4242a-0
            Interface ovn-a4242a-0
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.0.1.180"}
        Port ovn-3dbed4-0
            Interface ovn-3dbed4-0
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.0.1.173"}
    ovs_version: "3.3.0"
root@rpicluster02:~# ovs-vsctl show
08ddcb82-1ca7-4280-92ef-c72766e78fe9
    Bridge br-int
        fail_mode: secure
        datapath_type: system
        Port ovn-3dbed4-0
            Interface ovn-3dbed4-0
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.0.1.173"}
        Port br-int
            Interface br-int
                type: internal
        Port ovn-91e2fd-0
            Interface ovn-91e2fd-0
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.0.1.179"}
        Port ovn-304065-0
            Interface ovn-304065-0
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.0.1.170"}
    ovs_version: "3.3.0"

That would most likely be a complaint about the addresses in /etc/default/ovn-central rather than the addresses configured in OVS or Incus (though those also need fixing).

Note that OVN runs a clustered database and that database itself contains the list of other servers and IP addresses, so just replacing IP addresses everywhere will not fix things in a clustered environment.

You’ll want to look at the ovn-appctl command as a way to add/remove members of the OVN cluster, to switch from one set of IPs to another. If you can’t run with both sets of IPs concurrently and your OVN cluster is broken, you’ll need to instead resort to ovsdb-tool to convert the initial server’s databases from clustered back to standalone, then wipe the database on all other servers and restart ovn-central so they attempt a clean re-clustering of the database.

Both of those indicate a range of IP addresses that Incus can freely use for OVN virtual routers.
Those IP addresses should never be used by other systems on your network or be part of a DHCP pool.
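For example, if your uplink is on 10.0.1.0/24 and your DHCP server hands out part of that subnet, you'd reserve an unused slice for OVN. A sketch, assuming an uplink network named UPLINK and an example unused range (substitute your own network name and addresses):

```shell
# Reserve 10.0.1.200-10.0.1.220 for OVN virtual routers on the uplink.
# These addresses must sit outside the DHCP pool and not be used by
# any other host on the segment.
incus network set UPLINK ipv4.ovn.ranges=10.0.1.200-10.0.1.220

# Same idea for IPv6, using a slice outside SLAAC/DHCPv6 assignment
# (2001:db8:: is a documentation prefix used here as a placeholder).
incus network set UPLINK ipv6.ovn.ranges=2001:db8::1000-2001:db8::1fff
```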

Yes. Since you have a functional appctl, you could indeed reconfigure the connection strings with ovn-nbctl set-connection pssl:6641:[::] and ovn-sbctl set-connection pssl:6642:[::] to avoid having hardcoded addresses there.

But that’s unlikely to be the problem. The most likely problem is the list of Servers in the cluster/status output, which will be showing the wrong addresses.

That’s where you’d need to use cluster/kick to remove the other servers and then have them join back.
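A sketch of how that looks, assuming the stale member's short server ID is 1a2b (a placeholder; take the real IDs from the Servers: section of cluster/status):

```shell
# List the current members and note the 4-character server ID of the
# stale entry in the "Servers:" section.
ovn-appctl -t /run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound

# Remove the stale member from the northbound cluster...
ovn-appctl -t /run/ovn/ovnnb_db.ctl cluster/kick OVN_Northbound 1a2b

# ...and its counterpart from the southbound cluster (IDs differ per DB).
ovn-appctl -t /run/ovn/ovnsb_db.ctl cluster/kick OVN_Southbound 1a2b
```

After kicking a server, wipe its local database files and restart ovn-central there so it rejoins with its new address.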

It can get tricky with your own address though, which is why I often find myself having to instead use ovsdb-tool with the cluster-to-standalone command to turn the DB back into a standalone database, then use create-cluster to create a new cluster database with the correct address, put that in place, and then restart the other servers so they rejoin the database.
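A rough sketch of that recovery, run on the initial server (paths and the 10.0.1.179 address match this thread's setup; adjust for your packaging, and back up the database files first):

```shell
# Stop the OVN central services before touching the database files.
systemctl stop ovn-central

# Convert the clustered NB/SB databases back to standalone copies.
ovsdb-tool cluster-to-standalone /tmp/ovnnb_standalone.db /var/lib/ovn/ovnnb_db.db
ovsdb-tool cluster-to-standalone /tmp/ovnsb_standalone.db /var/lib/ovn/ovnsb_db.db

# Re-create single-member clusters at the correct local address
# (6643/6644 are the NB/SB raft ports seen in the logs above).
rm /var/lib/ovn/ovnnb_db.db /var/lib/ovn/ovnsb_db.db
ovsdb-tool create-cluster /var/lib/ovn/ovnnb_db.db /tmp/ovnnb_standalone.db tcp:10.0.1.179:6643
ovsdb-tool create-cluster /var/lib/ovn/ovnsb_db.db /tmp/ovnsb_standalone.db tcp:10.0.1.179:6644

# Start this server again, then wipe /var/lib/ovn on the other servers
# and restart ovn-central there so they join the new cluster cleanly.
systemctl start ovn-central
```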

Then finally make sure cluster/status shows all your servers as expected.

That’s odd. I’d start with restarting Incus on all servers with systemctl restart incus to make sure they’re dealing with the correct DB string.

Then I’d use ovn-appctl against both the NB and SB databases with the cluster/status command to make sure you see all your servers in there and that there’s no weird issue where one failed to join.

Thanks for the additional guidance. So when I ran cluster/status against both nb and sb, I see this:

root@rpicluster01:~# ovn-appctl -t nb cluster/status
2024-07-13T21:48:29Z|00001|daemon_unix|WARN|/var/run/ovn/nb.pid: open: No such file or directory
ovn-appctl: cannot read pidfile "/var/run/ovn/nb.pid" (No such file or directory)
root@rpicluster01:~# ovn-appctl -t sb cluster/status
2024-07-13T21:48:57Z|00001|daemon_unix|WARN|/var/run/ovn/sb.pid: open: No such file or directory
ovn-appctl: cannot read pidfile "/var/run/ovn/sb.pid" (No such file or directory)

I then double checked systemctl status ovn-northd to make sure it’s running, and confirmed it is. I also ran a query shown below, and am unable to find nb.pid nor sb.pid.

root@rpicluster01:~# ls /var/run/ovn
ovn-controller.1348.ctl  ovn-northd.pid  ovnnb_db.sock  ovnsb_db.sock
ovn-controller.pid       ovnnb_db.ctl    ovnsb_db.ctl
ovn-northd.1444.ctl      ovnnb_db.pid    ovnsb_db.pid

Any ideas on why they are missing?

Try using ovnnb_db and ovnsb_db as DB names.

Thank you. So I tried doing that and I get another error:

root@rpicluster01:~# ovn-appctl -t ovnnb_db cluster/status
2024-07-15T18:57:13Z|00001|unixctl|WARN|failed to connect to /var/run/ovn/ovnnb_db.1353.ctl
ovn-appctl: cannot connect to "/var/run/ovn/ovnnb_db.1353.ctl" (No such file or directory)

root@rpicluster01:~# ovn-appctl -t ovnsb_db cluster/status
2024-07-15T19:02:35Z|00001|unixctl|WARN|failed to connect to /var/run/ovn/ovnsb_db.1343.ctl
ovn-appctl: cannot connect to "/var/run/ovn/ovnsb_db.1343.ctl" (No such file or directory)

When I check that directory again, I see this:

root@rpicluster01:~# ls -ail /var/run/ovn
total 16
2882 drwxr-xr-x  2 root root 240 Jul 15 11:39 .
   1 drwxr-xr-x 33 root root 960 Jul 15 11:52 ..
2884 srwxr-x---  1 root root   0 Jul 15 11:39 ovn-controller.1299.ctl
2883 -rw-r--r--  1 root root   5 Jul 15 11:39 ovn-controller.pid
2892 srwxr-x---  1 root root   0 Jul 15 11:39 ovn-northd.1358.ctl
2891 -rw-r--r--  1 root root   5 Jul 15 11:39 ovn-northd.pid
2890 srwxr-x---  1 root root   0 Jul 15 11:39 ovnnb_db.ctl
2886 -rw-r--r--  1 root root   5 Jul 15 11:39 ovnnb_db.pid
2889 srwxr-x---  1 root root   0 Jul 15 11:39 ovnnb_db.sock
2888 srwxr-x---  1 root root   0 Jul 15 11:39 ovnsb_db.ctl
2885 -rw-r--r--  1 root root   5 Jul 15 11:39 ovnsb_db.pid
2887 srwxr-x---  1 root root   0 Jul 15 11:39 ovnsb_db.sock

When I run ps aux, looking for the nb and sb processes, I see this:

root@rpicluster01:~# ps aux | grep ovnnb_db
root        1353  0.1  0.0 158684  8064 ?        Rl   11:39   0:02 ovsdb-server -vconsole:off -vfile:info --log-file=/var/log/ovn/ovsdb-server-nb.log --remote=punix:/var/run/ovn/ovnnb_db.sock --pidfile=/var/run/ovn/ovnnb_db.pid --unixctl=/var/run/ovn/ovnnb_db.ctl --remote=db:OVN_Northbound,NB_Global,connections --private-key=db:OVN_Northbound,SSL,private_key --certificate=db:OVN_Northbound,SSL,certificate --ca-cert=db:OVN_Northbound,SSL,ca_cert --ssl-protocols=db:OVN_Northbound,SSL,ssl_protocols --ssl-ciphers=db:OVN_Northbound,SSL,ssl_ciphers --remote=ptcp:6641:10.0.1.179 /var/lib/ovn/ovnnb_db.db
root@rpicluster01:~# ps aux | grep ovnsb_db
root        1343  0.1  0.1 158948  8192 ?        Sl   11:39   0:01 ovsdb-server -vconsole:off -vfile:info --log-file=/var/log/ovn/ovsdb-server-sb.log --remote=punix:/var/run/ovn/ovnsb_db.sock --pidfile=/var/run/ovn/ovnsb_db.pid --unixctl=/var/run/ovn/ovnsb_db.ctl --remote=db:OVN_Southbound,SB_Global,connections --private-key=db:OVN_Southbound,SSL,private_key --certificate=db:OVN_Southbound,SSL,certificate --ca-cert=db:OVN_Southbound,SSL,ca_cert --ssl-protocols=db:OVN_Southbound,SSL,ssl_protocols --ssl-ciphers=db:OVN_Southbound,SSL,ssl_ciphers --remote=ptcp:6642:10.0.1.179 /var/lib/ovn/ovnsb_db.db
root        1963  0.0  0.0   3688  1792 pts/1    S+   12:00   0:00 grep --color=auto ovnsb_db

Sorry to keep asking for help; I’m really scratching my head on this one. I also tried restarting the OVN services, but that didn’t help either. Checking systemctl, everything appears fine:

root@rpicluster01:~# systemctl status ovn-northd
● ovn-northd.service - Open Virtual Network central control daemon
     Loaded: loaded (/usr/lib/systemd/system/ovn-northd.service; static)
     Active: active (running) since Mon 2024-07-15 11:39:55 PDT; 26min ago
    Process: 1082 ExecStart=/usr/share/ovn/scripts/ovn-ctl start_northd --ovn-manage-ovsdb=no --no-monitor $OVN_CTL_OPTS (code=exited, status=0/S>
   Main PID: 1358 (ovn-northd)
      Tasks: 3 (limit: 9078)
     Memory: 3.2M (peak: 3.7M)
        CPU: 305ms
     CGroup: /system.slice/ovn-northd.service
             └─1358 ovn-northd -vconsole:emer -vsyslog:err -vfile:info --ovnnb-db=tcp:10.0.1.179:6641,tcp:10.0.1.180:6641,tcp:10.0.1.173:6641 --o>

Jul 15 11:39:55 rpicluster01 systemd[1]: Starting ovn-northd.service - Open Virtual Network central control daemon...
Jul 15 11:39:55 rpicluster01 ovn-ctl[1082]:  * Starting ovn-northd
Jul 15 11:39:55 rpicluster01 systemd[1]: Started ovn-northd.service - Open Virtual Network central control daemon.

Why would it specifically expect ovnnb_db.1353.ctl and ovnsb_db.1343.ctl?

root@abydos:~# ovn-appctl -t /run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
6498
Name: OVN_Northbound
Cluster ID: 6bc5 (6bc5323f-27cc-42fd-b71d-6b94f5f6b003)
Server ID: 6498 (64987e11-1224-440a-b42f-8764d8b62fd4)
Address: tcp:[2602:fc62:a:101::100]:6643
Status: cluster member
Role: leader
Term: 7228
Leader: self
Vote: self

Last Election started 56625468 ms ago, reason: leadership_transfer
Last Election won: 56625449 ms ago
Election timer: 1000
Log: [489554, 490679]
Entries not yet committed: 0
Entries not yet applied: 0
Connections: ->7a93 ->3de6 <-3de6 <-7a93
Disconnections: 2
Servers:
    6498 (6498 at tcp:[2602:fc62:a:101::100]:6643) (self) next_index=490099 match_index=490678
    7a93 (7a93 at tcp:[2602:fc62:a:101::102]:6643) next_index=490679 match_index=490678 last msg 329 ms ago
    3de6 (3de6 at tcp:[2602:fc62:a:101::101]:6643) next_index=490679 match_index=490678 last msg 329 ms ago
root@abydos:~# ovn-appctl -t /run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound
d9bc
Name: OVN_Southbound
Cluster ID: e5ca (e5ca6f62-021c-4704-9fec-73d2656ef9f7)
Server ID: d9bc (d9bc44b6-a0a5-4a34-b5f5-f4e4d8d63c5d)
Address: tcp:[2602:fc62:a:101::100]:6644
Status: cluster member
Role: follower
Term: 7418
Leader: fb78
Vote: fb78

Election timer: 1000
Log: [1604346, 1605526]
Entries not yet committed: 0
Entries not yet applied: 0
Connections: ->fb78 ->cd14 <-fb78 <-cd14
Disconnections: 2
Servers:
    fb78 (fb78 at tcp:[2602:fc62:a:101::101]:6644) last msg 148 ms ago
    cd14 (cd14 at tcp:[2602:fc62:a:101::102]:6644) last msg 54891146 ms ago
    d9bc (d9bc at tcp:[2602:fc62:a:101::100]:6644) (self)
root@abydos:~# 

Thanks for that. I was able to run the commands to see the list of servers as you suggested, and everything seems in order at first glance. I see all my servers in there, and there’s no weird issue where one failed to join. But I still get the same error, as shown further below.

root@rpicluster01:~# ovn-appctl -t /run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
fe51
Name: OVN_Northbound
Cluster ID: f7fd (f7fd57bb-a6a7-4f16-b6b2-097e04e44689)
Server ID: fe51 (fe51b1d5-0ff4-41ec-b785-41468cd6ef86)
Address: tcp:10.0.1.179:6643
Status: cluster member
Role: follower
Term: 19
Leader: 6175
Vote: unknown

Election timer: 1000
Log: [2, 18]
Entries not yet committed: 0
Entries not yet applied: 0
Connections: ->2cd5 ->6175 <-2cd5 <-6175
Disconnections: 0
Servers:
    2cd5 (2cd5 at tcp:10.0.1.173:6643) last msg 271878 ms ago
    fe51 (fe51 at tcp:10.0.1.179:6643) (self)
    6175 (6175 at tcp:10.0.1.180:6643) last msg 242 ms ago

root@rpicluster01:~# ovn-appctl -t /run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound
56a8
Name: OVN_Southbound
Cluster ID: 41d1 (41d10873-c7b0-44bd-96b0-db21a734c868)
Server ID: 56a8 (56a8b5d1-7475-43c8-903d-c9afa46f730a)
Address: tcp:10.0.1.179:6644
Status: cluster member
Role: follower
Term: 15
Leader: cca2
Vote: unknown

Last Election started 360943 ms ago, reason: timeout
Election timer: 1000
Log: [2, 49]
Entries not yet committed: 0
Entries not yet applied: 0
Connections: ->cca2 ->8fed <-cca2 <-8fed
Disconnections: 0
Servers:
    56a8 (56a8 at tcp:10.0.1.179:6644) (self)
    cca2 (cca2 at tcp:10.0.1.173:6644) last msg 193 ms ago
    8fed (8fed at tcp:10.0.1.180:6644) last msg 361479 ms ago

Anything else come to mind on why I still get this error?

root@rpicluster01:~# incus config set network.ovn.northbound_connection tcp:10.0.1.179:6641,tcp:10.0.1.180:6641,tcp:10.0.1.173:6641
Error: failed to notify peer 10.0.1.180:8443: Failed to connect to OVS: failed to connect to unix:///run/openvswitch/db.sock: listdbs failure - unexpected EOF

There seems to be an issue going on with Open vSwitch on the system at 10.0.1.180.

Basically, Incus on that system attempted to reconfigure OVS and that failed.
So this is not likely to be related to your OVN databases/daemon, but to OVS instead.
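A few checks I’d run on 10.0.1.180 to see whether ovsdb-server is actually alive (the service name assumes Debian/Ubuntu packaging; adjust for your distro):

```shell
# Is the OVS daemon set running, and is its database socket present?
systemctl status openvswitch-switch
ls -l /run/openvswitch/db.sock

# Does the database answer? A hung ovsdb-server will make this time out
# instead of printing the bridge configuration.
ovs-vsctl --timeout=5 show
```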

Thanks, @stgraber. What do you suggest I do? And what tooling are you referring to? The latest OVS I see online is v3.3.1, and the latest OVN is 24.03.

Try running ovs-vsctl show on all servers, especially on the one that’s reporting an error.

Nothing seems out of the ordinary to me on this…

root@rpicluster01:~# ovs-vsctl show
b2c18607-68fe-4c1e-89dd-5f592330e32b
    Bridge br-int
        fail_mode: secure
        datapath_type: system
        Port br-int
            Interface br-int
                type: internal
        Port ovn-3dbed4-0
            Interface ovn-3dbed4-0
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.0.1.173"}
        Port ovn-a4242a-0
            Interface ovn-a4242a-0
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.0.1.180"}
        Port ovn-304065-0
            Interface ovn-304065-0
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.0.1.170"}
    ovs_version: "3.3.0"

root@rpicluster02:~# ovs-vsctl show 
08ddcb82-1ca7-4280-92ef-c72766e78fe9
    Bridge br-int
        fail_mode: secure
        datapath_type: system
        Port ovn-304065-0
            Interface ovn-304065-0
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.0.1.170"}
        Port br-int
            Interface br-int
                type: internal
        Port ovn-3dbed4-0
            Interface ovn-3dbed4-0
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.0.1.173"}
        Port ovn-91e2fd-0
            Interface ovn-91e2fd-0
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.0.1.179"}
    ovs_version: "3.3.0"
root@rpicluster03:~# ovs-vsctl show 
3988f4c5-c232-4a76-9045-702daa8dbdd1
    Bridge br-int
        fail_mode: secure
        datapath_type: system
        Port ovn-91e2fd-0
            Interface ovn-91e2fd-0
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.0.1.179"}
        Port ovn-a4242a-0
            Interface ovn-a4242a-0
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.0.1.180"}
        Port br-int
            Interface br-int
                type: internal
        Port ovn-304065-0
            Interface ovn-304065-0
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.0.1.170"}
    ovs_version: "3.3.0"

root@rpicluster04:~# ovs-vsctl show 
c1e85aa8-f5b6-4116-9c3c-280fc58f037a
    Bridge br-int
        fail_mode: secure
        datapath_type: system
        Port br-int
            Interface br-int
                type: internal
        Port ovn-a4242a-0
            Interface ovn-a4242a-0
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.0.1.180"}
        Port ovn-91e2fd-0
            Interface ovn-91e2fd-0
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.0.1.179"}
        Port ovn-3dbed4-0
            Interface ovn-3dbed4-0
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.0.1.173"}
    ovs_version: "3.3.0"

Check that ovs-vsctl get Open_vSwitch . external_ids:ovn-remote returns the correct set of southbound DB addresses on all systems.

Yes, they do. The IPs shown below are node1, node2, and node3, respectively.

root@rpicluster01:~# ovs-vsctl get Open_vSwitch . external_ids:ovn-remote
"tcp:10.0.1.179:6642,tcp:10.0.1.180:6642,tcp:10.0.1.173:6642"