Created an LXD cluster with OVN but I have no connectivity to the internet or to the containers

I’ve created an LXD cluster using OVN, following this document.
The cluster was built on CentOS 7 VMware VMs, each with two physical interfaces (eth0 and eth1).
On each VM, a bridge was created on the eth1 physical interface and an IP address was added to the bridge:
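The exact commands used aren't shown in the post, but a minimal sketch of that per-node bridge setup (assumed commands, using qa1lxcluster01's addressing; these settings are not persistent across reboots) could look like:

```shell
# Hedged sketch of the per-node bridge setup; adjust the address per node.
ip link add br1 type bridge           # create the bridge
ip link set eth1 master br1           # enslave the second NIC
ip addr add 10.201.72.151/24 dev br1  # put the uplink IP on the bridge
ip link set br1 up
ip link set eth1 up
```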

```bash
[qa1lxcluster01]
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:91:e0:74 brd ff:ff:ff:ff:ff:ff
    inet 10.201.70.151/24 brd 10.201.70.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe91:e074/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br1 state UP group default qlen 1000
    link/ether 00:50:56:91:87:82 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::250:56ff:fe91:8782/64 scope link 
       valid_lft forever preferred_lft forever
4: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:50:56:91:87:82 brd ff:ff:ff:ff:ff:ff
    inet 10.201.72.151/24 brd 10.201.72.255 scope global br1
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe91:8782/64 scope link 
       valid_lft forever preferred_lft forever
5: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ae:1d:28:55:76:30 brd ff:ff:ff:ff:ff:ff
6: br-int: <BROADCAST,MULTICAST> mtu 1442 qdisc noop state DOWN group default qlen 1000
    link/ether a2:d6:60:c1:3a:d4 brd ff:ff:ff:ff:ff:ff
7: genev_sys_6081: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
    link/ether 4e:bb:8a:ce:18:a1 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::4cbb:8aff:fece:18a1/64 scope link 
       valid_lft forever preferred_lft forever
8: lxdovn1b@lxdovn1a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
    link/ether 0a:8e:ac:de:ee:26 brd ff:ff:ff:ff:ff:ff
9: lxdovn1a@lxdovn1b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br1 state UP group default qlen 1000
    link/ether 72:36:9b:83:6f:3a brd ff:ff:ff:ff:ff:ff
10: lxdovn1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 52:4f:d7:87:a7:4d brd ff:ff:ff:ff:ff:ff
12: veth636ebb62@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc noqueue master ovs-system state UP group default qlen 1000
    link/ether fa:3b:ef:5e:10:ad brd ff:ff:ff:ff:ff:ff link-netnsid 0
[qa1lxcluster02]
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:91:dc:93 brd ff:ff:ff:ff:ff:ff
    inet 10.201.70.152/24 brd 10.201.70.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe91:dc93/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br1 state UP group default qlen 1000
    link/ether 00:50:56:91:12:f1 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::250:56ff:fe91:12f1/64 scope link 
       valid_lft forever preferred_lft forever
4: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:50:56:91:12:f1 brd ff:ff:ff:ff:ff:ff
    inet 10.201.72.152/24 brd 10.201.72.255 scope global br1
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe91:12f1/64 scope link 
       valid_lft forever preferred_lft forever
5: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ce:fc:f8:46:fa:5d brd ff:ff:ff:ff:ff:ff
6: br-int: <BROADCAST,MULTICAST> mtu 1442 qdisc noop state DOWN group default qlen 1000
    link/ether 9a:ac:d6:7c:19:9f brd ff:ff:ff:ff:ff:ff
7: genev_sys_6081: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
    link/ether a6:fc:ae:b9:60:91 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a4fc:aeff:feb9:6091/64 scope link 
       valid_lft forever preferred_lft forever
8: lxdovn1b@lxdovn1a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
    link/ether 6a:52:0f:6c:45:28 brd ff:ff:ff:ff:ff:ff
9: lxdovn1a@lxdovn1b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br1 state UP group default qlen 1000
    link/ether f6:60:0f:cc:62:9d brd ff:ff:ff:ff:ff:ff
10: lxdovn1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 36:b6:a5:88:26:4f brd ff:ff:ff:ff:ff:ff
12: vethee845f7e@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc noqueue master ovs-system state UP group default qlen 1000
    link/ether 5a:be:d0:61:7d:14 brd ff:ff:ff:ff:ff:ff link-netnsid 0
[qa1lxcluster03]
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:91:70:1e brd ff:ff:ff:ff:ff:ff
    inet 10.201.70.153/24 brd 10.201.70.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe91:701e/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br1 state UP group default qlen 1000
    link/ether 00:50:56:91:1b:75 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::250:56ff:fe91:1b75/64 scope link 
       valid_lft forever preferred_lft forever
4: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:50:56:91:1b:75 brd ff:ff:ff:ff:ff:ff
    inet 10.201.72.153/24 brd 10.201.72.255 scope global br1
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe91:1b75/64 scope link 
       valid_lft forever preferred_lft forever
5: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 96:86:77:c8:3f:5f brd ff:ff:ff:ff:ff:ff
6: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether c2:8f:09:a4:38:62 brd ff:ff:ff:ff:ff:ff
7: genev_sys_6081: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
    link/ether ee:56:ae:b5:be:26 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ec56:aeff:feb5:be26/64 scope link 
       valid_lft forever preferred_lft forever
8: lxdovn1b@lxdovn1a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
    link/ether 7e:be:6e:63:28:8f brd ff:ff:ff:ff:ff:ff
9: lxdovn1a@lxdovn1b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br1 state UP group default qlen 1000
    link/ether 5a:bc:3a:b6:9e:83 brd ff:ff:ff:ff:ff:ff
10: lxdovn1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 82:9a:7a:46:89:42 brd ff:ff:ff:ff:ff:ff

[qa1lxcluster04]
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:91:f9:ac brd ff:ff:ff:ff:ff:ff
    inet 10.201.70.154/24 brd 10.201.70.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe91:f9ac/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br1 state UP group default qlen 1000
    link/ether 00:50:56:91:14:ed brd ff:ff:ff:ff:ff:ff
    inet6 fe80::250:56ff:fe91:14ed/64 scope link 
       valid_lft forever preferred_lft forever
4: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:50:56:91:14:ed brd ff:ff:ff:ff:ff:ff
    inet 10.201.72.154/24 brd 10.201.72.255 scope global br1
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe91:14ed/64 scope link 
       valid_lft forever preferred_lft forever
5: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether e6:6f:e0:70:61:c4 brd ff:ff:ff:ff:ff:ff
6: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether f2:21:c2:13:ad:2b brd ff:ff:ff:ff:ff:ff
7: genev_sys_6081: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
    link/ether c2:6a:15:67:83:dc brd ff:ff:ff:ff:ff:ff
    inet6 fe80::c06a:15ff:fe67:83dc/64 scope link 
       valid_lft forever preferred_lft forever
8: lxdovn1b@lxdovn1a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
    link/ether 9e:a4:20:f6:0b:e6 brd ff:ff:ff:ff:ff:ff
9: lxdovn1a@lxdovn1b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br1 state UP group default qlen 1000
    link/ether e6:08:54:ad:71:82 brd ff:ff:ff:ff:ff:ff
10: lxdovn1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 8e:44:27:48:d2:4a brd ff:ff:ff:ff:ff:ff
```

This is the Northbound DB configuration:

```bash
QA1 qa1lxcluster04 ~# ovn-nbctl show
switch b073fab1-ba86-49c1-bada-bd28f48f824a (lxd-net2-ls-ext)
    port lxd-net2-ls-ext-lsp-provider
        type: localnet
        addresses: ["unknown"]
    port lxd-net2-ls-ext-lsp-router
        type: router
        router-port: lxd-net2-lr-lrp-ext
switch 8c1a9c7b-e820-4fe1-8cad-5929588e4a30 (lxd-net2-ls-int)
    port lxd-net2-instance-2d331122-d6d1-4630-b714-3ce3481f0259-eth0
        addresses: ["00:16:3e:ea:52:a7 dynamic"]
    port lxd-net2-ls-int-lsp-router
        type: router
        router-port: lxd-net2-lr-lrp-int
    port lxd-net2-instance-9c1fff4e-ee27-4d81-a033-1bc3448f33f3-eth0
        addresses: ["00:16:3e:bb:7b:a9 dynamic"]
router cd038c7e-3c25-4ed3-a2fe-b58897ed500a (lxd-net2-lr)
    port lxd-net2-lr-lrp-ext
        mac: "00:16:3e:92:07:9f"
        networks: ["10.201.72.230/24"]
        gateway chassis: [25ff4760-b144-4236-9746-88d874118dd3 776ab8a8-89ab-428e-bc7d-e58a80fc2741 655bfb5e-f4e2-4b3c-9e23-a37c7c664805 90807679-74bd-4093-af56-48cb1351f3b1]
    port lxd-net2-lr-lrp-int
        mac: "00:16:3e:92:07:9f"
        networks: ["10.27.67.1/24", "fd42:e884:58a4:58aa::1/64"]
    nat cc382ace-ff51-4acf-8373-89a7ae3cd112
        external ip: "10.201.72.230"
        logical ip: "10.27.67.0/24"
        type: "snat"
```

And the Southbound DB:

```bash
QA1 qa1lxcluster04 ~# ovn-sbctl show
Chassis "90807679-74bd-4093-af56-48cb1351f3b1"
    hostname: qa1lxcluster01-lb
    Encap geneve
        ip: "10.201.72.151"
        options: {csum="true"}
    Port_Binding lxd-net2-instance-9c1fff4e-ee27-4d81-a033-1bc3448f33f3-eth0
Chassis "25ff4760-b144-4236-9746-88d874118dd3"
    hostname: qa1lxcluster03-lb
    Encap geneve
        ip: "10.201.72.153"
        options: {csum="true"}
    Port_Binding cr-lxd-net2-lr-lrp-ext
Chassis "655bfb5e-f4e2-4b3c-9e23-a37c7c664805"
    hostname: qa1lxcluster04-lb
    Encap geneve
        ip: "10.201.72.154"
        options: {csum="true"}
Chassis "776ab8a8-89ab-428e-bc7d-e58a80fc2741"
    hostname: qa1lxcluster02-lb
    Encap geneve
        ip: "10.201.72.152"
        options: {csum="true"}
    Port_Binding lxd-net2-instance-2d331122-d6d1-4630-b714-3ce3481f0259-eth0
```

Also, this is my HA gateway chassis configuration:

```bash
QA1 qa1lxcluster04 ~# ovn-nbctl lrp-get-gateway-chassis lxd-net2-lr-lrp-ext
lxd-net2-lr-lrp-ext-25ff4760-b144-4236-9746-88d874118dd3    40
lxd-net2-lr-lrp-ext-90807679-74bd-4093-af56-48cb1351f3b1    39
lxd-net2-lr-lrp-ext-655bfb5e-f4e2-4b3c-9e23-a37c7c664805    38
lxd-net2-lr-lrp-ext-776ab8a8-89ab-428e-bc7d-e58a80fc2741    37
```

And my LXD OVN network configuration:

```bash
QA1 qa1lxcluster03 ~# lxc network show ovn-uplink-br1 
config:
  bridge.mtu: "1442"
  ipv4.address: 10.27.67.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:e884:58a4:58aa::1/64
  ipv6.nat: "true"
  network: uplink-br1
  volatile.network.ipv4.address: 10.201.72.230
description: ""
name: ovn-uplink-br1
type: ovn
used_by:
- /1.0/instances/c1
- /1.0/instances/c2
managed: true
status: Created
locations:
- qa1lxcluster01
- qa1lxcluster02
- qa1lxcluster03
- qa1lxcluster04
```

The firewall service is disabled on every LXD node.
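One way to confirm that on each node (a sketch; firewalld is the stock CentOS 7 firewall, and any leftover iptables rules would show up here too):

```shell
# Verify no host firewall is interfering (run as root on each node).
systemctl is-active firewalld || true          # expect "inactive" or "unknown"
iptables -S | grep -E 'DROP|REJECT' || echo "no DROP/REJECT rules"
```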

LXC networks:

```bash
QA1 qa1lxcluster03 ~# lxc network ls
+----------------+----------+---------+----------------+---------------------------+-------------+---------+---------+
|      NAME      |   TYPE   | MANAGED |      IPV4      |           IPV6            | DESCRIPTION | USED BY |  STATE  |
+----------------+----------+---------+----------------+---------------------------+-------------+---------+---------+
| br1            | bridge   | NO      |                |                           |             | 1       |         |
+----------------+----------+---------+----------------+---------------------------+-------------+---------+---------+
| br-int         | bridge   | NO      |                |                           |             | 0       |         |
+----------------+----------+---------+----------------+---------------------------+-------------+---------+---------+
| eth0           | physical | NO      |                |                           |             | 0       |         |
+----------------+----------+---------+----------------+---------------------------+-------------+---------+---------+
| eth1           | physical | NO      |                |                           |             | 0       |         |
+----------------+----------+---------+----------------+---------------------------+-------------+---------+---------+
| lxdovn1        | bridge   | NO      |                |                           |             | 0       |         |
+----------------+----------+---------+----------------+---------------------------+-------------+---------+---------+
| ovn-uplink-br1 | ovn      | YES     | 10.172.50.1/24 | fd42:dde0:65f5:4aa6::1/64 |             | 2       | CREATED |
+----------------+----------+---------+----------------+---------------------------+-------------+---------+---------+
| uplink-br1     | physical | YES     |                |                           |             | 1       | CREATED |
+----------------+----------+---------+----------------+---------------------------+-------------+---------+---------+
```

Any ideas on what troubleshooting I could do? Thanks in advance.

Running ping from container c1:

```bash
[root@c1 ~]# ping 8.8.8.8 -c4
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.

--- 8.8.8.8 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 2999ms
```

On the gateway chassis I ran tcpdump and got:

```bash
QA1 qa1lxcluster03 ~# tcpdump -i eth1 host 8.8.8.8 -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
15:22:11.542937 IP 10.201.72.230 > 8.8.8.8: ICMP echo request, id 1161, seq 1, length 64
15:22:12.541732 IP 10.201.72.230 > 8.8.8.8: ICMP echo request, id 1161, seq 2, length 64
15:22:13.541790 IP 10.201.72.230 > 8.8.8.8: ICMP echo request, id 1161, seq 3, length 64
15:22:14.542157 IP 10.201.72.230 > 8.8.8.8: ICMP echo request, id 1161, seq 4, length 64
```

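The capture shows the SNATed echo requests leaving eth1 with no replies coming back, which points at the upstream side. A hedged next step would be to check whether ARP for the router's external IP is being answered on the uplink:

```shell
# On the active gateway chassis: watch ARP on the uplink while pinging
# from the container; if "who-has 10.201.72.230" requests from the
# upstream gateway go unanswered, replies can never make it back.
tcpdump -i eth1 -n arp

# The router port MAC that should be answering, straight from the NB DB:
ovn-nbctl --columns=mac,networks list Logical_Router_Port lxd-net2-lr-lrp-ext
```
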
Versions I’m using:

- OVN: ovn-21.03.0-1.el7.x86_64
- Open vSwitch: openvswitch-2.15.90-1.el7.x86_64
- LXD: lxd 5.5-37534be 23537 latest/stable canonical* -
- Kernel: 3.10.0-1160.76.1.el7.x86_64
- CentOS 7.9.2009

Southbound flows for 10.201.72.230 (the OVN router's external IPv4 address):

```bash
qa1lxcluster02 ~# ovn-sbctl lflow-list | grep 10.201.72.230
  table=3 (lr_in_ip_input     ), priority=120  , match=(inport == "lxd-net2-lr-lrp-ext" && ip4.src == 10.201.72.230), action=(next;)
  table=3 (lr_in_ip_input     ), priority=100  , match=(ip4.src == {10.201.72.230, 10.201.72.255} && reg9[0] == 0), action=(drop;)
  table=3 (lr_in_ip_input     ), priority=92   , match=(inport == "lxd-net2-lr-lrp-ext" && arp.op == 1 && arp.tpa == 10.201.72.230 && is_chassis_resident("cr-lxd-net2-lr-lrp-ext")), action=(eth.dst = eth.src; eth.src = xreg0[0..47]; arp.op = 2; /* ARP reply */ arp.tha = arp.sha; arp.sha = xreg0[0..47]; arp.tpa = arp.spa; arp.spa = 10.201.72.230; outport = inport; flags.loopback = 1; output;)
  table=3 (lr_in_ip_input     ), priority=91   , match=(inport == "lxd-net2-lr-lrp-ext" && arp.op == 1 && arp.tpa == 10.201.72.230), action=(drop;)
  table=3 (lr_in_ip_input     ), priority=90   , match=(arp.op == 1 && arp.tpa == 10.201.72.230), action=(eth.dst = eth.src; eth.src = xreg0[0..47]; arp.op = 2; /* ARP reply */ arp.tha = arp.sha; arp.sha = xreg0[0..47]; arp.tpa = arp.spa; arp.spa = 10.201.72.230; outport = inport; flags.loopback = 1; output;)
  table=3 (lr_in_ip_input     ), priority=90   , match=(inport == "lxd-net2-lr-lrp-ext" && arp.op == 1 && arp.tpa == 10.201.72.230 && arp.spa == 10.201.72.0/24 && is_chassis_resident("cr-lxd-net2-lr-lrp-ext")), action=(eth.dst = eth.src; eth.src = xreg0[0..47]; arp.op = 2; /* ARP reply */ arp.tha = arp.sha; arp.sha = xreg0[0..47]; arp.tpa = arp.spa; arp.spa = 10.201.72.230; outport = inport; flags.loopback = 1; output;)
  table=3 (lr_in_ip_input     ), priority=90   , match=(ip4.dst == 10.201.72.230 && icmp4.type == 8 && icmp4.code == 0), action=(ip4.dst <-> ip4.src; ip.ttl = 255; icmp4.type = 0; flags.loopback = 1; next; )
  table=3 (lr_in_ip_input     ), priority=40   , match=(inport == "lxd-net2-lr-lrp-ext" && ip4 && ip.ttl == {0, 1} && !ip.later_frag), action=(icmp4 {eth.dst <-> eth.src; icmp4.type = 11; /* Time exceeded */ icmp4.code = 0; /* TTL exceeded in transit */ ip4.dst = ip4.src; ip4.src = 10.201.72.230; ip.ttl = 255; next; };)
  table=5 (lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 10.201.72.230 && inport == "lxd-net2-lr-lrp-ext" && is_chassis_resident("cr-lxd-net2-lr-lrp-ext")), action=(ct_snat;)
  table=10(lr_in_ip_routing   ), priority=49   , match=(ip4.dst == 10.201.72.0/24), action=(ip.ttl--; reg8[0..15] = 0; reg0 = ip4.dst; reg1 = 10.201.72.230; eth.src = 00:16:3e:60:eb:83; outport = "lxd-net2-lr-lrp-ext"; flags.loopback = 1; next;)
  table=10(lr_in_ip_routing   ), priority=1    , match=(ip4.dst == 0.0.0.0/0), action=(ip.ttl--; reg8[0..15] = 0; reg0 = 10.201.72.1; reg1 = 10.201.72.230; eth.src = 00:16:3e:60:eb:83; outport = "lxd-net2-lr-lrp-ext"; flags.loopback = 1; next;)
  table=14(lr_in_arp_resolve  ), priority=100  , match=(outport == "lxd-net2-lr-lrp-ext" && reg0 == 10.201.72.230), action=(eth.dst = 00:16:3e:60:eb:83; next;)
  table=14(lr_in_arp_resolve  ), priority=1    , match=(ip4.dst == {10.201.72.230}), action=(drop;)
  table=1 (lr_out_snat        ), priority=153  , match=(ip && ip4.src == 10.43.104.0/24 && outport == "lxd-net2-lr-lrp-ext" && is_chassis_resident("cr-lxd-net2-lr-lrp-ext")), action=(ct_snat(10.201.72.230);)
  table=2 (lr_out_egr_loop    ), priority=100  , match=(ip4.dst == 10.201.72.230 && outport == "lxd-net2-lr-lrp-ext" && is_chassis_resident("cr-lxd-net2-lr-lrp-ext")), action=(clone { ct_clear; inport = outport; outport = ""; flags = 0; flags.loopback = 1; reg0 = 0; reg1 = 0; reg2 = 0; reg3 = 0; reg4 = 0; reg5 = 0; reg6 = 0; reg7 = 0; reg8 = 0; reg9 = 0; reg9[0] = 1; next(pipeline=ingress, table=0); };)
  table=17(ls_in_arp_rsp      ), priority=100  , match=(arp.tpa == 10.201.72.230 && arp.op == 1 && inport == "lxd-net2-ls-ext-lsp-router"), action=(next;)
  table=17(ls_in_arp_rsp      ), priority=50   , match=(arp.tpa == 10.201.72.230 && arp.op == 1), action=(eth.dst = eth.src; eth.src = 00:16:3e:60:eb:83; arp.op = 2; /* ARP reply */ arp.tha = arp.sha; arp.sha = 00:16:3e:60:eb:83; arp.tpa = arp.spa; arp.spa = 10.201.72.230; outport = inport; flags.loopback = 1; output;)
  table=23(ls_in_l2_lkup      ), priority=80   , match=(flags[1] == 0 && arp.op == 1 && arp.tpa == { 10.201.72.230}), action=(clone {outport = "lxd-net2-ls-ext-lsp-router"; output; }; outport = "_MC_flood_l2"; output;)
```

Containers:

```bash
QA1 qa1lxcluster02 ~# lxc ls
+------+---------+--------------------+-----------------------------------------------+-----------+-----------+----------------+
| NAME |  STATE  |        IPV4        |                     IPV6                      |   TYPE    | SNAPSHOTS |    LOCATION    |
+------+---------+--------------------+-----------------------------------------------+-----------+-----------+----------------+
| c1   | RUNNING | 10.172.50.2 (eth0) | fd42:dde0:65f5:4aa6:216:3eff:fea0:729f (eth0) | CONTAINER | 0         | qa1lxcluster01 |
+------+---------+--------------------+-----------------------------------------------+-----------+-----------+----------------+
| c2   | RUNNING | 10.172.50.3 (eth0) | fd42:dde0:65f5:4aa6:216:3eff:fe91:b7ae (eth0) | CONTAINER | 0         | qa1lxcluster02 |
+------+---------+--------------------+-----------------------------------------------+-----------+-----------+----------------+
```

Container routes:

```bash
QA1 qa1lxcluster02 ~# lxc exec c1 ip r
default via 10.172.50.1 dev eth0 
10.172.50.0/24 dev eth0 proto kernel scope link src 10.172.50.2 
169.254.0.0/16 dev eth0 scope link metric 1011 
QA1 qa1lxcluster02 ~# lxc exec c2 ip r
default via 10.172.50.1 dev eth0 
10.172.50.0/24 dev eth0 proto kernel scope link src 10.172.50.3 
169.254.0.0/16 dev eth0 scope link metric 1011
```

Hi, I’ve reformatted your post for clarity using 3-backticks for the console output sections, and added the ovn tag to the post.

Please can you show the output of `lxc network show uplink-br1 --target=<cluster member>` for each cluster member.

Please can you also show the output of `lxc network info ovn-uplink-br1`.

Thanks

Can you ping 10.201.72.230 from the uplink network?

As the OVN router connected to the uplink network on eth1 will have its own MAC address, you should check that VMware isn’t performing any MAC filtering and that the port is operating in promiscuous mode, so it can receive frames for MAC addresses other than eth1’s.
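One way to test this theory from another host on the uplink subnet (a sketch; `arping` ships in the iputils or arping package on CentOS 7):

```shell
# From a cluster member other than the active chassis, ask who owns the
# OVN router IP and inspect the neighbour cache. A FAILED/INCOMPLETE
# entry here, while the active chassis answers locally, suggests the
# hypervisor is filtering frames from the router's virtual MAC.
arping -I br1 -c 3 10.201.72.230
ip neigh show 10.201.72.230
```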

```bash
QA1 qa1lxcluster01 ~# lxc network show uplink-br1 --target=qa1lxcluster01
config:
  dns.nameservers: 8.8.8.8
  ipv4.gateway: 10.201.72.4/24
  ipv4.ovn.ranges: 10.201.72.230-10.201.72.240
  parent: br1
  volatile.last_state.created: "false"
description: ""
name: uplink-br1
type: physical
used_by:
- /1.0/networks/ovn-uplink-br1
managed: true
status: Created
locations:
- qa1lxcluster03
- qa1lxcluster04
- qa1lxcluster01
- qa1lxcluster02
```

```bash
QA1 qa1lxcluster01 ~# lxc network show uplink-br1 --target=qa1lxcluster02
config:
  dns.nameservers: 8.8.8.8
  ipv4.gateway: 10.201.72.4/24
  ipv4.ovn.ranges: 10.201.72.230-10.201.72.240
  parent: br1
  volatile.last_state.created: "false"
description: ""
name: uplink-br1
type: physical
used_by:
- /1.0/networks/ovn-uplink-br1
managed: true
status: Created
locations:
- qa1lxcluster01
- qa1lxcluster02
- qa1lxcluster03
- qa1lxcluster04
```

```bash
QA1 qa1lxcluster01 ~# lxc network show uplink-br1 --target=qa1lxcluster03
config:
  dns.nameservers: 8.8.8.8
  ipv4.gateway: 10.201.72.4/24
  ipv4.ovn.ranges: 10.201.72.230-10.201.72.240
  parent: br1
  volatile.last_state.created: "false"
description: ""
name: uplink-br1
type: physical
used_by:
- /1.0/networks/ovn-uplink-br1
managed: true
status: Created
locations:
- qa1lxcluster01
- qa1lxcluster02
- qa1lxcluster03
- qa1lxcluster04
```

```bash
QA1 qa1lxcluster01 ~# lxc network show uplink-br1 --target=qa1lxcluster04
config:
  dns.nameservers: 8.8.8.8
  ipv4.gateway: 10.201.72.4/24
  ipv4.ovn.ranges: 10.201.72.230-10.201.72.240
  parent: br1
  volatile.last_state.created: "false"
description: ""
name: uplink-br1
type: physical
used_by:
- /1.0/networks/ovn-uplink-br1
managed: true
status: Created
locations:
- qa1lxcluster01
- qa1lxcluster02
- qa1lxcluster03
- qa1lxcluster04
```

```bash
QA1 qa1lxcluster01 ~# lxc network info ovn-uplink-br1
Name: ovn-uplink-br1
MAC address: 00:16:3e:39:18:04
MTU: 1442
State: up
Type: broadcast

IP addresses:
  inet  10.153.114.1/24 (link)
  inet6 fd42:d364:6d1a:6038::1/64 (link)

Network usage:
  Bytes received: 0B
  Bytes sent: 0B
  Packets received: 0
  Packets sent: 0

OVN:
  Chassis: qa1lxcluster03-lb
```
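As an aside, the MTU of 1442 reported here is the 1500-byte uplink MTU minus the 58 bytes LXD reserves for the Geneve tunnel headers; a quick sanity check of the arithmetic:

```shell
# OVN network MTU = uplink MTU minus encapsulation overhead.
# 58 bytes is the headroom LXD leaves for the Geneve tunnel; only the
# totals from the post are used here.
uplink_mtu=1500
geneve_overhead=58
echo $((uplink_mtu - geneve_overhead))   # prints 1442
```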

```bash
QA1 qa1lxcluster01 ~# ping 10.201.72.230 -c2
PING 10.201.72.230 (10.201.72.230) 56(84) bytes of data.
From 10.201.72.151 icmp_seq=1 Destination Host Unreachable
From 10.201.72.151 icmp_seq=2 Destination Host Unreachable

--- 10.201.72.230 ping statistics ---
2 packets transmitted, 0 received, +2 errors, 100% packet loss, time 999ms
pipe 2

QA1 qa1lxcluster02 ~# ping 10.201.72.230 -c2
PING 10.201.72.230 (10.201.72.230) 56(84) bytes of data.
From 10.201.72.152 icmp_seq=1 Destination Host Unreachable
From 10.201.72.152 icmp_seq=2 Destination Host Unreachable

--- 10.201.72.230 ping statistics ---
2 packets transmitted, 0 received, +2 errors, 100% packet loss, time 1001ms

QA1 qa1lxcluster03 ~# ping 10.201.72.230 -c2
PING 10.201.72.230 (10.201.72.230) 56(84) bytes of data.
64 bytes from 10.201.72.230: icmp_seq=1 ttl=254 time=0.476 ms
64 bytes from 10.201.72.230: icmp_seq=2 ttl=254 time=0.622 ms

--- 10.201.72.230 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.476/0.549/0.622/0.073 ms

QA1 qa1lxcluster04 ~# ping 10.201.72.230 -c2
PING 10.201.72.230 (10.201.72.230) 56(84) bytes of data.
From 10.201.72.154 icmp_seq=1 Destination Host Unreachable
From 10.201.72.154 icmp_seq=2 Destination Host Unreachable

--- 10.201.72.230 ping statistics ---
2 packets transmitted, 0 received, +2 errors, 100% packet loss, time 999ms
pipe 2
```

I only get a ping reply on the OVN chassis (qa1lxcluster03).

I’ll check that.

VMware interfaces are operating in promiscuous mode.

On VMware, I needed to set Forged transmits to Accept to allow inbound and outbound traffic from the containers.

When the Forged transmits option is set to Accept, ESXi does not compare source and effective MAC addresses.
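For reference, the same policy can be applied from the ESXi shell as well as the vSphere UI (a sketch; `vSwitch0` is an assumed switch name, and a port-group-level security policy can override the switch-level one):

```shell
# Allow forged transmits on the standard vSwitch carrying eth1.
esxcli network vswitch standard policy security set \
    --vswitch-name=vSwitch0 --allow-forged-transmits=true

# Verify the effective security policy:
esxcli network vswitch standard policy security get --vswitch-name=vSwitch0
```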

Thanks for your help, @tomp
