Help: Access to container port from any node in the cluster

Hi everyone,

I’m a newbie with LXC/LXD and I’ve created a 3-node LXD cluster with 2 containers on each node:
| NAME | STATE | IPV4 | LOCATION |
| --- | --- | --- | --- |
| default01 | RUNNING | 10.73.220.123 (eth0) | lxclusternode01 |
| default02 | RUNNING | 10.73.220.147 (eth0) | lxclusternode02 |
| default03 | RUNNING | 10.73.220.202 (eth0) | lxclusternode03 |
| default04 | RUNNING | 10.73.220.141 (eth0) | lxclusternode01 |
| default05 | RUNNING | 10.73.220.21 (eth0) | lxclusternode02 |
| default06 | RUNNING | 10.73.220.128 (eth0) | lxclusternode03 |

I’m able to connect to any container from every node:

  • access to a container provisioned on the local node:
    lxclusternode01 ~# lxc exec default01 bash
    [root@default01 ~]# exit
  • access to a container provisioned on another node:
    lxclusternode01 ~# lxc exec default02 bash
    [root@default02 ~]# exit

but if I try to access the nginx service, e.g. using curl, it fails.

  • Access to nginx on a container on the local node (SUCCESS):
    lxclusternode01 ~# curl -so /dev/null -L http://10.73.220.123/index.php -w '%{http_code}\n'
    200
  • Access to nginx on a container on another node (FAILED!):
    lxclusternode01 ~# curl -o /dev/null -L http://10.73.220.147/index.php -w '%{http_code}\n'
    --snip--
    curl: (7) Failed connect to 10.73.220.147:80; No route to host

All containers belong to the same network:
lxclusternode02 ~# lxc network show base
config:
  ipv4.address: 10.73.220.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:14c6:247d:f67c::1/64
  ipv6.nat: "true"
description: ""
name: base
type: bridge
used_by:
- /1.0/instances/default01
- /1.0/instances/default02
- /1.0/instances/default03
- /1.0/instances/default04
- /1.0/instances/default05
- /1.0/instances/default06
managed: true
status: Created
locations:
- lxclusternode02
- lxclusternode01
- lxclusternode03

Am I missing some configuration needed to connect to nginx from any node in the cluster?
I’d appreciate any tips or documentation to help solve this.

Thanks!

Networking is my weakest point, but did you use a fan/OVN network? If not, I think you have to set static routes on your router if you’re not bridged to an externally routable network (i.e. does your router know about this network?).

I think @tomp is your person here; they seem to have all the network answers :smiley:.

I didn’t use a fan/OVN network; I just created the bridged network and assigned it to all my containers through a profile.

If I add a static route on every node, using the node where the container was provisioned as the gateway, it works.
Refining my question: is there a way for the cluster to manage that routing itself?
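
For illustration, that manual workaround looks something like this on lxclusternode01 (a sketch; the container-to-node mapping comes from the table above, and the host IPs come from the ip r output later in the thread):

    ip route add 10.73.220.147/32 via 10.201.72.152   # default02 runs on lxclusternode02
    ip route add 10.73.220.202/32 via 10.201.72.153   # default03 runs on lxclusternode03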

I think you need a Fan network, but I could be very wrong! Hopefully someone better than me will be along to offer some real advice :smile:

When using the lxc command, requests for instances on other cluster members are transparently redirected to the correct cluster member, and thus do not depend on the networking setup of the instances themselves. This is why lxc exec works for you.

However, for instance networking, what you have done so far is instruct LXD to create a private managed bridge (one that SNATs outbound connections to the cluster member’s IP on external networks) on each cluster member, using the same IP address and subnet.

But these are discrete private bridges on each cluster member; they are not inter-connected with the other cluster members’ private bridges.

This is why you can only reach instances on the same cluster member currently.

To remove that limitation you either need to:

  1. Stop using LXD’s managed private bridges and instead connect the instances directly to a shared network that all of the cluster members are connected to.
  2. Use an overlay network like fan or OVN to create a shared network between the cluster members that the instances can connect to (see the sketch below).
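
For example, option 2 with a fan overlay can be created with a single command (a sketch; the network name lxdfan0 is illustrative, the underlay subnet is assumed to be the cluster members’ shared network, and note that fan bridging requires an Ubuntu kernel):

    lxc network create lxdfan0 bridge.mode=fan fan.underlay_subnet=10.201.72.0/24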

Please can you show the output of ip a and ip r on each cluster member?

Thank you very much for your advice, @tomp! I’ll reconfigure the cluster network to fan or ovn as you suggested and test again.

Here is the output of ip a and ip r on each cluster member:

lxclusternode01

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:91:c0:11 brd ff:ff:ff:ff:ff:ff
    inet 10.201.70.151/24 brd 10.201.70.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe91:c011/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:91:f0:65 brd ff:ff:ff:ff:ff:ff
    inet 10.201.72.151/24 brd 10.201.72.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe91:f065/64 scope link
       valid_lft forever preferred_lft forever
4: base: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:f6:d7:dd brd ff:ff:ff:ff:ff:ff
    inet 10.73.220.1/24 scope global base
       valid_lft forever preferred_lft forever
    inet6 fd42:14c6:247d:f67c::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fef6:d7dd/64 scope link
       valid_lft forever preferred_lft forever
9: vethebe37a27@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master base state UP group default qlen 1000
    link/ether 86:76:fa:cf:d8:0a brd ff:ff:ff:ff:ff:ff link-netnsid 1
17: vethaafeaf19@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master base state UP group default qlen 1000
    link/ether 7e:1a:55:c6:6a:ba brd ff:ff:ff:ff:ff:ff link-netnsid 0
default via 10.201.72.4 dev eth1
10.73.220.0/24 dev base proto kernel scope link src 10.73.220.1
10.201.70.0/24 dev eth0 proto kernel scope link src 10.201.70.151
10.201.72.0/24 dev eth1 proto kernel scope link src 10.201.72.151
10.201.112.0/24 via 10.201.70.1 dev eth0
169.254.0.0/16 dev eth0 scope link metric 1002
169.254.0.0/16 dev eth1 scope link metric 1003

lxclusternode02

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:91:41:90 brd ff:ff:ff:ff:ff:ff
    inet 10.201.70.152/24 brd 10.201.70.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe91:4190/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:91:4a:cd brd ff:ff:ff:ff:ff:ff
    inet 10.201.72.152/24 brd 10.201.72.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe91:4acd/64 scope link
       valid_lft forever preferred_lft forever
5: base: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:f6:d7:dd brd ff:ff:ff:ff:ff:ff
    inet 10.73.220.1/24 scope global base
       valid_lft forever preferred_lft forever
    inet6 fd42:14c6:247d:f67c::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fef6:d7dd/64 scope link
       valid_lft forever preferred_lft forever
7: vethc75d0b06@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master base state UP group default qlen 1000
    link/ether 16:30:f7:43:7e:99 brd ff:ff:ff:ff:ff:ff link-netnsid 0
9: veth39653f9d@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master base state UP group default qlen 1000
    link/ether 3a:43:fc:4f:a1:fb brd ff:ff:ff:ff:ff:ff link-netnsid 1
default via 10.201.72.4 dev eth1
10.73.220.0/24 dev base proto kernel scope link src 10.73.220.1
10.201.70.0/24 dev eth0 proto kernel scope link src 10.201.70.152
10.201.72.0/24 dev eth1 proto kernel scope link src 10.201.72.152
10.201.112.0/24 via 10.201.70.1 dev eth0
169.254.0.0/16 dev eth0 scope link metric 1002
169.254.0.0/16 dev eth1 scope link metric 1003

lxclusternode03

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:91:f6:7e brd ff:ff:ff:ff:ff:ff
    inet 10.201.70.153/24 brd 10.201.70.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe91:f67e/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:91:f1:2c brd ff:ff:ff:ff:ff:ff
    inet 10.201.72.153/24 brd 10.201.72.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe91:f12c/64 scope link
       valid_lft forever preferred_lft forever
31: base: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:f6:d7:dd brd ff:ff:ff:ff:ff:ff
    inet 10.73.220.1/24 scope global base
       valid_lft forever preferred_lft forever
    inet6 fd42:14c6:247d:f67c::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fef6:d7dd/64 scope link
       valid_lft forever preferred_lft forever
59: veth04db8cb7@if58: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master base state UP group default qlen 1000
    link/ether 8a:a8:0c:d8:92:98 brd ff:ff:ff:ff:ff:ff link-netnsid 1
61: veth050a5240@if60: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master base state UP group default qlen 1000
    link/ether f2:2d:95:49:1e:2d brd ff:ff:ff:ff:ff:ff link-netnsid 0
default via 10.201.72.4 dev eth1
10.73.220.0/24 dev base proto kernel scope link src 10.73.220.1
10.201.70.0/24 dev eth0 proto kernel scope link src 10.201.70.153
10.201.72.0/24 dev eth1 proto kernel scope link src 10.201.72.153
10.201.84.0/22 via 10.201.72.4 dev eth1
10.201.112.0/24 via 10.201.70.1 dev eth0
169.254.0.0/16 dev eth0 scope link metric 1002
169.254.0.0/16 dev eth1 scope link metric 1003

Are 10.201.70.0/24 and 10.201.72.0/24 shared physical networks between the servers?

You could set up a manual bridge on top of eth0 and/or eth1, and then connect your instances directly to the external network, rather than using private bridges.

Yes, they are.

I don’t know how to do that; I don’t have a solid theoretical background in networking. I’ll search the LXC documentation for help, and I’d appreciate it if you could point me in the right direction, @tomp.

This is very similar to a question from the other day:

Also see netplan/examples at main · canonical/netplan · GitHub
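
The bridge examples there boil down to something like the following netplan config (a sketch only; the interface name and addresses are illustrative, and netplan applies to Ubuntu rather than CentOS):

    network:
      version: 2
      ethernets:
        eth1:
          dhcp4: false
      bridges:
        br1:
          interfaces: [eth1]
          addresses: [10.201.72.151/24]
          routes:
            - to: default
              via: 10.201.72.4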

Thanks again for the support, @tomp. I’ll read those documents and begin to work with that configuration.

I’ve created a bridge interface br1 on each cluster node, and I’ve also checked the configuration of the UPLINK interface:

lxclusternode01 ~# lxc network get UPLINK ipv4.ovn.ranges
10.201.72.230-10.201.72.240
lxclusternode01 ~# lxc network get UPLINK ipv4.gateway
10.201.72.4/24
lxclusternode01 ~# lxc network get UPLINK dns.nameservers
10.201.112.254
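
For reference, the OVN network itself that sits on top of this uplink is created with something like the following (a sketch; ovn-bridge-net is the name that appears later in the thread):

    lxc network create ovn-bridge-net --type=ovn network=UPLINK
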
The ovn-controller service is running on ALL nodes, and ovsdb-server and ovs-vswitchd are running on an odd number of nodes.
With all this configured, I’m able, from a container, to ping the local IPv4 address, but I can’t connect to the internet:

[root@c01 ~]# ping 10.132.118.2
PING 10.132.118.2 (10.132.118.2) 56(84) bytes of data.
64 bytes from 10.132.118.2: icmp_seq=1 ttl=64 time=0.032 ms
^C
--- 10.132.118.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms
[root@c01 ~]# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
^C
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms

But from the cluster node I get:
lxclusternode01 ~# ping -c1 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
From 10.201.72.9 icmp_seq=1 Destination Net Unreachable

--- 8.8.8.8 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

Any clue as to what I need to check? I’ve watched the video (OVN and a LXD cluster) twice to double-check whether I configured something wrong. I’m using CentOS instead of Ubuntu, but I don’t think that’s the problem (please correct me if I’m wrong).

Any suggestion will be deeply appreciated; I think @tomp is the one who could help me most.

This is the bridge configuration I’ve created over the physical interface on every cluster node:

lxclusternode01 network-scripts# more ifcfg-*1
::::::::::::::
ifcfg-br1
::::::::::::::
ONBOOT=yes
NM_CONTROLLED=no
TYPE=Bridge
BOOTPROTO=static
DEVICE=br1
IPADDR=10.201.72.151
NETMASK=255.255.255.0
GATEWAY=10.201.72.4

::::::::::::::
ifcfg-eth1
::::::::::::::
DEVICE=eth1
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
BRIDGE=br1
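
After writing these files, the legacy network service needs a restart for the bridge to come up (assuming CentOS with the network-scripts service, as the NM_CONTROLLED=no lines suggest):

    systemctl restart network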

On br1 I’ve created the OVN configuration and the OVN network on LXD. From each cluster node I have connectivity to the internet, but I don’t from the containers. I don’t know what else to check in my configuration.

So to confirm: is br1 on each LXD server connected to the same physical L2 network?

Please show ip a and ip r on each LXD server, as well as the output of lxc network show <ovn network>?

Here’s all the data: ip a, ip r and lxc network show ovn-bridge-net:

lxclusternode01:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:91:0a:76 brd ff:ff:ff:ff:ff:ff
    inet 10.201.70.151/24 brd 10.201.70.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe91:a76/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br1 state UP group default qlen 1000
    link/ether 00:50:56:91:05:d5 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::250:56ff:fe91:5d5/64 scope link
       valid_lft forever preferred_lft forever
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether ba:ed:51:a9:ef:5e brd ff:ff:ff:ff:ff:ff
6: br-int: <BROADCAST,MULTICAST> mtu 1442 qdisc noop state DOWN group default qlen 1000
    link/ether 66:fb:a7:a0:80:e4 brd ff:ff:ff:ff:ff:ff
31: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:50:56:91:05:d5 brd ff:ff:ff:ff:ff:ff
    inet 10.201.72.151/24 brd 10.201.72.255 scope global br1
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe91:5d5/64 scope link
       valid_lft forever preferred_lft forever
32: genev_sys_6081: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
    link/ether 52:63:92:62:73:12 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5063:92ff:fe62:7312/64 scope link
       valid_lft forever preferred_lft forever
33: lxdovn1b@lxdovn1a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
    link/ether 66:62:37:26:a3:f5 brd ff:ff:ff:ff:ff:ff
34: lxdovn1a@lxdovn1b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br1 state UP group default qlen 1000
    link/ether 9e:05:8a:4f:ca:aa brd ff:ff:ff:ff:ff:ff
35: lxdovn1: <BROADCAST,MULTICAST> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 02:c4:a0:2d:fa:48 brd ff:ff:ff:ff:ff:ff
39: veth6ea19323@if38: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc noqueue master ovs-system state UP group default qlen 1000
    link/ether 82:cf:c8:e5:75:e3 brd ff:ff:ff:ff:ff:ff link-netnsid 0
default via 10.201.72.4 dev br1
10.30.16.0/24 via 10.201.70.1 dev eth0
10.40.16.0/24 via 10.201.70.1 dev eth0
10.49.24.0/24 via 10.201.70.1 dev eth0
10.196.43.178 via 10.201.70.1 dev eth0
10.201.16.0/24 via 10.201.70.1 dev eth0
10.201.70.0/24 dev eth0 proto kernel scope link src 10.201.70.151
10.201.72.0/24 dev br1 proto kernel scope link src 10.201.72.151
10.201.112.0/24 via 10.201.70.1 dev eth0
169.254.0.0/16 dev eth0 scope link metric 1002
169.254.0.0/16 dev br1 scope link metric 1031
config:
  bridge.mtu: "1442"
  ipv4.address: 10.195.44.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:df87:7c05:e876::1/64
  ipv6.nat: "true"
  network: UPLINK
  volatile.network.ipv4.address: 10.201.72.230
description: ""
name: ovn-bridge-net
type: ovn
used_by:
- /1.0/instances/c02
- /1.0/instances/c10
managed: true
status: Created
locations:
- lxclusternode01
- lxclusternode02
- lxclusternode03

lxclusternode02

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:91:67:4c brd ff:ff:ff:ff:ff:ff
    inet 10.201.70.152/24 brd 10.201.70.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe91:674c/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br1 state UP group default qlen 1000
    link/ether 00:50:56:91:3c:ef brd ff:ff:ff:ff:ff:ff
    inet6 fe80::250:56ff:fe91:3cef/64 scope link
       valid_lft forever preferred_lft forever
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 92:e0:b0:61:bb:84 brd ff:ff:ff:ff:ff:ff
6: br-int: <BROADCAST,MULTICAST> mtu 1442 qdisc noop state DOWN group default qlen 1000
    link/ether de:ba:11:fd:f8:80 brd ff:ff:ff:ff:ff:ff
29: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:50:56:91:3c:ef brd ff:ff:ff:ff:ff:ff
    inet 10.201.72.152/24 brd 10.201.72.255 scope global br1
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe91:3cef/64 scope link
       valid_lft forever preferred_lft forever
30: genev_sys_6081: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
    link/ether 86:ce:e3:e3:fc:6e brd ff:ff:ff:ff:ff:ff
    inet6 fe80::84ce:e3ff:fee3:fc6e/64 scope link
       valid_lft forever preferred_lft forever
31: lxdovn1b@lxdovn1a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
    link/ether 56:86:20:ac:63:45 brd ff:ff:ff:ff:ff:ff
32: lxdovn1a@lxdovn1b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br1 state UP group default qlen 1000
    link/ether a6:f0:a6:68:88:a1 brd ff:ff:ff:ff:ff:ff
33: lxdovn1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 6e:34:23:d8:fc:43 brd ff:ff:ff:ff:ff:ff
37: vethd37c7c56@if36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc noqueue master ovs-system state UP group default qlen 1000
    link/ether aa:97:6a:13:dd:14 brd ff:ff:ff:ff:ff:ff link-netnsid 0
default via 10.201.72.4 dev br1
10.30.16.0/24 via 10.201.70.1 dev eth0
10.40.16.0/24 via 10.201.70.1 dev eth0
10.49.24.0/24 via 10.201.70.1 dev eth0
10.196.43.178 via 10.201.70.1 dev eth0
10.201.16.0/24 via 10.201.70.1 dev eth0
10.201.70.0/24 dev eth0 proto kernel scope link src 10.201.70.152
10.201.72.0/24 dev br1 proto kernel scope link src 10.201.72.152
10.201.112.0/24 via 10.201.70.1 dev eth0
169.254.0.0/16 dev eth0 scope link metric 1002
169.254.0.0/16 dev br1 scope link metric 1029
config:
  bridge.mtu: "1442"
  ipv4.address: 10.195.44.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:df87:7c05:e876::1/64
  ipv6.nat: "true"
  network: UPLINK
  volatile.network.ipv4.address: 10.201.72.230
description: ""
name: ovn-bridge-net
type: ovn
used_by:
- /1.0/instances/c02
- /1.0/instances/c10
managed: true
status: Created
locations:
- lxclusternode01
- lxclusternode02
- lxclusternode03

lxclusternode03

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:91:b0:7d brd ff:ff:ff:ff:ff:ff
    inet 10.201.70.153/24 brd 10.201.70.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe91:b07d/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br1 state UP group default qlen 1000
    link/ether 00:50:56:91:be:50 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::250:56ff:fe91:be50/64 scope link
       valid_lft forever preferred_lft forever
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether a6:4b:f9:0c:8f:2e brd ff:ff:ff:ff:ff:ff
6: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether be:8b:b3:a8:dc:cc brd ff:ff:ff:ff:ff:ff
29: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:50:56:91:be:50 brd ff:ff:ff:ff:ff:ff
    inet 10.201.72.153/24 brd 10.201.72.255 scope global br1
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe91:be50/64 scope link
       valid_lft forever preferred_lft forever
30: genev_sys_6081: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
    link/ether b6:05:ec:5b:e5:32 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::b405:ecff:fe5b:e532/64 scope link
       valid_lft forever preferred_lft forever
31: lxdovn1b@lxdovn1a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
    link/ether ca:ac:2c:aa:4e:48 brd ff:ff:ff:ff:ff:ff
32: lxdovn1a@lxdovn1b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br1 state UP group default qlen 1000
    link/ether d6:0b:95:d9:aa:06 brd ff:ff:ff:ff:ff:ff
33: lxdovn1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 9e:57:c4:fe:c6:4d brd ff:ff:ff:ff:ff:ff
default via 10.201.72.4 dev br1
10.30.16.0/24 via 10.201.70.1 dev eth0
10.40.16.0/24 via 10.201.70.1 dev eth0
10.49.24.0/24 via 10.201.70.1 dev eth0
10.196.43.178 via 10.201.70.1 dev eth0
10.201.16.0/24 via 10.201.70.1 dev eth0
10.201.70.0/24 dev eth0 proto kernel scope link src 10.201.70.153
10.201.72.0/24 dev br1 proto kernel scope link src 10.201.72.153
10.201.112.0/24 via 10.201.70.1 dev eth0
169.254.0.0/16 dev eth0 scope link metric 1002
169.254.0.0/16 dev br1 scope link metric 1029
config:
  bridge.mtu: "1442"
  ipv4.address: 10.195.44.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:df87:7c05:e876::1/64
  ipv6.nat: "true"
  network: UPLINK
  volatile.network.ipv4.address: 10.201.72.230
description: ""
name: ovn-bridge-net
type: ovn
used_by:
- /1.0/instances/c02
- /1.0/instances/c10
managed: true
status: Created
locations:
- lxclusternode01
- lxclusternode02
- lxclusternode03

These are my LXC networks and containers:

lxclusternode01 ~# lxc network ls
+----------------+----------+---------+----------------+---------------------------+-------------+---------+---------+
|      NAME      |   TYPE   | MANAGED |      IPV4      |           IPV6            | DESCRIPTION | USED BY |  STATE  |
+----------------+----------+---------+----------------+---------------------------+-------------+---------+---------+
| UPLINK         | physical | YES     |                |                           |             | 1       | CREATED |
+----------------+----------+---------+----------------+---------------------------+-------------+---------+---------+
| br1            | bridge   | NO      |                |                           |             | 1       |         |
+----------------+----------+---------+----------------+---------------------------+-------------+---------+---------+
| br-int         | bridge   | NO      |                |                           |             | 0       |         |
+----------------+----------+---------+----------------+---------------------------+-------------+---------+---------+
| eth0           | physical | NO      |                |                           |             | 0       |         |
+----------------+----------+---------+----------------+---------------------------+-------------+---------+---------+
| eth1           | physical | NO      |                |                           |             | 0       |         |
+----------------+----------+---------+----------------+---------------------------+-------------+---------+---------+
| lxdovn1        | bridge   | NO      |                |                           |             | 0       |         |
+----------------+----------+---------+----------------+---------------------------+-------------+---------+---------+
| ovn-bridge-net | ovn      | YES     | 10.195.44.1/24 | fd42:df87:7c05:e876::1/64 |             | 2       | CREATED |
+----------------+----------+---------+----------------+---------------------------+-------------+---------+---------+
lxclusternode01 ~# lxc ls -c nS4tL
+------+-----------+--------------------+-----------+-----------------+
| NAME | SNAPSHOTS |        IPV4        |   TYPE    |    LOCATION     |
+------+-----------+--------------------+-----------+-----------------+
| c02  | 0         | 10.195.44.3 (eth0) | CONTAINER | lxclusternode02 |
+------+-----------+--------------------+-----------+-----------------+
| c10  | 0         | 10.195.44.2 (eth0) | CONTAINER | lxclusternode01 |
+------+-----------+--------------------+-----------+-----------------+

Please can you also show me lxc network show UPLINK?

This is the UPLINK lxc network configuration:

lxclusternode01 ~# lxc network show UPLINK
config:
  dns.nameservers: 8.8.8.8
  ipv4.gateway: 10.201.72.4/24
  ipv4.ovn.ranges: 10.201.72.230-10.201.72.240
  volatile.last_state.created: "false"
description: ""
name: UPLINK
type: physical
used_by:
- /1.0/networks/ovn-bridge-net
managed: true
status: Created
locations:
- lxclusternode02
- lxclusternode03
- lxclusternode01
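
For completeness, a physical uplink network like this is defined once per cluster member and then finalized with the global config, along the lines of the following sketch (based on the LXD clustering docs, not the poster’s exact commands):

    lxc network create UPLINK --type=physical parent=br1 --target=lxclusternode01
    lxc network create UPLINK --type=physical parent=br1 --target=lxclusternode02
    lxc network create UPLINK --type=physical parent=br1 --target=lxclusternode03
    lxc network create UPLINK --type=physical ipv4.gateway=10.201.72.4/24 ipv4.ovn.ranges=10.201.72.230-10.201.72.240 dns.nameservers=8.8.8.8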