Help: Access to container port from any node in the cluster

Hi everyone,

I’m a newbie with LXC/LXD and I’ve created a 3-node LXD cluster with 2 containers on each node:
| NAME | STATE | IPV4 | LOCATION |
| --- | --- | --- | --- |
| default01 | RUNNING | 10.73.220.123 (eth0) | lxclusternode01 |
| default02 | RUNNING | 10.73.220.147 (eth0) | lxclusternode02 |
| default03 | RUNNING | 10.73.220.202 (eth0) | lxclusternode03 |
| default04 | RUNNING | 10.73.220.141 (eth0) | lxclusternode01 |
| default05 | RUNNING | 10.73.220.21 (eth0) | lxclusternode02 |
| default06 | RUNNING | 10.73.220.128 (eth0) | lxclusternode03 |

I’m able to connect to any container from every node:

  • Access to a container provisioned on the local node:
    lxclusternode01 ~# lxc exec default01 bash
    [root@default01 ~]# exit
  • Access to a container provisioned on another node:
    lxclusternode01 ~# lxc exec default02 bash
    [root@default02 ~]# exit

but when I try to access the nginx service, e.g. using curl, it fails.

  • Access to nginx on a container on the local node (SUCCESS):
    lxclusternode01 ~# curl -so /dev/null -L http://10.73.220.123/index.php -w '%{http_code}\n'
    200
  • Access to nginx on a container on another node (FAILED!):
    lxclusternode01 ~# curl -o /dev/null -L http://10.73.220.147/index.php -w '%{http_code}\n'
    —snip—
    curl: (7) Failed connect to 10.73.220.147:80; No route to host

All containers belong to the same network:
lxclusternode02 ~# lxc network show base
config:
  ipv4.address: 10.73.220.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:14c6:247d:f67c::1/64
  ipv6.nat: "true"
description: ""
name: base
type: bridge
used_by:
- /1.0/instances/default01
- /1.0/instances/default02
- /1.0/instances/default03
- /1.0/instances/default04
- /1.0/instances/default05
- /1.0/instances/default06
managed: true
status: Created
locations:
- lxclusternode02
- lxclusternode01
- lxclusternode03

Am I missing some configuration needed to connect to nginx from any node in the cluster?
I’d appreciate any tip or documentation that could help solve this.

Thanks!

Networking is my weakest point, but did you use a Fan / OVN network? If not, I think you have to set static routes on your router if it isn’t bridged to an externally routable network (i.e. does your router know about this network?).

@tomp I think you’re the person to ask here, you seem to have all the network answers :smiley: .

I didn’t use a Fan / OVN network; I just created the bridged network and assigned it to all my containers through a profile.

If I add a static route on every node, using the node where the container has been provisioned as the gateway, it works.
To refine my question: is there a way for the cluster to manage that routing?
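For reference, this is a sketch of the kind of per-node route I mean (addresses are from my setup: 10.201.72.152 is lxclusternode02, and the /32 destinations are the containers hosted there):

```shell
# On lxclusternode01: route each container hosted on lxclusternode02
# via that node's address. This has to be repeated on every node,
# for every remote container, which is why I'd like the cluster to
# handle it instead.
ip route add 10.73.220.147/32 via 10.201.72.152   # default02
ip route add 10.73.220.21/32  via 10.201.72.152   # default05
```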

I think you need a Fan network, but I could be very wrong! Hopefully someone better than me will come along and offer some real advice :smile:

When using the lxc command, requests for instances on other cluster members are transparently redirected to the correct cluster member, and thus do not depend on the networking setup of the instances themselves. This is why lxc exec works for you.

However, for instance networking, what you have done so far is instruct LXD to create a private managed bridge (one that SNATs outbound connections to the cluster member’s IP on external networks) on each cluster member, using the same IP address and subnet.

But these are discrete private bridges on each cluster member; they are not interconnected with the other cluster members’ private bridges.

This is why you can only reach instances on the same cluster member currently.

To remove that limitation you either need to:

  1. Stop using LXD’s managed private bridges and instead connect the instances directly to a shared network that all of the cluster members are connected to.
  2. Use an overlay network like Fan or OVN to create a shared network between the cluster members that the instances can connect to.
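For option 2, the OVN route looks roughly like this on a cluster (a sketch, not the full steps: UPLINK and my-ovn are placeholder names, and it assumes a host bridge br1 already exists on each member for the uplink):

```shell
# Sketch: declare the uplink's per-member parent interface first
# (these create a "pending" network on each member)...
lxc network create UPLINK --type=physical parent=br1 --target=lxclusternode01
lxc network create UPLINK --type=physical parent=br1 --target=lxclusternode02
lxc network create UPLINK --type=physical parent=br1 --target=lxclusternode03

# ...then instantiate it cluster-wide with the global config
# (gateway and OVN ranges here are example values).
lxc network create UPLINK --type=physical \
    ipv4.gateway=10.201.72.4/24 \
    ipv4.ovn.ranges=10.201.72.230-10.201.72.240

# Finally create the OVN network on top of the uplink; instances then
# attach to my-ovn instead of the per-member private bridges.
lxc network create my-ovn --type=ovn network=UPLINK
```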

Please can you show the output of ip a and ip r on each cluster member?

Thank you very much for your advice, @tomp! I’ll reconfigure the cluster network to fan or ovn as you suggested and test again.

Here is the output of ip a and ip r on each cluster member:

lxclusternode01

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:91:c0:11 brd ff:ff:ff:ff:ff:ff
    inet 10.201.70.151/24 brd 10.201.70.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe91:c011/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:91:f0:65 brd ff:ff:ff:ff:ff:ff
    inet 10.201.72.151/24 brd 10.201.72.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe91:f065/64 scope link
       valid_lft forever preferred_lft forever
4: base: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:f6:d7:dd brd ff:ff:ff:ff:ff:ff
    inet 10.73.220.1/24 scope global base
       valid_lft forever preferred_lft forever
    inet6 fd42:14c6:247d:f67c::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fef6:d7dd/64 scope link
       valid_lft forever preferred_lft forever
9: vethebe37a27@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master base state UP group default qlen 1000
    link/ether 86:76:fa:cf:d8:0a brd ff:ff:ff:ff:ff:ff link-netnsid 1
17: vethaafeaf19@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master base state UP group default qlen 1000
    link/ether 7e:1a:55:c6:6a:ba brd ff:ff:ff:ff:ff:ff link-netnsid 0
default via 10.201.72.4 dev eth1
10.73.220.0/24 dev base proto kernel scope link src 10.73.220.1
10.201.70.0/24 dev eth0 proto kernel scope link src 10.201.70.151
10.201.72.0/24 dev eth1 proto kernel scope link src 10.201.72.151
10.201.112.0/24 via 10.201.70.1 dev eth0
169.254.0.0/16 dev eth0 scope link metric 1002
169.254.0.0/16 dev eth1 scope link metric 1003

lxclusternode02

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:91:41:90 brd ff:ff:ff:ff:ff:ff
    inet 10.201.70.152/24 brd 10.201.70.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe91:4190/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:91:4a:cd brd ff:ff:ff:ff:ff:ff
    inet 10.201.72.152/24 brd 10.201.72.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe91:4acd/64 scope link
       valid_lft forever preferred_lft forever
5: base: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:f6:d7:dd brd ff:ff:ff:ff:ff:ff
    inet 10.73.220.1/24 scope global base
       valid_lft forever preferred_lft forever
    inet6 fd42:14c6:247d:f67c::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fef6:d7dd/64 scope link
       valid_lft forever preferred_lft forever
7: vethc75d0b06@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master base state UP group default qlen 1000
    link/ether 16:30:f7:43:7e:99 brd ff:ff:ff:ff:ff:ff link-netnsid 0
9: veth39653f9d@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master base state UP group default qlen 1000
    link/ether 3a:43:fc:4f:a1:fb brd ff:ff:ff:ff:ff:ff link-netnsid 1
default via 10.201.72.4 dev eth1
10.73.220.0/24 dev base proto kernel scope link src 10.73.220.1
10.201.70.0/24 dev eth0 proto kernel scope link src 10.201.70.152
10.201.72.0/24 dev eth1 proto kernel scope link src 10.201.72.152
10.201.112.0/24 via 10.201.70.1 dev eth0
169.254.0.0/16 dev eth0 scope link metric 1002
169.254.0.0/16 dev eth1 scope link metric 1003

lxclusternode03

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:91:f6:7e brd ff:ff:ff:ff:ff:ff
    inet 10.201.70.153/24 brd 10.201.70.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe91:f67e/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:91:f1:2c brd ff:ff:ff:ff:ff:ff
    inet 10.201.72.153/24 brd 10.201.72.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe91:f12c/64 scope link
       valid_lft forever preferred_lft forever
31: base: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:f6:d7:dd brd ff:ff:ff:ff:ff:ff
    inet 10.73.220.1/24 scope global base
       valid_lft forever preferred_lft forever
    inet6 fd42:14c6:247d:f67c::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fef6:d7dd/64 scope link
       valid_lft forever preferred_lft forever
59: veth04db8cb7@if58: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master base state UP group default qlen 1000
    link/ether 8a:a8:0c:d8:92:98 brd ff:ff:ff:ff:ff:ff link-netnsid 1
61: veth050a5240@if60: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master base state UP group default qlen 1000
    link/ether f2:2d:95:49:1e:2d brd ff:ff:ff:ff:ff:ff link-netnsid 0
default via 10.201.72.4 dev eth1
10.73.220.0/24 dev base proto kernel scope link src 10.73.220.1
10.201.70.0/24 dev eth0 proto kernel scope link src 10.201.70.153
10.201.72.0/24 dev eth1 proto kernel scope link src 10.201.72.153
10.201.84.0/22 via 10.201.72.4 dev eth1
10.201.112.0/24 via 10.201.70.1 dev eth0
169.254.0.0/16 dev eth0 scope link metric 1002
169.254.0.0/16 dev eth1 scope link metric 1003

Are 10.201.70.0/24 and 10.201.72.0/24 shared physical networks between the servers?

You could set up a manual bridge on top of eth0 and/or eth1, and then connect your instances directly to the external network rather than using private bridges.
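If you go that way, the profile’s NIC can be pointed at the manual bridge, roughly like this (a sketch: br1 is a placeholder for whatever you name the bridge, and it assumes your instances get eth0 from the default profile):

```shell
# Sketch: swap the managed-bridge NIC in the default profile for a
# plain bridged NIC attached to a manually created host bridge.
lxc profile device remove default eth0
lxc profile device add default eth0 nic nictype=bridged parent=br1 name=eth0
```

The instances would then pick up addresses from whatever serves DHCP on the external network, rather than from LXD.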

Yes, they are.

I don’t know how to do that; I don’t have a solid theoretical background in networking. I’ll search the LXC documentation for help, and I’d appreciate it if you could point me in the right direction, @tomp.

This is very similar to a question from the other day:

Also see Netplan | Backend-agnostic network configuration in YAML

Thanks again for the support, @tomp. I’ll read those documents and begin to work with that configuration.


I’ve created a bridge interface br1 on each cluster node, and I’ve also checked the configuration of the UPLINK network:

lxclusternode01 ~# lxc network get UPLINK ipv4.ovn.ranges
10.201.72.230-10.201.72.240
lxclusternode01 ~# lxc network get UPLINK ipv4.gateway
10.201.72.4/24
lxclusternode01 ~# lxc network get UPLINK dns.nameservers
10.201.112.254
The ovn-controller service is running on ALL nodes, and ovsdb-server and ovs-vswitchd are running on an odd number of nodes.
With all that configured, from a container I’m able to ping the local IPv4 address, but I can’t connect to the internet:

[root@c01 ~]# ping 10.132.118.2
PING 10.132.118.2 (10.132.118.2) 56(84) bytes of data.
64 bytes from 10.132.118.2: icmp_seq=1 ttl=64 time=0.032 ms
^C
--- 10.132.118.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms
[root@c01 ~]# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
^C
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms

But from the cluster node I get:
lxclusternode01 ~# ping -c1 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
From 10.201.72.9 icmp_seq=1 Destination Net Unreachable

--- 8.8.8.8 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

Any clue what I need to check? I’ve watched the video (OVN and a LXD cluster) twice to double-check whether I configured something wrong. I’m using CentOS instead of Ubuntu, but I don’t think that’s the problem (please correct me if I’m wrong).

Any suggestion will be deeply appreciated; I think @tomp is the one who could help me most.
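In the meantime, these are the generic OVN/OVS health checks I know to run (standard OVN/OVS tooling; my-ovn is a placeholder for the actual OVN network name):

```shell
# Sketch: basic status checks, run as root on a cluster member.
ovs-vsctl show       # is the integration bridge (br-int) present and wired up?
ovn-nbctl show       # logical switches/routers that LXD created
ovn-sbctl show       # are all three chassis (nodes) registered?
lxc network show my-ovn   # LXD's view of the OVN network config
```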

This is the bridge configuration I’ve created over each physical interface on every cluster node:

lxclusternode01 network-scripts# more ifcfg-*1
::::::::::::::
ifcfg-br1
::::::::::::::
ONBOOT=yes
NM_CONTROLLED=no
TYPE=Bridge
BOOTPROTO=static
DEVICE=br1
IPADDR=10.201.72.151
NETMASK=255.255.255.0
GATEWAY=10.201.72.4

::::::::::::::
ifcfg-eth1
::::::::::::::
DEVICE=eth1
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
BRIDGE=br1

On br1 I’ve created the OVN configuration and OVN on LXD. From each cluster node I have connectivity to the internet, but I don’t from the containers. I don’t know what else to check in my configuration.
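One thing I still plan to verify (a sketch; ovn0 is a placeholder for my OVN network’s name, and I’m not sure what the default is on my version) is whether the OVN network SNATs outbound traffic to the uplink:

```shell
# Sketch: check whether the OVN network masquerades outbound traffic.
lxc network get ovn0 ipv4.nat

# If it's unset or "false", traffic would leave with the OVN subnet as
# source address, which the physical router may not know how to return:
lxc network set ovn0 ipv4.nat true
```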