Adding new node to cluster. Failed to create network 'lxdfan0': No address found in subnet

Upon adding a second node to a cluster (of one), I get the following error message:

Error: Failed to join cluster: Failed to initialize member: Failed to initialize storage pools and networks: Failed to create network 'lxdfan0': No address found in subnet.

Here’s my output from the second new host:

sudo lxd init
[sudo] password for user: 
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What name should be used to identify this node in the cluster? [default=lxd02]: 
What IP address or DNS name should be used to reach this node? [default=192.168.10.10]: 
Are you joining an existing cluster? (yes/no) [default=no]: yes
IP address or FQDN of an existing cluster node: 10.10.10.1
Cluster fingerprint: a619772466e89ce41dea99807187b7ded58d1b806c06f6d23ee2c5eb959b7e17
You can validate this fingerprint by running "lxc info" locally on an existing node.
Is this the correct fingerprint? (yes/no) [default=no]: yes
Cluster trust password: 
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "size" property for storage pool "local": 
Choose "source" property for storage pool "local": 
Choose "zfs.pool_name" property for storage pool "local": 
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks: []
storage_pools: []
profiles: []
cluster:
  server_name: lxd02
  enabled: true
  member_config:
  - entity: storage-pool
    name: local
    key: size
    value: ""
    description: '"size" property for storage pool "local"'
  - entity: storage-pool
    name: local
    key: source
    value: ""
    description: '"source" property for storage pool "local"'
  - entity: storage-pool
    name: local
    key: zfs.pool_name
    value: ""
    description: '"zfs.pool_name" property for storage pool "local"'
  cluster_address: 10.10.10.1:8443
  cluster_certificate: |
    -----BEGIN CERTIFICATE-----
    MIIB7zCCAXSgAwIBAgIQPq93+3iloZHV1YAHTqW9ozAKBggqhkjOPQQDAzAzMRww
   <SNIP>
    5P7F18x5TR0/CxNyph6//gIxALlWtFbgG30xSktU4qytMM5tGF/qxlD70JbD+itl
    Uqd5AliNfMggH6qCkjTYXzyyQQ==
    -----END CERTIFICATE-----
  server_address: 192.168.10.10:8443
  cluster_password: passymcpasswordface

Error: Failed to join cluster: Failed to initialize member: Failed to initialize storage pools and networks: Failed to create network 'lxdfan0': No address found in subnet

Should I create lxdfan0 and assign it to the same network as lxdfan0 on the existing primary host? Or something else?

I’m a little puzzled.

Thanks

You don’t have to create the network yourself. LXD has tried to do it for you, but has failed to do so. @stgraber or @tomp, any idea what the “No address found in subnet” error is due to? Perhaps some host-specific configuration?

Thanks. If it helps, I am using version 3.18 from snap (sorry, I should have mentioned that).

It is looking for the first address on the node that is within the subnet of the FAN network.

@sarlacpit could you provide the existing FAN configuration you have on the other nodes, and the network configuration on the new nodes.

Perhaps the new node is unable to configure the FAN IP address.

Thanks for the reply. Please see below (and please excuse the formatting). The two servers are on different networks, but the routing is there and they can see each other on port tcp/8443.

LXD01
lxc network show lxdfan0

config:
  bridge.mode: fan
  fan.underlay_subnet: 10.10.10.0/24
description: ""
name: lxdfan0
type: bridge
used_by: []
managed: true
status: Created
locations:
- lxd01

lxc network list
+---------+----------+---------+-------------+---------+---------+
|  NAME   |   TYPE   | MANAGED | DESCRIPTION | USED BY |  STATE  |
+---------+----------+---------+-------------+---------+---------+
| eno1    | physical | NO      |             | 7       |         |
+---------+----------+---------+-------------+---------+---------+
| eno2    | physical | NO      |             | 0       |         |
+---------+----------+---------+-------------+---------+---------+
| lxdfan0 | bridge   | YES     |             | 0       | CREATED |
+---------+----------+---------+-------------+---------+---------+

LXD02 (the new node to be added to the cluster).

network:
    ethernets:
        eno1:
            addresses:
            - 192.168.10.10/24
            gateway4: 192.168.10.254
            nameservers:
                addresses:
                - 192.168.10.5

lxc network list
+-----------+----------+---------+-------------+---------+
|   NAME    |   TYPE   | MANAGED | DESCRIPTION | USED BY |
+-----------+----------+---------+-------------+---------+
| eno1      | physical | NO      |             | 0       |
+-----------+----------+---------+-------------+---------+
| eno2      | physical | NO      |             | 0       |
+-----------+----------+---------+-------------+---------+
| eno3      | physical | NO      |             | 0       |
+-----------+----------+---------+-------------+---------+
| eno4      | physical | NO      |             | 0       |
+-----------+----------+---------+-------------+---------+
| enp4s0f0  | physical | NO      |             | 0       |
+-----------+----------+---------+-------------+---------+
| enp4s0f1  | physical | NO      |             | 0       |
+-----------+----------+---------+-------------+---------+
| enp65s0f0 | physical | NO      |             | 0       |
+-----------+----------+---------+-------------+---------+
| enp65s0f1 | physical | NO      |             | 0       |
+-----------+----------+---------+-------------+---------+
| enp67s0f0 | physical | NO      |             | 0       |
+-----------+----------+---------+-------------+---------+
| enp67s0f1 | physical | NO      |             | 0       |
+-----------+----------+---------+-------------+---------+

LXD01 has the following fan-related interfaces…

lxdfan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 240.50.0.1  netmask 255.0.0.0  broadcast 0.0.0.0
        inet6 fe80::d006:71ff:fe56:f241  prefixlen 64  scopeid 0x20<link>
        ether 32:10:e9:02:7d:64  txqueuelen 1000  (Ethernet)
        RX packets 194  bytes 7744 (7.7 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 369  bytes 25966 (25.9 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lxdfan0-fan: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet6 fe80::3010:e9ff:fe02:7d64  prefixlen 64  scopeid 0x20<link>
        ether 32:10:e9:02:7d:64  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 885 overruns 0  carrier 0  collisions 0

lxdfan0-mtu: flags=195<UP,BROADCAST,RUNNING,NOARP>  mtu 1450
        inet6 fe80::80af:c9ff:fe19:2dd5  prefixlen 64  scopeid 0x20<link>
        ether 82:af:c9:19:2d:d5  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 882  bytes 58660 (58.6 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lxdfan0 appears to be on a different network than the one shown by my earlier lxc network show lxdfan0, but all of this was set up automatically when I ran lxd init.

Here is the process I followed…

1. On LXD01, removed the distro's default LXD and installed it using snap.
2. Ran lxd init.
3. Migrated some VMs from ESXi using lxd-p2c.
4. Set up a profile to use macvlans and brought the containers live.
5. After a stable period of a week or two, built a new server (LXD02) in a different location on a different subnet, though the two can see each other.
6. Ran sudo lxd init - this fails at the very end of the wizard with the error above.

Would it have made a difference if, on the first node, I had run “lxd init” without sudo?

That would indicate that LXD02 doesn’t have an address in the fan underlay subnet that’s used by your cluster (10.10.10.0/24).
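
The failing check can be illustrated with a short Python sketch (not LXD's actual code, just the same idea; LXD02's address is taken from its netplan config above):

```python
import ipaddress

# Fan underlay subnet configured on the cluster's lxdfan0 network
underlay = ipaddress.ip_network("10.10.10.0/24")

# Addresses configured on LXD02 (from its netplan config)
lxd02_addrs = ["192.168.10.10"]

# The joining node needs at least one local address inside the underlay
match = next(
    (a for a in lxd02_addrs if ipaddress.ip_address(a) in underlay),
    None,
)
print(match)  # None -> "No address found in subnet"
```

Since 192.168.10.10 is outside 10.10.10.0/24, the lookup comes up empty and the join fails with exactly this error.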


Thanks for the reply,

Sorry, I think I am missing the point.
Does that mean that, in order to cluster, the nodes need to be in the same subnet?
Is the fan subnet supposed to mirror the network interface of the node?
LXD02 is failing to create the fan network, so it won’t be in the fan subnet.

In order for a cluster to work when it has fan bridges, the fan underlay subnet must be the same on all hosts.

Your configured underlay currently requires all your cluster nodes to be within 10.10.10.0/24.
You could have set the underlay subnet to 10.10.0.0/16, for example, which would allow fewer containers per host but may cover all your servers.


Thanks.

Can I make this change live in production by doing “lxc network edit lxdfan0”, or is that discouraged?
Do i need to do anything else once done to ensure the config is applied?

Finally, I’m a little confused… why would /16 give fewer containers per host?

Thanks again

Changing it live should work, but it will change the subnet of any running container, so you’ll need to restart them all.

The fan works by using a /8 overlay subnet, embedding the host-unique part of the underlay address into the overlay address and making the remaining bits available to each host for its containers. Increasing the size of the underlay, in your case from /24 to /16, means 8 more bits of the address are taken up by the host part and are no longer available for local containers, so each host goes from a /16 down to a /24.
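
That arithmetic can be sketched as follows (a simplified model, assuming the default 240.0.0.0/8 overlay; the exact mapping LXD/FAN uses may differ in detail):

```python
import ipaddress

def fan_subnet(host_ip, underlay, overlay="240.0.0.0/8"):
    """Per-host fan subnet: the host-unique bits of the underlay
    address are appended to the overlay prefix; whatever is left
    over is available for that host's containers."""
    host = ipaddress.ip_address(host_ip)
    under = ipaddress.ip_network(underlay)
    over = ipaddress.ip_network(overlay)
    # Bits that identify this host within the underlay subnet
    host_bits = int(host) & ((1 << (32 - under.prefixlen)) - 1)
    # Those bits extend the overlay prefix
    new_prefix = over.prefixlen + (32 - under.prefixlen)
    base = int(over.network_address) | (host_bits << (32 - new_prefix))
    return ipaddress.ip_network((base, new_prefix))

# /24 underlay: 8 host bits move into the overlay, leaving a /16 per host
print(fan_subnet("10.10.10.50", "10.10.10.0/24"))  # 240.50.0.0/16
# /16 underlay: 16 host bits move in, leaving only a /24 per host
print(fan_subnet("10.10.10.50", "10.10.0.0/16"))   # 240.10.50.0/24
```

The host IP 10.10.10.50 here is made up for illustration, but it shows the trade-off: widening the underlay from /24 to /16 shrinks each host's container range from a /16 to a /24.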


That’s fine - that could work for me.
Thanks for your valuable time.