LXD cluster: how does fan network addressing work?

networking
cluster

(Mirto Busico) #1

Hi all,
I’m very confused about addressing in an LXD cluster.

If I do an lxd init and accept the fan autoconfiguration, I see:

sysop@kvmnode1:~$ lxc network list
+---------+----------+---------+-------------+---------+---------+
|  NAME   |   TYPE   | MANAGED | DESCRIPTION | USED BY |  STATE  |
+---------+----------+---------+-------------+---------+---------+
| ens3    | physical | NO      |             | 0       |         |
+---------+----------+---------+-------------+---------+---------+
| ens4    | physical | NO      |             | 0       |         |
+---------+----------+---------+-------------+---------+---------+
| ens5    | physical | NO      |             | 0       |         |
+---------+----------+---------+-------------+---------+---------+
| lxdfan0 | bridge   | YES     |             | 0       | CREATED |
+---------+----------+---------+-------------+---------+---------+
sysop@kvmnode1:~$ lxc network show lxdfan0
config:
  bridge.mode: fan
  fan.underlay_subnet: 192.168.202.0/24
description: ""
name: lxdfan0
type: bridge
used_by: []
managed: true
status: Created
locations:
- kvmnode1
sysop@kvmnode1:~$

To me this seems to indicate that I can have a maximum of 256 addresses in the class C network 192.168.202.0/24.

But looking at the IPs assigned to the machine, I see that the fan bridge has a class A address assignment of 240.11.0.1/8:

26: lxdfan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether 56:4c:0b:e7:64:9b brd ff:ff:ff:ff:ff:ff
    inet 240.11.0.1/8 scope global lxdfan0
       valid_lft forever preferred_lft forever
    inet6 fe80::544c:bff:fee7:649b/64 scope link
       valid_lft forever preferred_lft forever
27: lxdfan0-mtu: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1450 qdisc noqueue master lxdfan0 state UNKNOWN group default qlen 1000
    link/ether ea:46:d0:c5:5d:ad brd ff:ff:ff:ff:ff:ff
    inet6 fe80::e846:d0ff:fec5:5dad/64 scope link
       valid_lft forever preferred_lft forever
28: lxdfan0-fan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master lxdfan0 state UNKNOWN group default qlen 1000
    link/ether 56:4c:0b:e7:64:9b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::544c:bff:fee7:649b/64 scope link
       valid_lft forever preferred_lft forever
sysop@kvmnode1:~$
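From what I’ve read, the fan seems to splice the host bits of the underlay address in right after the overlay prefix, which would explain the .11 in the bridge address. Here is my understanding of the mapping as a small Python sketch (assuming kvmnode1’s underlay address is 192.168.202.11, which the 240.11.0.1 bridge address suggests, and the default 240.0.0.0/8 overlay); please correct me if this is wrong:

```python
import ipaddress

def fan_host_subnet(underlay_ip, underlay_subnet, overlay):
    """Compute the per-host overlay slice a fan would give this host.

    The host bits of the underlay address (the part not covered by the
    underlay prefix) are spliced in just after the overlay prefix; the
    remaining low bits address containers on that host.
    """
    under = ipaddress.ip_network(underlay_subnet)
    over = ipaddress.ip_network(overlay)
    host_bits = 32 - under.prefixlen                 # bits identifying the host
    host_part = int(ipaddress.ip_address(underlay_ip)) & ((1 << host_bits) - 1)
    new_prefix = over.prefixlen + host_bits          # prefix of the per-host slice
    base = int(over.network_address) | (host_part << (32 - new_prefix))
    return ipaddress.ip_network((base, new_prefix))

# Assumed host 192.168.202.11 on the /24 underlay, default 240.0.0.0/8 overlay:
subnet = fan_host_subnet("192.168.202.11", "192.168.202.0/24", "240.0.0.0/8")
print(subnet)                     # 240.11.0.0/16
print(subnet.num_addresses - 2)   # 65534 usable container addresses
```

If that is right, the underlay /24 limits the number of cluster nodes, not the number of containers.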

With this setup, my questions are:

  • how many LXC containers can I define on a single LXD cluster node?
  • how many LXC containers can I define in the three-node cluster?

(Yosu Cadilla) #2

FAN networking in general is one thing; the implementation of FAN networking in LXD is a subset of it, tailored to standard LXD uses.

On LXD you should get a /24 network available for internal use on each of your nodes, with a maximum of 254 nodes on the underlay. So a server with 10 IP addresses could potentially get 10 x 255 x 255 FAN IPs.

The theoretical maximum for a fan, I think, is 65K IPs for each IP on the underlay with the current implementation, but in principle it could use a /8 overlay with 16M+ IPs.

Oh! If this is not enough, I guess you could start creating more FANs out of each IP on a FAN, so there would be no limit, really…
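For what it’s worth, the per-host figures fall out of the bit arithmetic: with a /8 overlay, the underlay’s host bits are spliced in after the overlay prefix, and whatever bits remain address containers. A rough sketch of that arithmetic (my own back-of-the-envelope, not LXD’s code):

```python
# Rough IPv4 fan capacity arithmetic: the underlay's host bits go after
# the overlay prefix, and the leftover low bits address containers.
def containers_per_host(overlay_prefix, underlay_prefix):
    host_bits = 32 - underlay_prefix                  # bits identifying each host
    container_bits = 32 - overlay_prefix - host_bits  # bits left for containers
    return 2 ** container_bits

print(containers_per_host(8, 24))  # 65536 -> the ~65K figure, for a /24 underlay
print(containers_per_host(8, 16))  # 256   -> a /24 slice per host, for a /16 underlay
```

(So with the 192.168.202.0/24 underlay shown above, each host’s slice works out to a /16, which matches the 65K figure; the /24-per-host case corresponds to a /16 underlay.)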