@stgraber certainly.
So first thing to check is whether LXD 4.3 allows a fresh LVM based cluster to be spun up. For this I’m using LXD’s own VM support to run a cluster of 3 Ubuntu Focal machines, with each node having access to a 5GB block device from the host, which will be the basis for the LVM storage pool.
First, let's create the sparse block files on the LXD host:
truncate -s 5G /home/user/cluster-v1.img
truncate -s 5G /home/user/cluster-v2.img
truncate -s 5G /home/user/cluster-v3.img
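These are sparse files, so they only consume host space as data is written to them. If you want to sanity check that, something like this should show a 5G apparent size but almost no allocated blocks:
ls -lhs /home/user/cluster-v*.img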
Now let's create the VMs for our cluster and add the extra disk to each of them:
lxc init images:ubuntu/focal cluster-v1 --vm
lxc init images:ubuntu/focal cluster-v2 --vm
lxc init images:ubuntu/focal cluster-v3 --vm
lxc config device add cluster-v1 lvm disk source=/home/user/cluster-v1.img
lxc config device add cluster-v2 lvm disk source=/home/user/cluster-v2.img
lxc config device add cluster-v3 lvm disk source=/home/user/cluster-v3.img
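If you want to confirm the disk was attached to each VM, lxc config device show will list it, e.g.:
lxc config device show cluster-v1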
Now let's start the VMs:
lxc start cluster-v1 cluster-v2 cluster-v3
Wait for them to boot:
lxc ls
+------------+---------+------------------------+------------------------------------------------+-----------------+-----------+
|    NAME    |  STATE  |          IPV4          |                      IPV6                      |      TYPE       | SNAPSHOTS |
+------------+---------+------------------------+------------------------------------------------+-----------------+-----------+
| cluster-v1 | RUNNING | 10.109.89.59 (enp5s0)  | fd42:d37c:f0f2:a5f:216:3eff:fe4e:652b (enp5s0) | VIRTUAL-MACHINE | 0         |
+------------+---------+------------------------+------------------------------------------------+-----------------+-----------+
| cluster-v2 | RUNNING | 10.109.89.234 (enp5s0) | fd42:d37c:f0f2:a5f:216:3eff:fe64:7eff (enp5s0) | VIRTUAL-MACHINE | 0         |
+------------+---------+------------------------+------------------------------------------------+-----------------+-----------+
| cluster-v3 | RUNNING | 10.109.89.119 (enp5s0) | fd42:d37c:f0f2:a5f:216:3eff:fe95:9664 (enp5s0) | VIRTUAL-MACHINE | 0         |
+------------+---------+------------------------+------------------------------------------------+-----------------+-----------+
Now let's install LXD 4.3 on each node:
lxc exec cluster-v1 -- apt install snapd lvm2 -y
lxc exec cluster-v1 -- snap install lxd
lxc exec cluster-v2 -- apt install snapd lvm2 -y
lxc exec cluster-v2 -- snap install lxd
lxc exec cluster-v3 -- apt install snapd lvm2 -y
lxc exec cluster-v3 -- snap install lxd
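Depending on how quickly the snap seeds, it can help to wait for the daemon to be ready before running lxd init on each node, e.g. (assuming /snap/bin is on the VM's PATH):
lxc exec cluster-v1 -- lxd waitready --timeout=60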
Let's create the initial cluster node on cluster-v1, specifying the /dev/sdb device as the source of the new LVM storage pool:
lxc shell cluster-v1
lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What name should be used to identify this node in the cluster? [default=cluster-v1]:
What IP address or DNS name should be used to reach this node? [default=10.109.89.59]:
Are you joining an existing cluster? (yes/no) [default=no]:
Setup password authentication on the cluster? (yes/no) [default=yes]:
Trust password for new clients:
Again:
Do you want to configure a new local storage pool? (yes/no) [default=yes]:
Name of the storage backend to use (btrfs, dir, lvm, zfs) [default=zfs]: lvm
Create a new LVM pool? (yes/no) [default=yes]:
Would you like to use an existing empty disk or partition? (yes/no) [default=no]: yes
Path to the existing block device: /dev/sdb
Do you want to configure a new remote storage pool? (yes/no) [default=no]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]:
Would you like to create a new Fan overlay network? (yes/no) [default=yes]:
What subnet should be used as the Fan underlay? [default=auto]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
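(As an aside, the same bootstrap can be scripted by feeding lxd init a preseed instead of answering interactively. The sketch below roughly matches the answers above; the trust password is a placeholder and the exact network/profile entries may differ slightly from what the final question above would print on your system:)
cat <<EOF | lxd init --preseed
config:
  core.https_address: 10.109.89.59:8443
  core.trust_password: some-password
networks:
- name: lxdfan0
  type: bridge
  config:
    bridge.mode: fan
storage_pools:
- name: local
  driver: lvm
  config:
    source: /dev/sdb
profiles:
- name: default
  devices:
    eth0:
      name: eth0
      network: lxdfan0
      type: nic
    root:
      path: /
      pool: local
      type: disk
cluster:
  server_name: cluster-v1
  enabled: true
EOF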
Let's check that the storage pool has been created:
lvs
  LV          VG    Attr        LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  LXDThinPool local twi-a-tz-- <3.00g              0.00  1.57
lxc storage show local
config:
  lvm.thinpool_name: LXDThinPool
description: ""
name: local
driver: lvm
used_by:
- /1.0/profiles/default
status: Created
locations:
- cluster-v1
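We can also look at the underlying volume group directly; vgs and pvs should show a "local" VG sitting on /dev/sdb:
vgs
pvs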
OK, now on cluster-v2 and cluster-v3 we join them to cluster-v1, specifying the local device for the LVM pool as /dev/sdb:
lxc shell cluster-v2
lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What name should be used to identify this node in the cluster? [default=cluster-v2]:
What IP address or DNS name should be used to reach this node? [default=10.109.89.234]:
Are you joining an existing cluster? (yes/no) [default=no]: yes
IP address or FQDN of an existing cluster node: 10.109.89.59
Cluster fingerprint: 38d0d144013f895413372fcce550a1d4aa99a7d518274a91e1a9eae0b205216e
You can validate this fingerprint by running "lxc info" locally on an existing node.
Is this the correct fingerprint? (yes/no) [default=no]: yes
Cluster trust password:
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "lvm.vg_name" property for storage pool "local":
Choose "source" property for storage pool "local": /dev/sdb
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
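(The join can likewise be scripted with a preseed fed to lxd init --preseed; this is just a sketch, with the certificate and password as placeholders, where member_config carries the per-node source for the LVM pool:)
cluster:
  enabled: true
  server_name: cluster-v2
  server_address: 10.109.89.234:8443
  cluster_address: 10.109.89.59:8443
  cluster_certificate: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  cluster_password: some-password
  member_config:
  - entity: storage-pool
    name: local
    key: source
    value: /dev/sdb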
Again, let's check each node's LVM config:
lvs
  LV          VG    Attr        LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  LXDThinPool local twi-a-tz-- <3.00g              0.00  1.57
lxc storage show local --target=cluster-v2
config:
  lvm.thinpool_name: LXDThinPool
  lvm.vg_name: local
  source: local
  volatile.initial_source: /dev/sdb
description: ""
name: local
driver: lvm
used_by:
- /1.0/profiles/default
status: Created
locations:
- cluster-v1
- cluster-v2
- cluster-v3
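At this point lxc cluster list should also report all three members as online:
lxc cluster list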
And to make sure it's working, let's launch a container:
lxc shell cluster-v1
lxc launch images:alpine/3.12 c1
lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
LXDThinPool local twi-aotz-- <3.00g 7.45 1.59
containers_c1 local Vwi-aotz-k <9.32g LXDThinPool images_641c9cb5e352408e2bfb3005f7f830dabe86e8d8b6abbad308fdcfb4cf8242f8 2.38
images_641c9cb5e352408e2bfb3005f7f830dabe86e8d8b6abbad308fdcfb4cf8242f8 local Vwi---tz-k <9.32g LXDThinPool
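To exercise the other members as well, the same launch can be pointed at a specific node with --target, e.g.:
lxc launch images:alpine/3.12 c2 --target cluster-v2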
So it appears that the basic LVM cluster functionality is working OK. We now need to figure out what is different in your configuration.