LXC cluster with existing LVM thinpool

Hi there,

I’ve been using LXC for quite a while on an ODROID-XU4… Now I want to set up an LXC cluster on two ODROIDs, but one of them has an existing LVM thinpool that I want to reuse, because it contains some logical volumes I don’t want to delete.

I created the storage pool with these commands:

root@odroid-xu4:~# lxc storage create --target odroid-hc1 lvm lvm source=local
Storage pool lvm pending on member odroid-hc1
root@odroid-xu4:~# lxc storage create --target odroid-xu4 lvm lvm source=local
Storage pool lvm pending on member odroid-xu4
root@odroid-xu4:~# lxc storage create lvm lvm
Error: volume group "local" is not empty

But using the lvm.thinpool_name parameter is not possible in a cluster:

root@odroid-xu4:~# lxc storage create --target odroid-hc1 lvm lvm source=local lvm.thinpool_name=LXDThinpool
Error: Config key 'lvm.thinpool_name' may not be used as node-specific key

Is there a way to create a storage pool on a cluster with existing logical volumes?

Kind regards
Andreas

@brauner @freeekanayaka

Now I tried a different approach.

Only the HC1 has existing logical volumes, so I decided to bootstrap again…

“lxd init” on odroid-hc1 went fine and it is now the first member of the cluster; after that I could create the storage pool…

After that I bootstrapped the XU4 and tried to join the existing cluster, but got the following error message:

root@odroid-xu4:/usr/src# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What name should be used to identify this node in the cluster? [default=odroid-xu4]:
What IP address or DNS name should be used to reach this node? [default=10.166.0.3]:
Are you joining an existing cluster? (yes/no) [default=no]: yes
IP address or FQDN of an existing cluster node: odroid-hc1
Cluster fingerprint: *a_very_cool_fingerprint*
You can validate this fingerprint by running "lxc info" locally on an existing node.
Is this the correct fingerprint? (yes/no) [default=no]: yes
Cluster trust password:
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] ********
Invalid input, try again.

All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose the local disk or dataset for storage pool "lvm" (empty for loop disk):
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
config:
  core.https_address: 10.166.0.3:8443
cluster:
  server_name: odroid-xu4
  enabled: true
  cluster_address: odroid-hc1:8443
  cluster_certificate: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  cluster_password: ********
networks: []
storage_pools:
- config:
    lvm.thinpool_name: LXDThinpool
    lvm.vg_name: local
    source: ""
  description: ""
  name: lvm
  driver: lvm
profiles:
- config: {}
  description: ""
  devices: {}
  name: default

Error: Failed to update storage pool 'lvm': node-specific config key source can't be changed

kind regards
Andreas

I don’t know anything about node-specific keys, so @freeekanayaka might be better able to help here.

Most config keys for storage pools are required to be the same for all nodes.

However, we have some node-specific keys: certain config options need to be associated with a particular node and be specific to it, for example “source” and “zfs.pool_name”.
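To illustrate the pattern (a generic sketch with made-up node and pool names, not taken from this thread): node-specific keys are supplied per target while the pool is in the "pending" state, and the finalizing create carries only the keys shared by all members:

```shell
# Hypothetical nodes "node1"/"node2"; "source" is node-specific,
# so each cluster member gets its own value while the pool is pending:
lxc storage create --target node1 mypool zfs source=tank1
lxc storage create --target node2 mypool zfs source=tank2
# The finalizing create may only carry keys shared by all members:
lxc storage create mypool zfs
```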

From what I understood talking with @brauner, “lvm.thinpool_name” and “lvm.vg_name” should both be node-specific keys, so the fact that they are not looks like a bug/oversight. @stgraber, should I push a PR to make those keys node-specific? I believe things will just work after that, but I don’t have enough knowledge of LVM storage pools to tell for sure (this would need testing).

@freeekanayaka yes, please send a PR

Submitted https://github.com/lxc/lxd/pull/4823
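Once that change lands and makes “lvm.thinpool_name”/“lvm.vg_name” node-specific, the original per-target commands should work as intended (an untested sketch under that assumption):

```shell
# Assumes the PR above is merged, so lvm.thinpool_name is node-specific.
# HC1 reuses its existing thinpool; XU4 gets the default one:
lxc storage create --target odroid-hc1 lvm lvm source=local lvm.thinpool_name=LXDThinpool
lxc storage create --target odroid-xu4 lvm lvm source=local
# Finalize the pool once every member has its node-specific config:
lxc storage create lvm lvm
```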

Thanks for the quick fix…

Appreciate your work

Kind regards
Andreas