How to add a node to a cluster and use an existing zpool

Hi,

I’m trying to add a node to a new cluster, but I’m getting an error relating to the zpool/dataset.

networks:
- config:
    ipv4.address: 10.55.99.1/24
    ipv4.nat: "false"
    ipv6.address: fd42:7c6b:bdb2:6333::1/64
    ipv6.nat: "true"
  description: ""
  managed: true
  name: vlan_99
  type: bridge
storage_pools:
- config:
    source: disksdb3
    volatile.initial_source: disksdb3/proxmox_lxd_prod
    zfs.pool_name: disksdb3/proxmox_lxd_prod
  description: ""
  name: local
  driver: zfs
profiles:
- config: {}
  description: ""
  devices: {}
  name: default

Error: Failed to create storage pool 'local': invalid combination of "source" and "zfs.pool_name" property

Any ideas where I’m going wrong?

I’m guessing it must not like the existing dataset, but I’m not sure why, as the name matches the one on the cluster master.

Cheers,
Jon.

Output of zfs list:

NAME               USED  AVAIL  REFER  MOUNTPOINT
disksdb3           336K  3.51T    96K  /disksdb3
rpool             10.2G   105G   104K  /rpool
rpool/ROOT        1.74G   105G    96K  /rpool/ROOT
rpool/ROOT/pve-1  1.74G   105G  1.54G  /
rpool/data          96K   105G    96K  /rpool/data
rpool/swap        8.50G   114G    56K  -

Interestingly, if I leave the source blank, I get another error complaining about the “/” in the pool name.

networks:
- config:
    ipv4.address: 10.55.99.1/24
    ipv4.nat: "false"
    ipv6.address: fd42:7c6b:bdb2:6333::1/64
    ipv6.nat: "true"
  description: ""
  managed: true
  name: vlan_99
  type: bridge
storage_pools:
- config:
    source: ""
    volatile.initial_source: disksdb3/proxmox_lxd_prod
    zfs.pool_name: disksdb3/proxmox_lxd_prod
  description: ""
  name: local
  driver: zfs
profiles:
- config: {}
  description: ""
  devices: {}
  name: default

Error: Failed to create storage pool 'local': Failed to create the ZFS pool: cannot create 'disksdb3/proxmox_lxd_prod': invalid character '/' in pool name

@freeekanayaka can you look at this?

Hi,

did you come up with the preseed YAML snippet you pasted on your own? Or did you copy it from the output of “lxd init” run in interactive mode, which since 3.0.0 offers to print the preseed YAML matching your interactive answers?

In the latter case, you probably got broken YAML, since I discovered there’s a bug in “lxd init” when it is run interactively to join a cluster and the joining node has an existing zfs dataset. The fix for the bug will be released soon, but you should be able to work around the issue by tweaking your preseed and eliminating the “volatile.initial_source” and “zfs.pool_name” keys from the storage pool config. For example:

storage_pools:
- config:
    source: disksdb3
  description: ""
  name: local
  driver: zfs
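
If you go that route, you can feed the tweaked preseed back into LXD non-interactively. A minimal sketch, assuming you saved the edited YAML as preseed.yaml (a hypothetical file name):

cat preseed.yaml | lxd init --preseed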

Also, I think you should make sure that the dataset is unmounted before running lxd init (I see you have it mounted under /disksdb3).
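
To double-check the mount status and unmount it beforehand, something along these lines should do (a sketch based on the dataset name from your zfs list output):

zfs get -H -o value mounted disksdb3
zfs unmount disksdb3

The first command prints “yes” if the dataset is currently mounted; the second unmounts it.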

Regarding using “/” in pool names, that’s apparently not supported by zfs, e.g.:

root@lxd-4fa8fddc-8dd8-4453-9777-41d607698c6e:/home/ubuntu# zpool create disksdb3/proxmox_lxd_prod /dev/vdc
cannot create 'disksdb3/proxmox_lxd_prod': invalid character '/' in pool name
use 'zfs create' to create a dataset

I’m not sure what you’re trying to do there and why you want slashes in the dataset name.

Hi,

Sorry, the / was in the dataset name.

Are you talking about the zpool name?

I see the dataset as slicing up the zpool, so I was trying to use an existing dataset, which has a slash in it, as the root of the storage for the LXD cluster. This is because I don’t want LXD to hog the whole zpool, so I can use it for backups etc.

e.g.

zpool create disksdb3 /dev/sdb3

zfs create disksdb3/mylxdclusterdataset

so I was giving LXD the dataset above (with the slash) to use.

I am a ZFS newbie, so I might be talking rubbish. I’m a network engineer by trade :slight_smile:

Cheers,
Jon

Re: the YAML, it was the latter; it’s the output of the lxd init interactive process.

cheers,
Jon.

Okay, if I understand correctly what you want to do, here are the steps to create two nodes with a ZFS storage pool linked to a pre-existing disksdb3/mylxdclusterdataset dataset.

Bootstrap node:

root@lxd-fc4943a2-98e9-46cd-8ef8-6b58991a4101:/home/ubuntu# zpool create disksdb3 /dev/vdc
root@lxd-fc4943a2-98e9-46cd-8ef8-6b58991a4101:/home/ubuntu# zfs create disksdb3/mylxdclusterdataset
root@lxd-fc4943a2-98e9-46cd-8ef8-6b58991a4101:/home/ubuntu# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What name should be used to identify this node in the cluster? [default=lxd-fc4943a2-98e9-46cd-8ef8-6b58991a4101]: node1
What IP address or DNS name should be used to reach this node? [default=10.55.60.34]:
Are you joining an existing cluster? (yes/no) [default=no]:
Setup password authentication on the cluster? (yes/no) [default=yes]:
Trust password for new clients:
Again:
Do you want to configure a new local storage pool (yes/no) [default=yes]?
Name of the storage backend to use (btrfs, dir, lvm, zfs) [default=zfs]:
Create a new ZFS pool (yes/no) [default=yes]? no
Name of the existing ZFS pool or dataset: disksdb3/mylxdclusterdataset
Do you want to configure a new remote storage pool (yes/no) [default=no]?
Would you like to connect to a MAAS server (yes/no) [default=no]?
Would you like to create a new network bridge (yes/no) [default=yes]?
What should the new bridge be called [default=lxdbr0]?
What IPv4 address should be used (CIDR subnet notation, auto or none) [default=auto]?
What IPv6 address should be used (CIDR subnet notation, auto or none) [default=auto]?
Would you like stale cached images to be updated automatically (yes/no) [default=yes]?
Would you like a YAML "lxd init" preseed to be printed [default=no]? yes
config:
  core.https_address: 10.55.60.34:8443
  core.trust_password: foo
cluster:
  server_name: node1
  enabled: true
  cluster_address: ""
  cluster_certificate: ""
  cluster_password: ""
networks:
- config:
    ipv4.address: auto
    ipv6.address: auto
  description: ""
  managed: false
  name: lxdbr0
  type: ""
storage_pools:
- config:
    source: disksdb3/mylxdclusterdataset
  description: ""
  name: local
  driver: zfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: lxdbr0
      type: nic
    root:
      path: /
      pool: local
      type: disk
  name: default

root@lxd-fc4943a2-98e9-46cd-8ef8-6b58991a4101:/home/ubuntu#

Joining node:

root@lxd-4fa8fddc-8dd8-4453-9777-41d607698c6e:/home/ubuntu# zpool create disksdb3 /dev/vdc
root@lxd-4fa8fddc-8dd8-4453-9777-41d607698c6e:/home/ubuntu# zfs create disksdb3/mylxdclusterdataset
root@lxd-4fa8fddc-8dd8-4453-9777-41d607698c6e:/home/ubuntu# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What name should be used to identify this node in the cluster? [default=lxd-4fa8fddc-8dd8-4453-9777-41d607698c6e]: node2
What IP address or DNS name should be used to reach this node? [default=10.55.60.66]:
Are you joining an existing cluster? (yes/no) [default=no]: yes
IP address or FQDN of an existing cluster node: 10.55.60.34
Cluster certificate fingerprint: 4668b46fd423d483b462084d1bf2cd49eff4d747cfde3895151cd8c904935292
ok? (yes/no) [default=no]: yes
Cluster trust password:
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose the local disk or dataset for storage pool "local" (empty for loop disk): disksdb3/mylxdclusterdataset
Would you like a YAML "lxd init" preseed to be printed [default=no]? yes
config:
  core.https_address: 10.55.60.66:8443
cluster:
  server_name: node2
  enabled: true
  cluster_address: 10.55.60.34:8443
  cluster_certificate: |
    -----BEGIN CERTIFICATE-----
    MIIFpzCCA4+gAwIBAgIRAJ/fTeAoYtxdGNBn+QVZ3AYwDQYJKoZIhvcNAQELBQAw
    VjEcMBoGA1UEChMTbGludXhjb250YWluZXJzLm9yZzE2MDQGA1UEAwwtcm9vdEBs
    eGQtZmM0OTQzYTItOThlOS00NmNkLThlZjgtNmI1ODk5MWE0MTAxMB4XDTE4MDQw
    NTA4MzEwMFoXDTI4MDQwMjA4MzEwMFowVjEcMBoGA1UEChMTbGludXhjb250YWlu
    ZXJzLm9yZzE2MDQGA1UEAwwtcm9vdEBseGQtZmM0OTQzYTItOThlOS00NmNkLThl
    ZjgtNmI1ODk5MWE0MTAxMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA
    uSq5qbbhAxz5kGBl81A94p4RftN06zGu/kDOLMFATkEhgsj0HgY4OaGD6U5Cyvy8
    elruq0wIN3oqG66UWNCK3vAZodhri5fceMEqZHpoE+K0xk2mHTHXPvqXeiNxLMLx
    3MnK4i0Yd1LKo5nvxfKOxs2eNsZoNBZmsk+9CUJd8MiLp8nsAIqp9hJdNetJJXzI
    MVq3JEKFBWtYBI57Wkg9RJ6xcncrAWcq/puJbo9LMRKj5EPrm0cn2fIB2XzjpDMO
    R/D79ux2bt5VUn+I/+SxcJz/ks+62wgnZSqSSsAMYBXuFBNvBXiiCdCldnCdP6MM
    m5PBwp1PPkyrccCQUZVBeCECJb+KOlLM4Tv16+UQ/wnpXRpIlcv0rNvoxYoK13aO
    ob+R596B5UU+fcW53eqJJFZwe3rBmCaTlkUWnRLDNNrAqtWPW88n+UW+XURC6/S4
    kMtK8ANy66flJS/jvQnp/42DOUrbceVa9zD4vUWq6JKaULJxSI8UWuR76ovM02Z/
    x5OwQVb5UWjkC+Udx3oKw8Bq3RktZs9DIRleqHVTa7cOxDP9Tnj4tznFzYTpa62A
    QTiovqAXj6R+zveRwka/tEb02FSKSsAqxQdcSOXacRmpyB2/puzz+MoawbD464sR
    LtHMhuyGFq64gIkqFRxNxGKg2CmmM0+Sw7DWxADin8cCAwEAAaNwMG4wDgYDVR0P
    AQH/BAQDAgWgMBMGA1UdJQQMMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwOQYD
    VR0RBDIwMIIobHhkLWZjNDk0M2EyLTk4ZTktNDZjZC04ZWY4LTZiNTg5OTFhNDEw
    MYcECjc8IjANBgkqhkiG9w0BAQsFAAOCAgEAVBINEx/ub4EadvzMOSDhjO9p1I1B
    dc/I59mAAA7C6gfbjfAuxQbCx2ziIRvj8rEUiKy1jyFz7zF2MZtZNLIwQXGGwVAp
    AJ8czGnPRAnk+vCExZ+/MbdSZ2kbPdwe7eknV3G+kkpETFCokQFrn01w7KaZ44nM
    UwXjAJO3VlmByNq5QialgMOqvOMK+CUsruTrTjjG47iALmPZricNbRFHoy8uLQ15
    ffOUZSoD5yCjQYqxgph9EdCo4QDbFdbRNhgEfKdj6dkJcEJn8kJDGFWKFS6Hgf4g
    dHAPApIorFdMBD/zxScNV/dw9XEAX46SMre5AT7pavnX0Juc1kikO6br66sKy0lI
    szPNxmaUiK6b8JZi3jsaKg1dcnTyydrrHfC08UQcJoWeoGXxLlVaoaQJwst7WXc2
    KOMDB4TdRKRpRZm8aC+ud1DOvQnNiQdMRTSdRNG0vQzADBnwOjOyvPLXcm3f3K9R
    ZPwLAyxdg3k+yEzXrjnmQFh2GJ0Wz/Uf6qIW70J1D/LHW0ckY7K4RTMa5x6W65dt
    dAOWVXLvTaqMnKfFOB6/xfT/UYpddE4LLeJ86NqQAv4urLA9Mxadd/0WWl9vsKDr
    byvhtgd8xVfBMjJ1sIWb3Oi/KoOJcpcNZRDsZO530hXltSkxX3zWmpamPif9idoM
    74Yu0sDyB8t+NVk=
    -----END CERTIFICATE-----
  cluster_password: foo
networks:
- config:
    ipv4.address: 10.238.99.1/24
    ipv4.nat: "true"
    ipv6.address: fd42:43c0:d098:602f::1/64
    ipv6.nat: "true"
  description: ""
  managed: true
  name: lxdbr0
  type: bridge
storage_pools:
- config:
    source: disksdb3/mylxdclusterdataset
  description: ""
  name: local
  driver: zfs
profiles:
- config: {}
  description: ""
  devices: {}
  name: default

root@lxd-4fa8fddc-8dd8-4453-9777-41d607698c6e:/home/ubuntu#
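
Once both nodes are up, you can confirm the join and check the “local” pool. A quick sketch (run from either node):

lxc cluster list
lxc storage show local

The first should list node1 and node2, and the second shows the pool’s configuration (with --target <node> you can inspect a specific node’s source, which should be the existing disksdb3/mylxdclusterdataset dataset).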

Please note that this assumes you’re using a build of LXD containing this fix:

If you can’t or don’t want to build lxd from master, I recommend waiting for the upcoming 3.0.1 release, which will happen soon (stgraber might have a more concrete date in mind).
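
In the meantime, a quick way to check which LXD build you’re currently on (assuming the snap package):

lxd --version
snap list lxd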