Then I tried to initialize LXD on the slave nodes to add them to the existing cluster.
But when I got prompted with these questions, I didn't know what to enter:
Choose the local disk or dataset for storage pool "local" (empty for loop disk):
Choose the local disk or dataset for storage pool "remote" (empty for loop disk):
Should I enter a device name ("/dev/sda*") there, or should I give a ZFS volume name?
I've tried many times, but every attempt gave me this message:
Error: Failed to update storage pool 'local': node-specific config key source can't be changed
Could anyone explain the concept of the "local" and "remote" pools of an LXD cluster in LXD 3.0?
If you didn't set up "remote" storage on the master node, it's odd that a remote storage pool still got created. Do you still have the console output of the "lxd init" command you ran on the master node? It would show exactly how you answered the various questions.
The YAML output would be fine too, if you opted for printing it. I meant the exact log of the questions and answers (so cut and paste your "lxd init" session from the terminal).
The remote storage only makes sense if you have a remote Ceph storage that you want to use (@stgraber can you confirm this? These questions changed a bit after the cobra merge and I'm not entirely sure either).
If you installed LXD via snap, you can just “snap remove lxd” and “snap install lxd” again to start from scratch.
If you used the deb, "apt purge lxd" and "apt install lxd" should do the same.
If you start fresh, you most probably don't want to create a remote storage pool. You can use ZFS as the "local" pool (there will be one ZFS dataset on each node). After you have built the cluster you can add more storage pools (e.g. LVM if you wish).
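For example, once the cluster is up, adding an LVM pool is a two-step process in a cluster: first define the node-specific part on each member, then create the pool cluster-wide. The node names and device paths below are just placeholders:

```
# Define the node-specific config (the source device) on each member first
lxc storage create data lvm source=/dev/sdb --target node1
lxc storage create data lvm source=/dev/sdb --target node2
lxc storage create data lvm source=/dev/sdb --target node3

# Then run the command once more without --target to actually create the pool
lxc storage create data lvm
```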
Ok, so now when you join a node you should be able to select “/dev/sda6” (or whatever other device the node has) when you get asked about the local storage.
Yes, you should type /dev/sda6 for the local pool question.
Regarding the remote storage pool, it feels like a recently introduced bug, since you answered NO to that lxd init question on the master and it still asks about it on the joining node. @stgraber does it ring any bell?
As I said, my impression is that this is a regression, so perhaps a fix is needed in the source code. Stephane, who knows a bit more about this, is currently on holiday and I'm about to end my day, but we'll follow up as soon as we can.
To work around the problem, you can try to prepare a preseed.yaml file and use “lxd init --preseed < preseed.yaml” instead of “lxd init”, for both the master and the slaves. That will run in non-interactive mode and skip the questions.
There is some documentation about how to do that here:
and you can use the YAML output that you already have as a starting point for the master node (that should work perfectly if you start over and create the master node with it). The YAML for the slave nodes will be similar, but you'll need to add a few more details like the TLS certificate, the master IP and the password (see the document I linked).
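Very roughly, a join preseed for a slave node could look something like this. Note that the names, addresses, password and device path here are placeholders, and the exact field layout may differ between LXD versions, so check the clustering documentation for the format matching yours:

```yaml
# Hypothetical join preseed for a slave node; adjust to your setup
cluster:
  enabled: true
  server_name: node2                # this node's name in the cluster
  cluster_address: 10.0.0.1:8443    # address of the master node
  cluster_password: sekret          # trust password set on the master
  cluster_certificate: |            # the master's TLS certificate
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
storage_pools:
- name: local
  driver: zfs
  config:
    source: /dev/sda6               # this node's local device
```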
Where did you install LXD from, snap or deb? And if you installed it with snap, which channel did you use? (If you don't specify a channel, the default is the stable channel.)
I just tried to reproduce your issue with the latest lxd deb in Ubuntu 18.04, with the snap from the stable channel, and with the snap from the edge channel. In none of those cases did I get the issue of the remote pool being created after answering "no", and joining nodes worked just fine (I was asked only about the local pool when joining).
I tried to match your config; the YAML output from lxd init on the master node was the same in all three cases:
So I retract what I said before: there doesn't seem to be a regression. Perhaps there's something specific to your system (some LVM setup?) that triggers this bug.