You should be able to move them by first stopping them and then using lxc move c1 c1b --target vm01.
This will move the container from wherever it is to vm01 and rename it to c1b. The rename step is currently required for non-ceph containers, but we hope to address this soon.
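For example, assuming the container is called c1 and the target member is vm01 (both names are just placeholders here), the sequence would look roughly like:

lxc stop c1
lxc move c1 c1b --target vm01
lxc start c1b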
I tried to reproduce your setup (as far as I understood it):
- created 3 Ubuntu 18.04 (daily build) VMs
- installed the lxd snap on all of them (3.0.0 from the stable channel [0])
- ran lxd init on the first instance, using all defaults except for storage [1]
- ran lxd init on the second and third instances, making them join the first [2]
- lxc cluster list, lxc list and lxc storage show local --target were all fine (see the sketch right after this list)
- rebooted all machines, one at a time, and all of them were functional after reboot
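For reference, the verification step above was roughly the following, run on the bootstrap node (node1 is the name chosen during lxd init in [1]; adjust to your own member names):

lxc cluster list
lxc list
lxc storage show local --target node1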
We have been fixing a few bugs in the stable-3.0 branch since the initial release, and those fixes are automatically pushed to the snap store. Perhaps you somehow got bitten by one of those bugs? (Although I admit I can’t find any hard correlation between the symptoms you described and those fixes.)
If that’s feasible for you, I’d recommend starting fresh, taking note of exactly what you did and when you did it, and coming back to us if you hit any issue.
[0]
snap ID J60k4JY0HppjwOjW8dZdYc8obXKxujRu, which I believe matches the latest commit in our stable-3.0 branch (rev 24c3713d7). @stgraber?
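If you want to compare what you have installed, something like this should show the snap ID and revision on each machine (the output layout may differ a bit between snapd versions):

snap list lxd
snap info lxd | grep -E 'snap-id|tracking|installed'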
[1]
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What name should be used to identify this node in the cluster? [default=lxd-fc4943a2-98e9-46cd-8ef8-6b58991a4101]: node1
What IP address or DNS name should be used to reach this node? [default=10.55.60.34]:
Are you joining an existing cluster? (yes/no) [default=no]:
Setup password authentication on the cluster? (yes/no) [default=yes]:
Trust password for new clients:
Again:
Do you want to configure a new local storage pool? (yes/no) [default=yes]:
Name of the storage backend to use (btrfs, dir, lvm, zfs) [default=zfs]:
Create a new ZFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]: yes
Path to the existing block device: /dev/vdc
Do you want to configure a new remote storage pool? (yes/no) [default=no]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
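For completeness, the non-interactive equivalent of the answers above would be roughly the preseed below, fed to lxd init --preseed. This is a sketch based on the documented preseed format: the trust password and the bridge subnet are placeholders (the interactive “auto” answer picks a subnet for you), and the exact keys may vary slightly between LXD versions.

cat <<EOF | lxd init --preseed
config:
  core.https_address: 10.55.60.34:8443
  core.trust_password: some-password        # placeholder
storage_pools:
- name: local
  driver: zfs
  config:
    source: /dev/vdc
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: 10.100.100.1/24           # placeholder subnet
    ipv6.address: none
profiles:
- name: default
  devices:
    root:
      path: /
      pool: local
      type: disk
    eth0:
      name: eth0
      nictype: bridged
      parent: lxdbr0
      type: nic
cluster:
  server_name: node1
  enabled: true
EOF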
[2]
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What name should be used to identify this node in the cluster? [default=lxd]: node3
What IP address or DNS name should be used to reach this node? [default=10.55.60.242]:
Are you joining an existing cluster? (yes/no) [default=no]: yes
IP address or FQDN of an existing cluster node: 10.55.60.34
Cluster fingerprint: 32da832b3def5d2264067bf1abfd87f6b419cea7810ea2010e1d524007e6d040
You can validate this fingerprint by running "lxc info" locally on an existing node.
Is this the correct fingerprint? (yes/no) [default=no]: yes
Cluster trust password:
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose the local disk or dataset for storage pool "local" (empty for loop disk): /dev/vdc
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
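To cross-check the fingerprint prompt shown above, running something like the following on the bootstrap node should print the server certificate fingerprint (the grep is just a convenience; the field sits in the environment section of the output):

lxc info | grep certificate_fingerprint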
I tried with an existing bridge as well. Bootstrap node:
root@lxd-4fa8fddc-8dd8-4453-9777-41d607698c6e:~# ip link add br0 type bridge
root@lxd-4fa8fddc-8dd8-4453-9777-41d607698c6e:~# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What name should be used to identify this node in the cluster? [default=lxd-4fa8fddc-8dd8-4453-9777-41d607698c6e]: node1
What IP address or DNS name should be used to reach this node? [default=10.55.60.66]:
Are you joining an existing cluster? (yes/no) [default=no]:
Setup password authentication on the cluster? (yes/no) [default=yes]:
Trust password for new clients:
Again:
Do you want to configure a new local storage pool? (yes/no) [default=yes]:
Name of the storage backend to use (btrfs, dir, lvm, zfs) [default=zfs]:
Create a new ZFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]: yes
Path to the existing block device: /dev/vdc
Do you want to configure a new remote storage pool? (yes/no) [default=no]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new network bridge? (yes/no) [default=yes]: no
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes
Name of the existing bridge or host interface: br0
Is this interface connected to your MAAS server? (yes/no) [default=yes]: no
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
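One note on the br0 created above: ip link add leaves the bridge down and without an uplink, so in a real deployment you would also want something along these lines (eth1 is a hypothetical uplink interface):

ip link set br0 up
ip link set eth1 master br0

After lxd init, the default profile's eth0 device should then point at the existing bridge, which you can confirm with:

lxc profile device show default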
Joining nodes:
root@lxd:~# ip link add br0 type bridge
root@lxd:~# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What name should be used to identify this node in the cluster? [default=lxd]: node2
What IP address or DNS name should be used to reach this node? [default=10.55.60.242]:
Are you joining an existing cluster? (yes/no) [default=no]: yes
IP address or FQDN of an existing cluster node: 10.55.60.66
Cluster fingerprint: 02893c0a683c7c14cbaad45b65838d3cf649adc58ca6178e22ae731e2991c33d
You can validate this fingerprint by running "lxc info" locally on an existing node.
Is this the correct fingerprint? (yes/no) [default=no]: yes
Cluster trust password:
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose the local disk or dataset for storage pool "local" (empty for loop disk): /dev/vdc
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
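After the joins, a quick sanity check that instances can be placed on a specific member might look like this (the image alias and container name are just examples):

lxc cluster list
lxc launch ubuntu:18.04 c1 --target node2
lxc list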
Interesting. Can you replicate this across multiple VM hosts?
With Ceph, the tutorials I’ve found seem to suggest that multiple machines are required, separate from the LXD hosts. Is that the case, or is there an LXD-oriented guide you know of?