How to cluster an LXC container from one machine to another

Hi everyone, I am trying to set up a mirror or cluster so that all the data from one machine is carried over to another machine. Is it possible? If so, how do I do it?

Thanks,
Joseph

I’m not sure I understand your question. Can you please describe your use case or end goal and what you are trying to achieve?

  1. System 1 has an LXD container running; I want to mirror or cluster system 1 to another system (system 2).
  2. Any input given on system 1 should also be captured on system 2.
    Is this possible to achieve?

By “mirror” do you mean a copy of the underlying container image? You can’t have the same running container live on both system 1 and system 2; at most you can have a copy of the container image (i.e. the root file system).

Yeah, I don’t need both systems to be live. If system 1 is live (master), then system 2 will act as a backup (slave). Once system 1 fails, system 2 should become live.

You can do this by using an LXD storage pool backed by Ceph. You should use at least 3 machines, though (for both the Ceph cluster and the LXD cluster); otherwise, if you lose or even just shut down 1 machine, your LXD cluster and storage will be unavailable.

OK. In case I use 3 systems for the cluster and shut down system 1, will systems 2 and 3 stay live automatically, or will they also be unavailable?
How does a Ceph cluster work?

If you shut down system 1 and your container lives on system 1, the container will be stopped and will not be automatically migrated to system 2 or system 3. However, since the container’s rootfs is replicated by Ceph on both system 2 and system 3, you will be able to manually move the container from system 1 to system 2 or system 3 even while system 1 is offline, and then you can restart the container on system 2 or system 3. You can also script this logic according to your own rules.
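For example, the manual recovery could look something like this, run from a surviving cluster member (a sketch assuming a container named c1; substitute your actual container and member names):

# relocate the stopped container to a healthy member, then start it
lxc move c1 --target system2
lxc start c1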

Yeah, doing it manually is also fine. Can you share the procedure for setting up the Ceph cluster?
Do you know how to find the cluster password in LXD?

You’ll first need to create the Ceph cluster, then create an LXD storage pool in your LXD cluster, e.g.:

lxc storage create my-pool ceph source=my-ceph-pool --target system1
lxc storage create my-pool ceph source=my-ceph-pool --target system2
lxc storage create my-pool ceph source=my-ceph-pool --target system3
lxc storage create my-pool ceph volume.size=XXXGB ceph.osd.pg_num=1

tweaking the various parameters as you wish. Note that the first three commands only register a pending pool on each cluster member; the final one (without --target) actually instantiates the pool across the whole cluster. You can also create the LXD Ceph pool when you first create the LXD cluster, using the lxd init wizard.
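On the Ceph side, the OSD pool referenced by source=my-ceph-pool may need to exist before the LXD pool is created, depending on your LXD version. A minimal sketch, assuming a working Ceph cluster with admin credentials; the pool name and placement-group count here are just placeholders:

# create the RADOS pool that will back the LXD storage pool
ceph osd pool create my-ceph-pool 32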

Thank you, I will check and report back. How can I find the cluster trust password asked for by lxd init?

Try lxd sql global "SELECT * FROM config WHERE key='core.trust_password'".
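If that only shows a hashed value rather than the plain-text password, you can instead set a new trust password on one of the existing cluster members and use that when joining. Assuming the standard LXD CLI (the password here is a placeholder):

# set a fresh cluster trust password on an existing member
lxc config set core.trust_password my-secret-password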

It helped me find it.
While trying to init the cluster I am getting the following error:
root@ah:~# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What name should be used to identify this node in the cluster? [default=ah]:
What IP address or DNS name should be used to reach this node? [default=192.168.1.43]:
Are you joining an existing cluster? (yes/no) [default=no]: yes
IP address or FQDN of an existing cluster node: 192.168.1.18
Cluster fingerprint: ************************
You can validate this fingerprint by running “lxc info” locally on an existing node.
Is this the correct fingerprint? (yes/no) [default=no]: yes
Cluster trust password:
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Error: Failed to setup trust relationship with cluster: failed client cert to cluster: not authorized

What would be the reason for getting this “not authorized” error?