I have installed a Ceph cluster with 4 nodes. I then tried to initialize LXD on one of the nodes with `lxd init`. Here are the answers I gave (a preseed sketch of the same settings follows the list):
- use LXD clustering: yes
- joining an existing cluster: no
- configure a new local storage pool: no
- configure a new remote storage pool: yes
- storage backend: ceph
- create a new CEPH pool: yes
- Name of existing CEPH cluster: ceph
- Name of the OSD storage pool: lxd
- Number of placement groups: 32
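For reference, here is a rough non-interactive equivalent of those answers as a preseed. This is only a sketch, not the exact YAML `lxd init` would print: the server name `node1`, the address `10.0.0.1:8443`, and the LXD pool name `remote` are placeholders, and I have left out the network/profile questions that I did not list above.

```sh
# Hypothetical preseed matching the answers above (placeholders: node1, 10.0.0.1, "remote").
cat <<'EOF' | lxd init --preseed
config:
  core.https_address: 10.0.0.1:8443
cluster:
  server_name: node1
  enabled: true
storage_pools:
- name: remote
  driver: ceph
  config:
    ceph.cluster_name: ceph
    ceph.osd.pool_name: lxd
    ceph.osd.pg_num: "32"
EOF
```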
It hung after the last question, which asks whether the `lxd init` preseed YAML should be printed. I stopped the init process and ran `lxc storage list`; no pool had been created.
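In case it helps, I can also check on the Ceph side (using the standard Ceph CLI) whether `lxd init` got as far as creating the OSD pool before it hung:

```sh
# Did lxd init create the OSD pool on the Ceph side?
ceph osd pool ls detail | grep lxd

# And confirm nothing was registered in LXD itself
lxc storage list
```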
Here is the output of `ceph -s`:

```
  cluster:
    id:     31dde3e0-65ed-4ac3-b48d-a051541199e1
    health: HEALTH_WARN
            Reduced data availability: 164 pgs inactive
            Degraded data redundancy: 164 pgs undersized

  services:
    mon: 4 daemons, quorum node1,node2,node3,node4 (age 104m)
    mgr: node2(active, since 113m), standbys: node3,node4,node1
    mds: 4 up:standby
    osd: 8 osds: 8 up (since 109m), 8 in (since 9h); 1 remapped pgs

  data:
    pools:   4 pools, 165 pgs
    objects: 0 objects, 0 B
    usage:   8.1 GiB used, 13 TiB / 13 TiB avail
    pgs:     99.394% pgs not active
             164 undersized+peered
             1 active+clean+remapped
```
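Since almost every PG is stuck `undersized+peered`, I can gather more detail on why they are not going active with the standard Ceph commands below and post the output if that would help:

```sh
# More detail on the stuck PGs and the pools they belong to
ceph health detail
ceph osd tree
ceph osd pool ls detail        # shows size / min_size / crush_rule per pool
ceph pg dump_stuck inactive    # lists the stuck PGs with their acting sets
```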
Please help. Thanks!