So I have tried setting up a cluster again, this time with three nodes. I have set the LXD snap's ceph.external option and reloaded the daemon on all three nodes. The Ceph cluster looks as follows:
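For reference, this is roughly what I ran on each node (snap-packaged LXD; the reload command may differ for other installs):

$ sudo snap set lxd ceph.external=true
$ sudo systemctl reload snap.lxd.daemon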
$ ceph -s
  cluster:
    id:     92031cf6-bf96-11eb-a07c-5b3f8f9b90b4
    health: HEALTH_WARN
            Degraded data redundancy: 10 pgs undersized

  services:
    mon: 3 daemons, quorum node1,node2,node3 (age 6m)
    mgr: node1.yvpsor(active, since 32m), standbys: node2.zsvbgi
    osd: 3 osds: 3 up (since 6m), 3 in (since 6m)

  data:
    pools:   2 pools, 33 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 11 TiB / 11 TiB avail
    pgs:     23 active+clean
             10 active+undersized
$ sudo ceph osd df
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS
0 hdd 7.17670 1.00000 7.2 TiB 1.0 GiB 3 MiB 0 B 1 GiB 7.2 TiB 0.01 0.51 33 up
1 hdd 3.53419 1.00000 3.5 TiB 1.0 GiB 3 MiB 0 B 1 GiB 3.5 TiB 0.03 1.04 33 up
2 ssd 0.33800 1.00000 346 GiB 1.0 GiB 192 KiB 0 B 1 GiB 345 GiB 0.29 10.88 23 up
TOTAL 11 TiB 3.0 GiB 6.2 MiB 0 B 3 GiB 11 TiB 0.03
MIN/MAX VAR: 0.51/10.88 STDDEV: 0.15
$ sudo ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 11 TiB 11 TiB 6.1 MiB 2.0 GiB 0.02
ssd 346 GiB 345 GiB 200 KiB 1.0 GiB 0.29
TOTAL 11 TiB 11 TiB 6.3 MiB 3.0 GiB 0.03
--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
device_health_metrics 1 1 0 B 0 0 B 0 3.5 TiB
lxd-storage 2 32 0 B 0 0 B 0 3.5 TiB
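For reference, the replication size and CRUSH rule of the lxd-storage pool can be checked with the standard commands below (shown as a sketch; I have not pasted their output here):

$ sudo ceph osd pool get lxd-storage size
$ sudo ceph osd pool get lxd-storage crush_rule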
I have tried running lxd init both through a preseed file and through the interactive CLI prompt, and both seem to get stuck when attempting to initialise the cluster.
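For the preseed attempt I fed the YAML to lxd init on stdin, roughly like this (the file name is just an example):

$ cat preseed.yaml | lxd init --preseed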
Following is the input given to the interactive lxd init run:
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What name should be used to identify this node in the cluster? [default=node1]:
What IP address or DNS name should be used to reach this node? [default=192.168.6.10]: 192.168.1.110
Are you joining an existing cluster? (yes/no) [default=no]:
Setup password authentication on the cluster? (yes/no) [default=yes]:
Trust password for new clients:
Again:
Do you want to configure a new local storage pool? (yes/no) [default=yes]: no
Do you want to configure a new remote storage pool? (yes/no) [default=no]: yes
Name of the storage backend to use (ceph, cephfs) [default=ceph]: ceph
Create a new CEPH pool? (yes/no) [default=yes]: no
Name of the existing CEPH cluster [default=ceph]:
Name of the existing OSD storage pool [default=lxd]: lxd-storage
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]:
Would you like to create a new Fan overlay network? (yes/no) [default=yes]: no
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
config:
  core.https_address: 192.168.1.110:8443
  core.trust_password: secret
networks: []
storage_pools:
- config:
    ceph.cluster_name: ceph
    ceph.osd.pool_name: lxd-storage
    source: lxd-storage
  description: ""
  name: remote
  driver: ceph
profiles:
- config: {}
  description: ""
  devices:
    root:
      path: /
      pool: remote
      type: disk
  name: default
projects: []
cluster:
  server_name: node1
  enabled: true
  member_config: []
  cluster_address: ""
  cluster_certificate: ""
  server_address: ""
  cluster_password: ""
### lxd blocks here and does not return ###
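In case it helps with debugging, the daemon journal can be followed from another shell while lxd init hangs (snap-packaged LXD assumed):

$ sudo journalctl -u snap.lxd.daemon -f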
Has anyone else experienced this behaviour?