I have a fresh 3-node Ceph cluster set up, with a CephFS filesystem created. Here are the cluster details:
$ sudo ceph -s
  cluster:
    id:     250f2880-9203-11eb-b6c2-013097c225cb
    health: HEALTH_WARN
            Degraded data redundancy: 7/66 objects degraded (10.606%), 4 pgs degraded, 16 pgs undersized

  services:
    mon: 3 daemons, quorum node1,node2,node3 (age 40m)
    mgr: node1.vtgvss(active, since 31m), standbys: node2.yweyet
    mds: lxd-storage:1 {0=lxd-storage.node3.vuxquu=up:active} 1 up:standby
    osd: 3 osds: 3 up (since 34m), 3 in (since 34m)

  data:
    pools:   3 pools, 65 pgs
    objects: 22 objects, 7.9 KiB
    usage:   3.0 GiB used, 11 TiB / 11 TiB avail
    pgs:     7/66 objects degraded (10.606%)
             49 active+clean
             12 active+undersized
             4  active+undersized+degraded
$ sudo ceph df
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
hdd    11 TiB   11 TiB   8.6 MiB   2.0 GiB       0.02
ssd    346 GiB  345 GiB  284 KiB   1.0 GiB       0.29
TOTAL  11 TiB   11 TiB   8.9 MiB   3.0 GiB       0.03

--- POOLS ---
POOL                     ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics     1    1      0 B        0      0 B      0    3.5 TiB
cephfs.lxd-storage.meta   2   32  6.2 KiB       22  1.0 MiB      0    3.9 TiB
cephfs.lxd-storage.data   3   32      0 B        0      0 B      0    3.5 TiB
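The degraded/undersized warning looks like the usual state of a small cluster where a replica count of 3 cannot be fully placed across only 3 OSDs split between the hdd and ssd device classes, and I assume it is a separate issue from the LXD error below. For reference, the replica counts and CRUSH layout can be inspected with commands like these (pool names taken from the ceph df output above):

$ sudo ceph osd pool get cephfs.lxd-storage.meta size
$ sudo ceph osd pool get cephfs.lxd-storage.data crush_rule
$ sudo ceph osd tree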
All nodes are using LXD 4.11.
I created the remote LXD cluster storage pool through lxd init as follows:
$ sudo lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What name should be used to identify this node in the cluster? [default=node1]:
What IP address or DNS name should be used to reach this node? [default=192.168.1.110]:
Are you joining an existing cluster? (yes/no) [default=no]:
Setup password authentication on the cluster? (yes/no) [default=yes]:
Trust password for new clients:
Again:
Do you want to configure a new local storage pool? (yes/no) [default=yes]: no
Do you want to configure a new remote storage pool? (yes/no) [default=no]: yes
Name of the storage backend to use (ceph, cephfs) [default=ceph]: cephfs
Create a new CEPHFS pool? (yes/no) [default=yes]: no
Name of the existing CEPHFS pool or dataset: lxd-storage
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]:
Would you like to create a new Fan overlay network? (yes/no) [default=yes]:
What subnet should be used as the Fan underlay? [default=auto]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
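Since I answered no to printing the preseed, here is roughly what I believe the equivalent preseed would look like for the cluster and storage parts (a sketch reconstructed from my answers above, with keys assumed from the LXD preseed format; network and image settings omitted):

config:
  core.https_address: 192.168.1.110:8443
  core.trust_password: <redacted>
cluster:
  server_name: node1
  enabled: true
storage_pools:
- name: remote
  driver: cephfs
  config:
    source: lxd-storage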
The LXD cluster looks like the following:
$ lxc cluster list
+-------+----------------------------+----------+--------+-------------------+--------------+----------------+
| NAME | URL | DATABASE | STATE | MESSAGE | ARCHITECTURE | FAILURE DOMAIN |
+-------+----------------------------+----------+--------+-------------------+--------------+----------------+
| node1 | https://192.168.1.110:8443 | YES | ONLINE | fully operational | x86_64 | default |
+-------+----------------------------+----------+--------+-------------------+--------------+----------------+
| node2 | https://192.168.1.176:8443 | YES | ONLINE | fully operational | x86_64 | default |
+-------+----------------------------+----------+--------+-------------------+--------------+----------------+
| node3 | https://192.168.1.193:8443 | YES | ONLINE | fully operational | x86_64 | default |
+-------+----------------------------+----------+--------+-------------------+--------------+----------------+
The CephFS storage pool seems to have been created correctly:
$ lxc storage show remote
config:
cephfs.cluster_name: ceph
cephfs.path: lxd-storage
cephfs.user.name: admin
description: ""
name: remote
driver: cephfs
used_by:
- /1.0/instances/test-container
- /1.0/instances/ubuntu-container
- /1.0/profiles/default
- /1.0/profiles/vm
status: Created
locations:
- node1
- node2
- node3
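As shown in used_by above, both the default and vm profiles reference this pool as their root disk. lxd init wired up the default profile itself; for my own vm profile I added it with something like this (reconstructed from memory, the device name root is my choice):

$ lxc profile device add vm root disk path=/ pool=remote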
However, when creating an instance, the following error occurs:
$ lxc init ubuntu:20.04 test-container
Creating test-container
Error: Failed instance creation: Load instance storage pool: Not implemented
The instance is, however, still listed under lxc list:
$ lxc list
+------------------+---------+------+------+-----------+-----------+----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | LOCATION |
+------------------+---------+------+------+-----------+-----------+----------+
| test-container | STOPPED | | | CONTAINER | 0 | node2 |
+------------------+---------+------+------+-----------+-----------+----------+
| ubuntu-container | STOPPED | | | CONTAINER | 0 | node1 |
+------------------+---------+------+------+-----------+-----------+----------+
I am unable to delete the storage pool in order to reconfigure it, since it is in use; however, I am also unable to delete the instances:
$ lxc delete test-container
Error: Not implemented
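Even a forced delete seems likely to hit the same code path, since the failure appears to happen while loading the instance's storage pool; for completeness, this is what I would try next (untested sketch):

$ lxc delete test-container --force
$ lxc delete ubuntu-container --force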
Any tips are appreciated.