Take a look at [1], where Stéphane suggested the following:
The way I do it is I make sure that I’m dealing with at least 3 systems (otherwise consensus for both LXD and Ceph will be a problem), then pick 3 to host both OSD and Ceph services (MON, MGR, MDS) with the others just running OSD on their disks.
And
You need to account for the extra memory and CPU usage that comes from running those services, but it’s usually okay to do it that way, eliminates the need for dedicated storage hardware and lets you spread your storage across more systems.
Although I’m a big proponent of using standard Ubuntu/Debian packages, I ended up installing from the upstream Ceph repository by putting this in /etc/apt/sources.list.d/ceph.list (in hindsight it is probably not necessary):

deb https://download.ceph.com/debian-octopus focal main
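For the new source to be usable, apt also needs to trust the repository's signing key. A minimal sketch, assuming the standard Ceph release key location (and `apt-key`, which was still current on focal):

```shell
# Fetch and register the Ceph release signing key, then refresh indexes.
# Assumption: the key lives at the usual download.ceph.com location.
wget -qO- https://download.ceph.com/keys/release.asc | apt-key add -
apt update
```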
First I installed docker.io, because otherwise cephadm will complain or install it on its own.
apt install docker.io
systemctl enable docker
systemctl start docker
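One step the walkthrough glosses over is getting cephadm itself onto the first node. A sketch, assuming the octopus repository above is configured (the package is also distributed as a standalone script on GitHub):

```shell
# Assumption: the cephadm package comes from the Ceph octopus
# repository added to /etc/apt/sources.list.d/ceph.list earlier.
apt install cephadm
```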
Then I bootstrapped the Ceph cluster with:
mkdir -p /etc/ceph
cephadm bootstrap --mon-ip 172.16.16.45 --ssl-dashboard-port 9443
Without specifying the dashboard port there is a conflict with 8443 already in use by LXD.
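To confirm the bootstrap worked, the cluster status can be checked right away. `cephadm shell` runs the ceph CLI inside a container, so nothing beyond cephadm needs to be installed on the host for this:

```shell
# Show cluster health; after bootstrap this should report one
# monitor and one manager, and HEALTH_WARN until OSDs are added.
cephadm shell -- ceph -s
```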
Next, install ceph-common on the next Ceph node. After that, on the first Ceph node you can add the new node as follows (where second-node is the hostname of the new node):
ceph orch host add second-node
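For `ceph orch host add` to succeed, the orchestrator's SSH key normally has to be authorized on the new node first. A sketch, assuming bootstrap wrote its public key to the default path:

```shell
# Assumption: cephadm bootstrap created /etc/ceph/ceph.pub (the
# default). Authorize it for root on the node being added.
ssh-copy-id -f -i /etc/ceph/ceph.pub root@second-node
```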
And so on for the other Ceph nodes.
Finally, add an OSD for each disk (/dev/sdc in this example):
ceph orch daemon add osd second-node:/dev/sdc
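Before adding OSDs, it can help to check which disks the orchestrator actually sees on each host and whether they are marked available (a disk with partitions or an existing filesystem will be rejected):

```shell
# List storage devices known to the orchestrator on every host,
# including size and an available yes/no column.
ceph orch device ls
```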