Unable to add ceph as a remote storage pool when creating Incus cluster

Hello,

I just installed Incus version 6.0 on Ubuntu Server 24.04 LTS, running on four Raspberry Pi 5 nodes. I also installed MicroCeph on the same four nodes.

I am now attempting to create the incus cluster.

When I run incus admin init, I go through the prompts, but when it asks whether I want to configure a new remote storage pool and I say yes, the next question is “Create a new LVMCLUSTER pool?” It never prompts me for which storage backend to use (e.g. ceph, cephfs, etc.).

When I run microceph.ceph status, I get the following:

  cluster:
    id:     67a91241-fdc2-4449-9b65-31824f40ffbc
    health: HEALTH_OK
 
  services:
    mon: 4 daemons, quorum rpicluster01,rpicluster02,rpicluster03,rpicluster04 (age 7m)
    mgr: rpicluster01(active, since 9m), standbys: rpicluster02, rpicluster04, rpicluster03
    osd: 3 osds: 3 up (since 5m), 3 in (since 5m)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 577 KiB
    usage:   81 MiB used, 86 GiB / 86 GiB avail
    pgs:     1 active+clean

My question is: why can’t I choose ceph as a remote storage backend?

What does incus info show under storage_supported_drivers? Are the commands “ceph” and “rbd” available in your PATH?
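
For example, a quick way to check both (just a sketch; the grep range is arbitrary):

incus info | grep -A 20 storage_supported_drivers
command -v ceph rbd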

See this post for more info: Unable to migrate LXD 5.21 with microceph to Incus 6.0 - #2 by stgraber

In any case, you can always say “no” and add the ceph pool later. incus admin init is really just a shortcut for creating an initial network, profile and storage. If the default pool uses local storage (e.g. dir / lvm / zfs) and you add a ceph pool later, you can edit the default profile so that any new containers/VMs you create use ceph by default.
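
For example, assuming the Ceph-backed pool ends up being called "remote" (a placeholder name), pointing the default profile's root disk at it would look roughly like this:

incus profile device remove default root
incus profile device add default root disk path=/ pool=remote

New containers and VMs then go to the ceph pool by default, while existing instances stay on their current pool.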

Thank you for your quick response! The following shows under storage_supported_drivers:

  storage_supported_drivers:
  - name: zfs
    version: 2.2.2-0ubuntu9
    remote: false
  - name: btrfs
    version: 6.6.3
    remote: false
  - name: dir
    version: "1"
    remote: false
  - name: lvm
    version: 2.03.16(2) (2022-05-18) / 1.02.185 (2022-05-18) / 4.48.0
    remote: false

Yes, the “ceph” and “rbd” commands are in my PATH. I’ll continue to research via the link you provided. Thank you so much.

Quick update.

1.) I created a new OSD pool “incus-ceph”. Details are shown below:

root@rpicluster01:~# ceph osd pool ls detail
pool 1 '.mgr' replicated size 3 min_size 2 crush_rule 2 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 21 flags hashpspool stripe_width 0 pg_num_max 32 pg_num_min 1 application mgr read_balance_score 3.00
pool 2 'incus-ceph' replicated size 3 min_size 2 crush_rule 2 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 52 lfor 0/0/48 flags hashpspool stripe_width 0 application rbd read_balance_score 1.41
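
For reference, a pool like this can be created from any node with something along these lines (the PG count is just an example, and microceph.ceph can stand in for ceph if only the snap alias is on the PATH):

ceph osd pool create incus-ceph 32
ceph osd pool application enable incus-ceph rbd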

2.) When I run the command incus storage create remote ceph source=incus-ceph, I get an error: Error: Config key "source" is cluster member specific.

3.) So I read your documentation and came to understand that when creating a storage pool in a cluster, you must create the storage pool for each cluster member separately (the full documented sequence is sketched below). I then ran this command on node 1 and got another error message:
root@rpicluster01:~# incus storage create remote ceph source=incus-ceph --target=rpicluster01
Error: Required tool 'ceph' is missing

4.) I then installed ceph-common.
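
For reference, the full documented sequence looks roughly like this (the member names are the ones from my cluster, and incus-ceph is the OSD pool from step 1):

incus storage create remote ceph source=incus-ceph --target=rpicluster01
incus storage create remote ceph source=incus-ceph --target=rpicluster02
incus storage create remote ceph source=incus-ceph --target=rpicluster03
incus storage create remote ceph source=incus-ceph --target=rpicluster04
incus storage create remote ceph

The first four calls only record the member-specific configuration as pending; the final call without --target actually creates the pool across the cluster.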

Can you help me understand the error message in step 3?

Incus looks for available storage drivers on startup, so you may need to systemctl restart incus to have it pick up the presence of the ceph and rbd commands.

Thank you @stgraber. Have another update:

Following on from step 4 above:

5.) I ran the command incus storage create remote ceph source=incus-ceph --target=rpicluster01 on each node, adjusting --target for each member.

6.) I ran systemctl restart incus and, for good measure, rebooted the system.

7.) I then ran the command below and got another error message:

root@rpicluster01:~# incus storage create remote ceph
Error: Failed checking the existence of the ceph "incus-ceph" osd pool while attempting to create it because of an internal error: Failed to run: ceph --name client.admin --cluster ceph osd pool get incus-ceph size: exit status 1 (Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)'))

Am I getting closer?

Make sure all machines in your cluster have ceph and rbd available, and that they all have a /etc/ceph/ceph.conf config file and a /etc/ceph/client.admin.keyring keyring file.
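
If the Ceph side comes from MicroCeph, one way to do that (a sketch assuming the MicroCeph snap's default layout; adjust the paths if your install differs) is to link the snap's config into /etc/ceph on every cluster member:

# assumes the MicroCeph snap layout; paths and keyring name may differ
mkdir -p /etc/ceph
ln -s /var/snap/microceph/current/conf/ceph.conf /etc/ceph/ceph.conf
ln -s /var/snap/microceph/current/conf/ceph.keyring /etc/ceph/ceph.keyring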

Thank you, @stgraber. That solved my problem. Thanks again! 🙂