I just installed Incus 6.0 on Ubuntu Server 24.04 LTS, running on four Raspberry Pi 5 nodes. I also installed MicroCeph on the same four nodes.
I am now attempting to create the incus cluster.
When I run incus admin init, I go through the prompts, but when it asks whether I want to configure a new remote storage pool and I say yes, the next question is “Create a new LVMCLUSTER pool?” It never prompts me for the name of the storage backend to use (e.g., ceph, cephfs, etc.).
When I run microceph.ceph status, I get the following:
In any case, you can always say “no” and add the ceph pool later. incus admin init is really just a shortcut for creating an initial network, profile, and storage pool. If the default pool uses local storage (e.g., dir / lvm / zfs) and you add a ceph pool later, you can edit the default profile so that any new containers/VMs you create use ceph by default.
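As a sketch of that last step, once a ceph pool exists (I'm using “remote” as the pool name here, which is just an example), repointing the default profile's root disk is a one-liner, assuming no existing instances are still rooted in the old pool:

    # Make new instances land on the ceph pool by default:
    incus profile device set default root pool=remote

Newly created containers/VMs will then use ceph without needing a -s flag at launch time.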
2.) When I run the command incus storage create remote ceph source=incus-ceph, I get an error:
Error: Config key "source" is cluster member specific.
3.) So I read your documentation and came to understand that when creating a storage pool in a cluster, you must create the storage pool for each cluster member separately. I then ran this command on node 1 and got another error:
root@rpicluster01:~# incus storage create remote ceph source=incus-ceph --target=rpicluster01
Error: Required tool 'ceph' is missing
Incus looks for available storage drivers on startup, so you may need to systemctl restart incus to have it pick up the presence of the ceph and rbd commands.
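If the tools aren't installed at all, on Ubuntu they ship in the ceph-common package. Something along these lines should work (a sketch; storage_supported_drivers is the field from the server API that incus info prints, and ceph should appear in it once the tools are found):

    apt install --yes ceph-common
    systemctl restart incus
    incus info | grep -A 20 storage_supported_drivers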
5.) I ran the command incus storage create remote ceph source=incus-ceph --target=rpicluster01 on each node.
6.) I ran systemctl restart incus and, for good measure, rebooted the system.
7.) I then ran the command below and got another error message:
root@rpicluster01:~# incus storage create remote ceph
Error: Failed checking the existence of the ceph "incus-ceph" osd pool while attempting to create it because of an internal error: Failed to run: ceph --name client.admin --cluster ceph osd pool get incus-ceph size: exit status 1 (Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)'))
Make sure all machines in your cluster have ceph and rbd available, and that they all have an /etc/ceph/ceph.conf config file and an /etc/ceph/ceph.client.admin.keyring keyring file.
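With MicroCeph, those files live inside the snap's data directory rather than under /etc/ceph. One way to wire that up (a sketch, assuming the default snap paths; run it on every node) is to symlink them into place:

    # Expose MicroCeph's config and admin keyring where the ceph CLI expects them:
    mkdir -p /etc/ceph
    ln -s /var/snap/microceph/current/conf/ceph.conf /etc/ceph/ceph.conf
    ln -s /var/snap/microceph/current/conf/ceph.keyring /etc/ceph/ceph.client.admin.keyring

After that, the final incus storage create remote ceph (without --target) should be able to reach the cluster.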