I’m trying to deploy a VM on my RPi4 cluster with Incus 6.3 and I’m getting this error:
$ incus start test-vm --project test --console
Error: Failed setting up device via monitor: Failed adding block device for disk device "root": Failed adding block device: error reading conf file /etc/ceph/ceph.conf: No such file or directory
Try `incus info --show-log test-vm` for more info
Below is the configuration:
Instance config:
$ incus config show --expanded test-vm --project test
architecture: aarch64
config:
image.architecture: arm64
image.description: Alpine edge arm64 (20240715_13:01)
image.os: Alpine
image.release: edge
image.requirements.secureboot: "false"
image.serial: "20240715_13:01"
image.type: disk-kvm.img
image.variant: default
security.secureboot: "false"
volatile.apply_template: create
volatile.base_image: 2f8163c0b5f6b63cf28d2048991ae055aed222ae5c29d0ce1645aa7bf647c41f
volatile.cloud-init.instance-id: 34c6de9a-2b4b-42f6-b5f8-a8edce2744b1
volatile.uuid: 791f8fad-460a-46e6-9ae8-1ccdcf1e74f2
volatile.uuid.generation: 791f8fad-460a-46e6-9ae8-1ccdcf1e74f2
devices:
root:
path: /
pool: test_remote
type: disk
ephemeral: false
profiles:
- stor.remote
stateful: false
description: ""
Profile config:
$ incus profile show stor.remote --project test
config: {}
description: ""
devices:
root:
path: /
pool: test_remote
type: disk
name: stor.remote
used_by:
- /1.0/instances/test-pki?project=test
- /1.0/instances/test-vault?project=test
- /1.0/instances/test-vm?project=test
project: test
Storage config:
$ incus storage show test_remote
config:
ceph.cluster_name: ceph
ceph.osd.pg_num: "32"
ceph.osd.pool_name: test_remote
ceph.user.name: admin
volatile.pool.pristine: "true"
description: ""
name: test_remote
driver: ceph
used_by:
- /1.0/images/28fcfdfbd2a76019a1b9db93c4aeeafda091cf6610dc31b1815b48b9e38adcb5
- /1.0/images/2f8163c0b5f6b63cf28d2048991ae055aed222ae5c29d0ce1645aa7bf647c41f
- /1.0/images/65e7ee6e5f32f92f1d2acc0921cab309bce39f2400cce1580267323e66c3f4ac
- /1.0/images/cff1686997a3f1313e910f402992cc6c6b421a7931fa4465e29e68da807785ac
- /1.0/instances/test-pki?project=test
- /1.0/instances/test-vault?project=test
- /1.0/instances/test-vm?project=test
- /1.0/profiles/app.ad-home-dc?project=test
- /1.0/profiles/app.dhcp?project=test
- /1.0/profiles/app.dns?project=test
- /1.0/profiles/app.svk-tun?project=test
- /1.0/profiles/default?project=test
- /1.0/profiles/stor.remote?project=test
status: Created
locations:
- cl-05
- cl-06
- cl-07
- cl-01
- cl-02
- cl-03
- cl-04
The ceph-common package is installed in addition to the microceph package:
$ apt list --installed ceph*
Listing... Done
ceph-common/noble,now 19.2.0~git20240301.4c76c50-0ubuntu6 arm64 [installed]
The /etc/ceph/ceph.conf file is linked to microceph’s config file and is readable by every user:
$ ls -l /etc/ceph/ceph.conf
lrwxrwxrwx 1 root root 42 Jun 19 23:08 /etc/ceph/ceph.conf -> /var/snap/microceph/current/conf/ceph.conf
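As a sanity check on my side (hypothetical paths below, not the real microceph tree): my understanding is that the lrwxrwxrwx mode on the link itself is irrelevant; what matters is that the target file and every directory leading to it are readable/traversable by the process opening the file. A minimal sketch of that:

```shell
# Hypothetical temp-dir reproduction of the symlink layout; the real target
# lives under /var/snap/microceph/current/conf/.
tmp=$(mktemp -d)
mkdir -p "$tmp/snapconf"
printf '[global]\n' > "$tmp/snapconf/ceph.conf"
chmod 0644 "$tmp/snapconf/ceph.conf"
# The symlink always reports lrwxrwxrwx, regardless of target permissions.
ln -s "$tmp/snapconf/ceph.conf" "$tmp/ceph.conf"
# Reading through the link succeeds only if the target path is accessible
# to the reading process.
cat "$tmp/ceph.conf"
rm -rf "$tmp"
```

Reading through the link works for an unprivileged shell on the host, so plain file permissions do not look like the problem to me.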
/etc/ceph/ceph.conf content:
# # Generated by MicroCeph, DO NOT EDIT.
[global]
run dir = /var/snap/microceph/982/run
fsid = 8aa2cf2d-582d-42f1-aed1-c7d92b945cec
mon host = xxx.xxx.xxx.xx1,xxx.xxx.xxx.xx2,xxx.xxx.xxx.xx3,xxx.xxx.xxx.xx4,xxx.xxx.xxx.xx5,xxx.xxx.xxx.xx6,xxx.xxx.xxx.xx7
public_network = xxx.xxx.xxx.xx7/23
auth allow insecure global id reclaim = false
ms bind ipv4 = true
ms bind ipv6 = false
[client]
Containers deploy and run on that storage successfully; however, the VM does not start.