Hi,
I've run into the same issue here.
Here's how I configured the cephfs storage:
lxc storage create pool1 cephfs source=cephfs
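For reference, the same pool can also be created with the Ceph cluster and client user spelled out explicitly. As far as I understand, cephfs.cluster_name and cephfs.user.name are the relevant LXD cephfs driver options, and the values below are just the defaults, so treat this as a sketch rather than a confirmed recipe:

```shell
# Create the cephfs-backed pool, naming the Ceph cluster and the
# cephx client explicitly (values shown are the usual defaults;
# adjust to your own setup)
lxc storage create pool1 cephfs source=cephfs \
    cephfs.cluster_name=ceph \
    cephfs.user.name=admin
```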
After configuring, the storage shows up properly:
lxc storage ls
+-------+--------+----------------------------------+-------------+---------+---------+
| NAME | DRIVER | SOURCE | DESCRIPTION | USED BY | STATE |
+-------+--------+----------------------------------+-------------+---------+---------+
| local | dir | /var/lib/lxd/storage-pools/local | | 2 | CREATED |
+-------+--------+----------------------------------+-------------+---------+---------+
| pool1 | cephfs | cephfs | | 0 | CREATED |
+-------+--------+----------------------------------+-------------+---------+---------+
and it is mounted correctly:
df -hT
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 205M 0 205M 0% /dev
tmpfs tmpfs 46M 684K 46M 2% /run
/dev/sda1 ext4 9.3G 4.3G 4.5G 49% /
tmpfs tmpfs 229M 0 229M 0% /dev/shm
tmpfs tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs tmpfs 229M 28K 229M 1% /var/lib/ceph/osd/ceph-0
tmpfs tmpfs 46M 0 46M 0% /run/user/1000
tmpfs tmpfs 100K 0 100K 0% /var/lib/lxd/shmounts
tmpfs tmpfs 100K 0 100K 0% /var/lib/lxd/devlxd
10.0.0.87:6789,10.0.0.88:6789,10.0.0.89:6789:/ ceph 24G 0 24G 0% /var/lib/lxd/storage-pools/pool1
Listing /var/lib/lxd/storage-pools/pool1 shows me the folders I've created separately:
ls -l /var/lib/lxd/storage-pools/pool1
total 0
drwx--x--x 2 root root 0 Aug 28 14:54 custom
drwx--x--x 2 root root 0 Aug 28 14:54 custom-snapshots
drwxr-xr-x 2 1000000 1000000 0 Aug 28 15:22 mnt
drwxr-xr-x 2 1000000 1000000 0 Aug 28 15:22 settings
Then I add them as disk devices:
lxc -v config device add "tv" "settings" disk source=cephfs:pool1/settings path="/etc/settings/"
But when trying to start the container, the same error occurs:
lxc start tv
Error: Failed to start device "settings": Unable to mount "10.0.0.87:6789,10.0.0.88:6789,10.0.0.89:6789:/settings" at "/var/lib/lxd/devices/tv/disk.settings.etc-settings-" with filesystem "ceph": no route to host
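In case it helps anyone reproduce, the exact mount LXD attempts (taken from the error message above) can be tried by hand. The /mnt/test mount point is hypothetical, and the cephx options may need adjusting for your setup:

```shell
# First check that the monitors are reachable at all, since the error
# is "no route to host"
nc -zv 10.0.0.87 6789
nc -zv 10.0.0.88 6789
nc -zv 10.0.0.89 6789

# Then try the same cephfs mount LXD attempts, outside of LXD
# (/mnt/test is a hypothetical scratch mount point; name=/secret
# options depend on your cephx configuration)
mkdir -p /mnt/test
mount -t ceph 10.0.0.87:6789,10.0.0.88:6789,10.0.0.89:6789:/settings /mnt/test \
    -o name=admin
```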
Should I add the disk source as /var/lib/lxd/storage-pools/pool1 instead of cephfs? That does look like an alternative, but what's the proper way to do it here?
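To clarify what I mean, the alternative I'm considering would look roughly like this (a sketch only, not something I've confirmed works):

```shell
# Hypothetical alternative: point the disk device at the host path
# where the pool is already mounted, instead of the cephfs: source
lxc config device add tv settings disk \
    source=/var/lib/lxd/storage-pools/pool1/settings \
    path=/etc/settings
```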
The ceph.conf I'm using (full file on Pastebin) starts with:
[global]
# specify cluster network for monitoring
cluster network = %s/24
# s
Thank you,
Nuno