CephFS - "Storage volume not found" error?

I am trying to learn to use CephFS, and when I attach the profile with the CephFS volume to an instance I get this error:
Error: Failed add validation for device “foo”: Failed loading custom volume: Storage volume not found

What might I be missing? I think I am close …

I am using Ceph installed via cephadm and Docker. Ceph itself is working; I can already create containers with it.
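
For context, the RBD side that already works is just a regular ceph-driver pool, set up roughly like this (illustrative only, I did not keep the exact commands; the remote name matches the OSD pool in the ceph osd pool ls output below):

lxc storage create remote ceph
lxc launch ubuntu:22.04 c1 --storage remote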

snap set lxd ceph.external=true
systemctl reload snap.lxd.daemon 
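
If it helps, the snap option can be read back to confirm it took effect, for example:

snap get lxd ceph.external    # should print true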

ceph osd pool create persist-cephfs_data 8
ceph osd pool create persist-cephfs_metadata 8
ceph fs new persist-cephfs persist-cephfs_metadata persist-cephfs_data
ceph fs set persist-cephfs allow_new_snaps true
ceph orch apply mds persist-cephfs --placement="3 v1 v2 v3"

lxc storage create remotefs cephfs source=persist-cephfs/foo --target=v1
lxc storage create remotefs cephfs source=persist-cephfs/foo --target=v2
lxc storage create remotefs cephfs source=persist-cephfs/foo --target=v3
lxc storage create remotefs cephfs 
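
At this point the storage pool itself exists; something like the following confirms it (shown for illustration, I did not capture its output):

lxc storage show remotefs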

lxc profile create foo
Profile foo created
lxc profile device add foo foo disk pool=remote-fs source=foo path=/foo
Device foo added to foo
lxc profile add c1 foo
Error: Failed add validation for device "foo": Failed loading custom volume: Storage volume not found

ceph -s
  cluster:
    id:     21c5bdf8-8f98-11ed-8097-2fa892f7752f
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum v1,v2,v3 (age 3h)
    mgr: v1.efyabb(active, since 3h), standbys: v2.cplcdk
    mds: 1/1 daemons up, 2 standby
    osd: 9 osds: 9 up (since 3h), 9 in (since 3h)
 
  data:
    volumes: 1/1 healthy
    pools:   5 pools, 105 pgs
    objects: 300 objects, 961 MiB
    usage:   3.2 GiB used, 897 GiB / 900 GiB avail
    pgs:     105 active+clean


ceph fs status
persist-cephfs - 3 clients
==============
RANK  STATE             MDS                ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  persist-cephfs.v3.wgqopp  Reqs:    0 /s    16     16     15      9   
          POOL             TYPE     USED  AVAIL  
persist-cephfs_metadata  metadata   336k   283G  
  persist-cephfs_data      data       0    283G  
      STANDBY MDS         
persist-cephfs.v1.whxauz  
persist-cephfs.v2.szaiwo  
MDS version: ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)


ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME      STATUS  REWEIGHT  PRI-AFF
-1         0.87918  root default                            
-7         0.29306      host v1                             
 2    hdd  0.09769          osd.2      up   1.00000  1.00000
 5    hdd  0.09769          osd.5      up   1.00000  1.00000
 8    hdd  0.09769          osd.8      up   1.00000  1.00000
-5         0.29306      host v2                             
 0    hdd  0.09769          osd.0      up   1.00000  1.00000
 3    hdd  0.09769          osd.3      up   1.00000  1.00000
 6    hdd  0.09769          osd.6      up   1.00000  1.00000
-3         0.29306      host v3                             
 1    hdd  0.09769          osd.1      up   1.00000  1.00000
 4    hdd  0.09769          osd.4      up   1.00000  1.00000
 7    hdd  0.09769          osd.7      up   1.00000  1.00000


ceph osd pool ls
lxd
.mgr
persist-cephfs_data
persist-cephfs_metadata
remote


 lxc version
Client version: 5.9
Server version: 5.9

You have created a storage pool, but no custom volume in that pool. That is why you cannot attach it to a profile (and instances).
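
You can see that with a quick listing, which will show no custom volume named foo on the pool:

lxc storage volume list remotefs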

Also, the pool name you’re using in the lxc profile device add command is remote-fs, but the pool you created is named remotefs. These must match.
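
You can check the exact pool name with:

lxc storage list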

Try:

lxc storage volume create remotefs foo
lxc profile device add foo foo disk pool=remotefs source=foo path=/foo
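
After that, the profile attach should pass validation; to confirm the CephFS volume actually ends up mounted in the instance (assuming c1 is running), something like:

lxc profile add c1 foo
lxc exec c1 -- df -h /foo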

That worked … thank you … I missed that step in the videos I had been watching … or maybe it was not there … either way this works.
