Trouble adding custom block disk device

Hi all,

I will start this post by describing what I am trying to do, as well as my environment.
I have read:
https://linuxcontainers.org/lxd/docs/latest/reference/devices_disk/
https://linuxcontainers.org/lxd/docs/latest/howto/storage_volumes/
https://linuxcontainers.org/lxd/docs/latest/reference/devices_unix_block/
I have a three-node LXD cluster with a Ceph storage pool initialised via MicroCeph and set up in LXD the following way:

mother@infra1:~$ lxc storage create remote ceph --target infra1
Storage pool remote pending on member infra1
mother@infra1:~$ lxc storage create remote ceph --target infra2
Storage pool remote pending on member infra2
mother@infra1:~$ lxc storage create remote ceph --target infra3
Storage pool remote pending on member infra3
mother@infra1:~$ lxc storage create remote ceph ceph.cluster_name=ceph ceph.osd.pool_name=lxd
Storage pool remote created
mother@infra1:~$ lxc staorage ls
 Error: unknown command "staorage" for "lxc"

Did you mean this?
	storage

mother@infra1:~$  lxc storage ls
+--------+--------+-------------+---------+---------+
|  NAME  | DRIVER | DESCRIPTION | USED BY |  STATE  |
+--------+--------+-------------+---------+---------+
| local  | zfs    |             | 1       | CREATED |
+--------+--------+-------------+---------+---------+
| remote | ceph   |             | 0       | CREATED |
+--------+--------+-------------+---------+---------+

I would like to add a custom storage volume from that pool to one of my containers/VMs at a custom path.
I am trying to deploy the Juju OpenStack charms, and ideally I would like to add a block device to a container with ceph-osd installed so that Ceph can use that block device (I read the unix-block documentation, but I do not have a host path for the Ceph custom volume that I could use as source).
I do not mind running the ceph-osd units as VMs if the first option is currently impossible. I assume one has to create a custom volume with --type block and then attach it to the VM (as described in the how-to on managing storage volumes). I also assume that lxc config device add and lxc storage volume attach do basically the same thing (if not, @tomp please correct me :wink: )
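Concretely, the workflow I am assuming looks like this (just a sketch based on the how-to above; `myvm` is a placeholder instance name):

```shell
# Create a custom block volume in the ceph-backed pool:
lxc storage volume create remote cephdrive --type block size=60GB

# Attach it to the VM. Block volumes take no path= option and should
# show up inside the guest as an extra disk (e.g. /dev/sdb):
lxc storage volume attach remote cephdrive myvm

# What I assume is the equivalent device-level form:
lxc config device add myvm cephdrive disk pool=remote source=cephdrive
```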

Here is what I tried so far:

mother@infra1:~$ lxc storage volume create remote cephdrive size=55GB
Storage volume cephdrive created
mother@infra1:~$ lxc config device add juju-51f46b-23 cephdrive disk pool=remote source=cephdrive path=/srv/osd
Error: Failed to start device "cephdrive": Cannot attach directory while instance is running
mother@infra1:~$ lxc stop juju-51f46b-23
mother@infra1:~$ lxc config device add juju-51f46b-23 cephdrive disk pool=remote source=cephdrive path=/srv/osd
Device cephdrive added to juju-51f46b-23
mother@infra1:~$ lxc start juju-51f46b-23
mother@infra1:~$ lxc stop juju-51f46b-23
mother@infra1:~$ lxc config device remove juju-51f46b-23 cephdrive
Device cephdrive removed from juju-51f46b-23
mother@infra1:~$ lxc storage volume delete remote cephdrive
Storage volume cephdrive deleted
mother@infra1:~$ lxc storage volume create remote cephdrive size=60GB --type block
Storage volume cephdrive created
mother@infra1:~$ lxc storage volume attach remote cephdrive juju-51f46b-23 ceph-block
Error: Failed add validation for device "cephdrive": Custom block volumes cannot have a path defined
mother@infra1:~$ lxc storage volume attach remote cephdrive juju-51f46b-23
mother@infra1:~$ lxc start juju-51f46b-23
Error: Failed setting up device via monitor: Failed adding block device for disk device "cephdrive": Failed adding block device: error reading header from custom_juju_cephdrive.block: No such file or directory
Try `lxc info --show-log juju-51f46b-23` for more info
mother@infra1:~$ lxc config device remove juju-51f46b-23 cephdrive
Device cephdrive removed from juju-51f46b-23
mother@infra1:~$ lxc config device add juju-51f46b-23 cephdrive_man disk source=ceph:remote/cephdrive ceph.user_name=lxd ceph.cluster_name=lxd /srv/osd
Error: No value found in "/srv/osd"
mother@infra1:~$ lxc config device add juju-51f46b-23 cephdrive_man disk source=ceph:remote/cephdrive ceph.user_name=lxd ceph.cluster_name=lxd
Device cephdrive_man added to juju-51f46b-23
mother@infra1:~$ lxc start juju-51f46b-23
Error: Failed setting up disk device "cephdrive_man": Failed to open "/etc/ceph/lxd.conf": open /etc/ceph/lxd.conf: no such file or directory
Try `lxc info --show-log juju-51f46b-23` for more info
mother@infra1:~$ lxc config device add juju-51f46b-23 cephdrive disk source=ceph:remote/cephdrive
Device cephdrive added to juju-51f46b-23
mother@infra1:~$ lxc start juju-51f46b-23
Error: Failed setting up device via monitor: Failed adding block device for disk device "cephdrive": Failed adding block device: error opening pool remote: No such file or directory

Then I tried another way:

mother@infra1:~$ lxc config device add juju-51f46b-23 cephdrive disk source=ceph:remote/cephdrive ceph.cluster_name=lxd ceph.user_name=lxd path=/srv/osd
Device cephdrive added to juju-51f46b-23
mother@infra1:~$ lxc start juju-51f46b-23
Error: Failed setting up disk device "cephdrive": Failed to open "/etc/ceph/lxd.conf": open /etc/ceph/lxd.conf: no such file or directory
Try `lxc info --show-log juju-51f46b-23` for more info
mother@infra1:~$ lxc config device remove juju-51f46b-23 cephdrive
Device cephdrive removed from juju-51f46b-23
mother@infra1:~$ lxc config device add juju-51f46b-23 cephdrive disk source=ceph:remote/cephdrive path=/srv/osd
Device cephdrive added to juju-51f46b-23
mother@infra1:~$ lxc start juju-51f46b-23
Error: Failed setting up device via monitor: Failed adding block device for disk device "cephdrive": Failed adding block device: error opening pool remote: No such file or directory
Try `lxc info --show-log juju-51f46b-23` for more info
mother@infra1:~$ lxc config device add juju-51f46b-23 cephf disk source=ceph:lxd ceph.user_name=lxd ceph.cluster_name=ceph path=/srv/osd
Error: Invalid devices: Device validation failed for "cephdrive": More than one disk device uses the same path "/srv/osd"
mother@infra1:~$ lxc config device remove juju-51f46b-23 cephdrive
Device cephdrive removed from juju-51f46b-23
mother@infra1:~$ lxc config device add juju-51f46b-23 cephf disk source=ceph:lxd ceph.user_name=lxd ceph.cluster_name=ceph path=/srv/osd
Device cephf added to juju-51f46b-23
mother@infra1:~$ lxc start juju-51f46b-23
Error: Put "https://10.10.11.11:8443/1.0/instances/maas-lxd-test-2?project=maas": read tcp 10.10.11.12:37086->10.10.11.11:8443: read: connection reset by peer

Am I doing something wrong?
Currently the only way I am able to add the disk and start the machine is to create the custom volume as a filesystem, not a block volume, but that does not meet the OSD charm's requirements.
@stgraber is it related to the specifics of the microceph snap?
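For comparison, this is the filesystem-volume variant that does start for me (a sketch; `cephfs-vol` is a placeholder name). It mounts at a path instead of exposing a raw block device, which is why it does not satisfy the OSD charm:

```shell
# Filesystem (default) custom volume: attaches at a path and the
# instance starts fine, but the guest sees a mounted filesystem at
# /srv/osd rather than the raw block device the OSD charm wants.
lxc storage volume create remote cephfs-vol size=55GB
lxc config device add juju-51f46b-23 cephfs-vol disk pool=remote source=cephfs-vol path=/srv/osd
```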

As always thank you very much for any piece of advice and the patience :slight_smile:

Mateusz

Fun fact: I did some testing on a manually created VM, and lxc storage volume attach seems to work just fine with Ceph there :thinking:

mother@infra2:~/profiles$ lxc storage volume create remote cephdrive size=60GB --type block
mother@infra2:~/profiles$ lxc stop maas-ha-test
mother@infra2:~/profiles$ lxc storage volume attach remote cephdrive maas-ha-test path=/srv/osd ceph-block
Error: Invalid devices: Device validation failed for "path=/srv/osd": Name can only contain alphanumeric, forward slash, hyphen, colon, underscore and full stop characters
mother@infra2:~/profiles$ lxc storage volume attach remote cephdrive maas-ha-test path=/srv/osd cblock
Error: Invalid devices: Device validation failed for "path=/srv/osd": Name can only contain alphanumeric, forward slash, hyphen, colon, underscore and full stop characters
mother@infra2:~/profiles$ lxc device add maas-ha-test cblock disk pool=remote source=cephdrive path=/srv/osd
Error: unknown command "device" for "lxc"
mother@infra2:~/profiles$ lxc config device add maas-ha-test cblock disk pool=remote source=cephdrive path=/srv/osd
Error: Failed add validation for device "cblock": Custom block volumes cannot have a path defined
mother@infra2:~/profiles$ lxc storage volume attach remote cephdrive maas-ha-test cblock
Error: Failed add validation for device "cephdrive": Custom block volumes cannot have a path defined
mother@infra2:~/profiles$ lxc storage volume attach remote cephdrive maas-ha-test
mother@infra2:~/profiles$ lxc start maas-ha-test
mother@infra2:~/profiles$ lxc shell maas-ha-test
root@maas-ha-test:~# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda      8:0    0 18.6G  0 disk
├─sda1   8:1    0  100M  0 part /boot/efi
└─sda2   8:2    0 18.5G  0 part /
sdb      8:16   0 55.9G  0 disk
root@maas-ha-test:~# exit
mother@infra2:~/profiles$ lxc stop maas-ha-test
mother@infra2:~/profiles$ lxc storage volume detach cephdrive maas-ha-test
mother@infra2:~/profiles$ lxc storage volume detach remote cephdrive maas-ha-test
mother@infra2:~/profiles$ lxc config device add maas-ha-test cblock disk pool=remote source=cephdrive
Device cblock added to maas-ha-test
mother@infra2:~/profiles$ lxc start maas-ha-test

mother@infra2:~/profiles$ lxc config show -e maas-ha-test
root@maas-ha-test:~# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda      8:0    0 18.6G  0 disk
├─sda1   8:1    0  100M  0 part /boot/efi
└─sda2   8:2    0 18.5G  0 part /
sdb      8:16   0 55.9G  0 disk
root@maas-ha-test:~# exit

But on the Juju-generated VM it still does not work:

mother@infra1:~$ lxc config device add juju-51f46b-23 cephf disk source=ceph:lxd ceph.user_name=lxd ceph.cluster_name=ceph path=/srv/osd
Error: Invalid devices: Device validation failed for "cephdrive": More than one disk device uses the same path "/srv/osd"
mother@infra1:~$ lxc config device remove juju-51f46b-23 cephdrive
Device cephdrive removed from juju-51f46b-23
mother@infra1:~$ lxc config device add juju-51f46b-23 cephf disk source=ceph:lxd ceph.user_name=lxd ceph.cluster_name=ceph path=/srv/osd
Device cephf added to juju-51f46b-23
mother@infra1:~$ lxc start juju-51f46b-23

The last command completely broke Ceph, making it unusable and forcing me to reinstall the cluster.
I used the occasion to try a different configuration:

Error: Failed to run: ceph --name client.admin --cluster lxd osd pool create lxd 32: exit status 1 (Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)'))

@stgraber and @tomp, does MicroCeph use a fixed ceph.cluster_name?

Hi,

Is this still a valid issue? I'm struggling to ascertain what you're trying to do and what the actual problem is.

Are you just trying to attach a custom ceph block volume to a VM?

The problem has been resolved with the LXD 5.13 release, so as of now the issue can be considered resolved.

What I was trying to do was deploy the OpenStack bundle. All machines except the Ceph OSD machines were containers. I wanted to deploy two Ceph OSD VMs and then attach some block devices to them in order to test OpenStack, since Juju does not support adding storage from a Ceph pool.

The errors I encountered were the result of that.
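The setup described above can be sketched as follows (assumed VM names; one custom block volume per OSD VM, attached without path= so it appears to the guest as a raw disk):

```shell
# Sketch only: ceph-osd-1/ceph-osd-2 are placeholder VM names.
for vm in ceph-osd-1 ceph-osd-2; do
    lxc storage volume create remote "${vm}-osd" --type block size=60GB
    lxc storage volume attach remote "${vm}-osd" "$vm"
done
```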
