I have created an LXD cluster with a btrfs backend, but when I try to resize it using:
sudo btrfs fi resize max /dev/sdb
ERROR: resize works on mounted filesystems and accepts only
directories as argument. Passing file containing a btrfs image
would resize the underlying filesystem instead of the image.
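From the error I gather that resize wants the mount point of the filesystem rather than the block device, so presumably something like the following, where the path is just a placeholder since I don't know where LXD mounted the pool:
sudo btrfs filesystem resize max /path/where/the/pool/is/mounted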
This is the output of:
sudo btrfs fi show
Label: 'local' uuid: xxxxxxxxxxxxxxxxx
Total devices 1 FS bytes used 328.54GiB
devid 1 size 500.00GiB used 339.01GiB path /dev/sdb
The current capacity of the storage is 500GiB, but the underlying disk is 1TB.
I did not mount it myself; I gave /dev/sdb as the device when installing LXD. The storage pool name is local.
It should be mounted automatically by LXD, but I can't find the exact path, since containers and images use subvolumes.
If using the deb, it should be /var/lib/lxd/storage-pools/default; if using the snap, it'll be /var/snap/lxd/common/mntns/var/snap/lxd/common/lxd/storage-pools/default
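A quick way to check whether a given path really is the mounted pool is to ask findmnt about it, for example (deb path shown; with the snap the mount may not be visible from the host's mount namespace at all):
findmnt /var/lib/lxd/storage-pools/default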
sudo btrfs fi show /var/snap/lxd/common/mntns/var/snap/lxd/common/lxd/storage-pools/local
ERROR: not a valid btrfs filesystem: /var/snap/lxd/common/lxd/storage-pools/local
When I set up the cluster I used btrfs. When I run the subvolume command, the following is the list:
btrfs subvolume list -a /var/snap/lxd/common/mntns/var/snap/lxd/common/lxd/storage-pools/local
ID 257 gen 427271 top level 5 path containers
ID 258 gen 11072 top level 5 path containers-snapshots
ID 259 gen 426628 top level 5 path images
ID 260 gen 11 top level 5 path custom
ID 348 gen 3790 top level 259 path <FS_TREE>/images/60d0f6301992afad6d64fffb2810c9607c6c5a2d3a5a2f62a20c8d928e2b0e9d
ID 406 gen 8527 top level 259 path <FS_TREE>/images/3cbfb80e533b3c176af3145549b11897a40786141f0482f4c445284acd7e875d
ID 434 gen 24284 top level 259 path <FS_TREE>/images/9879a79ac2b208c05af769089f0a6c3cbea8529571e056c82e96f1468cd1f610
ID 441 gen 25511 top level 259 path <FS_TREE>/images/0d8384b28bbcb391c5df13dcaad1dfb3d1f3f72eac1f45174524ce56939f5de1
ID 442 gen 218969 top level 257 path <FS_TREE>/containers/haproxy-8
ID 448 gen 474998 top level 257 path <FS_TREE>/containers/master-monitor-1
ID 509 gen 474998 top level 257 path <FS_TREE>/containers/lfs-master
ID 522 gen 199420 top level 259 path <FS_TREE>/images/7e8633da9dfc800230c7330cf04e9f284e82e26ddbc1757448c29c25db80f1e4
ID 531 gen 241595 top level 259 path <FS_TREE>/images/bbb592c417b69ff8eac82df58ceeace2b4f58c09339e7ffc019a5069928648da
ID 538 gen 245506 top level 259 path <FS_TREE>/images/c76b5028c7566d4f3488c4ae26ea9f5794b24d6347e1849ce945baa6ae769016
ID 585 gen 474991 top level 257 path <FS_TREE>/containers/test
ID 590 gen 369887 top level 259 path <FS_TREE>/images/c395a7105278712478ec1dbfaab1865593fc11292f99afe01d5b94f1c34a9a3a
ID 805 gen 426609 top level 259 path <FS_TREE>/images/d72ae2e5073f20450c5260e6f227484c23452a46c6bb553ffe6be55e48602bb4
ID 808 gen 426628 top level 259 path <FS_TREE>/images/776ad97a370ccd1d9a4cc211e8897e2da5d0fb7c6f8c0ab22f8d112f9fea4cea
For the record, this is working fine with snap lxd 3.10
sudo nsenter -t $(pgrep daemon.start) -m -- /snap/lxd/current/bin/btrfs fi show /var/snap/lxd/common/lxd/storage-pools/default
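In theory the same namespace trick could be pointed at the resize itself, something like the following (untested on my side; the pool name default is just the example from above, substitute your own):
sudo nsenter -t $(pgrep daemon.start) -m -- /snap/lxd/current/bin/btrfs filesystem resize max /var/snap/lxd/common/lxd/storage-pools/default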
I solved the problem myself by stopping the LXD services (sudo snap stop lxd), but this may be a way to work around this limitation. Trying it just now to resize… no, it does not work; btrfs says it resizes, but it doesn't, as always.
It's not pretty. I have not found a way to do it without rebooting. The best I have found is to stop LXD, expand the file with truncate -s, restart the computer, stop LXD again, then run btrfs filesystem resize max, and that seems to work more or less reliably. If I do it immediately after resizing the file, it does not work at all; the new size is not used.
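Roughly the sequence I mean, as a sketch; the image path, pool name and target size are examples (with the snap the sparse file normally lives under /var/snap/lxd/common/lxd/disks/), and the last step has to point at wherever the filesystem ends up mounted:
sudo snap stop lxd                                               # make sure LXD is not using the pool
sudo truncate -s 1T /var/snap/lxd/common/lxd/disks/default.img   # grow the sparse backing file
sudo reboot                                                      # the new size was not picked up without a reboot
sudo snap stop lxd                                               # stop LXD again once the machine is back up
sudo btrfs filesystem resize max /path/where/the/pool/is/mounted # resize against the mount point, not the image file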
I can understand why using a dedicated device is the recommended option. I used one for all my installs after the first one, where I picked the default option, and I never had any problem like that. I just tolerate this one for now, but when I install everything on a new disk I will get rid of it and use dedicated devices everywhere. The sparse file is the default and it's pretty easy for beginners to set up, but afterwards it's a pain to manage.
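For what it's worth, on a fresh node a pool backed by a dedicated device can be created directly, something like this (the device name is an example, adjust to your disk):
sudo lxc storage create local btrfs source=/dev/sdb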