LXD cluster: storage pool size limited on one node

Hello @all,

I am moving my containers from one server to a new one.
For this purpose I created a cluster and have already moved a few containers from the old server to the new one.
On the new server/node I did not create any configuration such as a storage pool; it took over all settings from the existing (old) server/node.

I use LXD as a snap (version 4.19) with BTRFS on both servers.

I don't use a loop-mounted image; I use the server's filesystem directly.

I don't understand why the entire capacity of the filesystem is available on the old server but only 30GB on the new one?!

How can I change this?


old-server# lxc storage list

+---------+--------+-------------+---------+---------+
|  NAME   | DRIVER | DESCRIPTION | USED BY |  STATE  |
+---------+--------+-------------+---------+---------+
| default | btrfs  |             | 10      | CREATED |
+---------+--------+-------------+---------+---------+

old-server# lxc storage info default
info:
  description: ""
  driver: btrfs
  name: default
  space used: 1.95TB
  total space: 2.98TB
used by:
  instances:
  profiles:

new-server# lxc storage list

+---------+--------+-------------+---------+---------+
|  NAME   | DRIVER | DESCRIPTION | USED BY |  STATE  |
+---------+--------+-------------+---------+---------+
| default | btrfs  |             | 7       | CREATED |
+---------+--------+-------------+---------+---------+

new-server# lxc storage info default
info:
  description: ""
  driver: btrfs
  name: default
  space used: 19.19GB
  total space: 30.00GB
used by:
  instances:
  profiles:

Thanks Frank

Can you show lxc storage show default --target NAME for both machines where NAME is their name from lxc cluster list?
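For reference, the exact commands would be something like this (member names are taken from the shell prompts above; check them against lxc cluster list):

lxc cluster list
lxc storage show default --target old-server
lxc storage show default --target new-server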

Now I see that the new server is using a disk image.
Is it possible to fix that?
I think the storage pool has to be the same on both servers, and I don't see any option to rename a storage pool?!
Since I joined the cluster with the new server during setup (lxd init), I was not even able to create the pool myself.

$ lxc storage show default --target new-server
config:
  size: 30GB
  source: /var/snap/lxd/common/lxd/disks/default.img
description: ""
name: default
driver: btrfs
used_by:

and

$ lxc storage show default --target old-server
config:
  source: /var/snap/lxd/common/lxd/storage-pools/default
  volatile.initial_source: /var/lib/lxd/storage-pools/default
description: ""
name: default
driver: btrfs
used_by:

So there are three "clean" ways to do this:

  1. Remove new-server from the cluster and join it back, making sure during the join to set the source property of your default pool to /var/snap/lxd/common/lxd/storage-pools/default (see the sketch after this list).
  2. Remove the pool from the entire cluster and add it back with the correct source= on both systems (unlikely to be an option for you since you have stuff on it).
  3. Manually reconfigure new-server with some DB mangling and restart. For that I'd need at least stat -f /var/snap/lxd/common/lxd/ on new-server to check that it would work.
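A rough sketch of option 1, assuming the instances on new-server are moved back to old-server first (the instance name, the token handling and the reset step are illustrative, not exact values):

# on an existing member: move instances off, remove the node, create a fresh join token
lxc move c1 --target old-server      # repeat for each (stopped) instance still on new-server
lxc cluster remove new-server
lxc cluster add new-server           # prints a join token for re-joining
# (new-server's local LXD state may need to be reset/reinstalled before it can join again)

# on new-server: re-run the interactive init, answer "yes" to joining an existing
# cluster, paste the token, and when asked
#   Choose "source" property for storage pool "default":
# answer with a path on the host's btrfs filesystem, e.g.
#   /var/snap/lxd/common/lxd/storage-pools/default
lxd init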

Hi Stéphane,

As you have seen, it was difficult for me to recognize that a disk image was being used. In single-instance mode it was easier, using "lxc storage info NAME" and looking at the "source" column.

I currently have containers on both nodes because I'm moving from my old server to the new one.

Therefore, I first decided to expand the disk file:

# truncate -s +200G /var/snap/lxd/common/lxd/disks/default.img

I did not manage to expand the BTRFS from "inside":

# nsenter --mount=/run/snapd/ns/lxd.mnt /snap/lxd/current/bin/btrfs filesystem resize max /var/snap/lxd/common/lxd/storage-pools/default
Resize '/var/snap/lxd/common/lxd/storage-pools/default' of 'max'

But the available size was still the old one.
Then I mounted the disk image via a loop device (see Resize LXD BTRFS storage space - #9 by gpatel-fr) and resized it there. Not nice, but effective.
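For anyone hitting the same limit, roughly what that loop-device detour looks like (device name and mount point are placeholders; the loop device has to be told about the new capacity of the backing file before btrfs can grow):

# find which loop device currently backs the image
losetup -j /var/snap/lxd/common/lxd/disks/default.img

# refresh the capacity of that loop device (replace loop0 with the device found above)
losetup -c /dev/loop0

# mount the btrfs filesystem at a temporary location and grow it to fill the device
mkdir -p /mnt/lxd-default
mount /dev/loop0 /mnt/lxd-default
btrfs filesystem resize max /mnt/lxd-default
umount /mnt/lxd-default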

I now have enough space to complete the migration.

I then plan to dissolve the cluster and create a new pool without a disk image.
After moving everything to the new pool, I can delete the disk image.
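In case it helps someone else, a rough sketch of that last step, with pool, path and container names as placeholders (a host-path btrfs source instead of a loop file; not verified against this exact setup):

# create a btrfs subvolume on the host filesystem and a new pool on top of it
btrfs subvolume create /srv/lxd-pool2
lxc storage create pool2 btrfs source=/srv/lxd-pool2

# copy each container onto the new pool, then swap the names and clean up
lxc stop c1
lxc copy c1 c1-new --storage pool2
lxc delete c1
lxc move c1-new c1
lxc start c1

# once nothing references the old pool any more, it (and its disk image) can go
lxc storage delete default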

This behavior is not optimal(!) and I suggest that the initialization be changed so that the user can choose how the pool is created.

Do you have any ideas or suggestions?

Thanks Frank

So in a cluster, the storage pool config is split between “global” and member “local” config keys.

When doing a straight lxc storage show <pool> you will only see the global config keys.
However, the locations field will show you the cluster members that the pool exists on.

You can then run lxc storage show <pool> --target=<member> to see the local config keys for that member.

E.g.

lxc storage show local
config: {}
description: ""
name: local
driver: dir
used_by:
- /1.0/instances/c1
- /1.0/profiles/default
status: Created
locations:
- v2
- v1
lxc storage show local --target=v2
config:
  source: /var/snap/lxd/common/lxd/storage-pools/local
description: ""
name: local
driver: dir
used_by:
- /1.0/instances/vs
- /1.0/profiles/default
status: Created
locations:
- v2
- v1

When joining a new server to the cluster during lxd init, there is a stage where it asks you to populate the local config for the joining member, e.g.

lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this node? [default=10.250.73.17]: 
Are you joining an existing cluster? (yes/no) [default=no]: yes
Do you have a join token? (yes/no/[token]) [default=no]: eyJzZXJ2ZXJfbmFtZSI6InYzIiwiZmluZ2VycHJpbnQiOiIwMDFiNTJhNTA3MGExYTEyYjBlMmM2OGRiMTc3NmIxYmIzMzhiMjQwYzRlNGZhMzE3ZjYyNzNmZDQ3ZTg0MzVjIiwiYWRkcmVzc2VzIjpbIjEwLjI1MC43My4xNTo4NDQzIiwiMTAuMjUwLjczLjEzOjg0NDMiXSwic2VjcmV0IjoiMzhiN2Y1Mjg5YzIwNjU1ZGU4MDMzNzM0MDllYzFmMmE5YmZhYzA4YWFiZWY3MDY4MzU1ZmU1YTMwODIwNTY1MyJ9
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "source" property for storage pool "dir2": /root/dir2
Choose "source" property for storage pool "local": /root/local
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
To start your first instance, try: lxc launch ubuntu:20.04

lxc storage show local
config: {}
description: ""
name: local
driver: dir
used_by:
- /1.0/profiles/default
status: Created
locations:
- v1
- v2
- v3
lxc storage show local --target=v3
config:
  source: /root/local
description: ""
name: local
driver: dir
used_by:
- /1.0/profiles/default
status: Created
locations:
- v1
- v2
- v3
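For completeness, the same per-member source answer can also be given non-interactively in a cluster-join preseed; the relevant fragment would look roughly like this (the surrounding join fields such as the cluster address, certificate and token are omitted, and the values match the example above):

cluster:
  server_name: v3
  enabled: true
  member_config:
  - entity: storage-pool
    name: local
    key: source
    value: /root/local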