Resize a container on btrfs, lxc 4.x and Ubuntu 18.04

Hi,

I’m getting a “no space left on device” error on a container.

My configuration

LXD: 4.7
SNAP: 2.47.1
Ubuntu: 18.04
Kernel: 4.15.0-118

List of storages

lxc storage list
+---------+-------------+--------+--------------------------------------------+---------+
|  NAME   | DESCRIPTION | DRIVER |                   SOURCE                   | USED BY |
+---------+-------------+--------+--------------------------------------------+---------+
| default |             | btrfs  | /var/snap/lxd/common/lxd/disks/default.img | 16      |
+---------+-------------+--------+--------------------------------------------+---------+

Information on default storage

lxc storage info default
info:
  description: ""
  driver: btrfs
  name: default
  space used: 483.61GB
  total space: 552.98GB
used by:
  images:
  - 39a938a93d8df472792748ceeb6065c0939cd7af7f4958eb8f027bb650e263f3
  - 436efb69853b4edd9eabef171c971c8e53c37bc34fb038140e85a5f2b4c17d46
  - 9020e74039c6a0e92458bd5f40686bca4d120090e0b7800e5a9fe84c66e31910
  - b789b81c7261b971e45b904c372b19b9a245172a50c6b88780554efbc582dab6
  instances:
  - bug
  - furet
  - grizzly
  - hermine
  - gitlab
  - gitlab-runner
  - gitlab-runner-test
  - haproxy
  - opkg-repo
  - opkg-repo-next
  - yocto-build
  profiles:
  - default

Disk information

sudo df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            7.8G     0  7.8G   0% /dev
tmpfs           1.6G  2.8M  1.6G   1% /run
/dev/md0        916G  577G  293G  67% /
tmpfs           7.9G     0  7.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/loop0       70M   70M     0 100% /snap/lxd/17936
/dev/loop1       98M   98M     0 100% /snap/core/10185
/dev/loop2       55M   55M     0 100% /snap/core18/1880
/dev/loop3       56M   56M     0 100% /snap/core18/1885
/dev/loop4       69M   69M     0 100% /snap/lxd/17886
/dev/loop5       98M   98M     0 100% /snap/core/10126
/dev/md127      916G   77M  870G   1% /media/lxd_storage
tmpfs           1.0M     0  1.0M   0% /var/snap/lxd/common/ns
tmpfs           1.6G     0  1.6G   0% /run/user/1001
tmpfs           1.6G     0  1.6G   0% /run/user/1003
/dev/loop6      515G  451G   58G  89% /mnt/test

/media/lxd_storage: this is a second disk we added in order to create a new storage pool and isolate some containers (bug, furet, grizzly, hermine), but we have not had time to do it yet and, honestly, we are not 100% confident about doing it safely.

Here are my questions

  1. Is it easier to use zfs than btrfs? A simple storage resize seems complicated with btrfs.

  2. I read somewhere that there seem to be some problems with snap and btrfs. What are they?

  3. Is it better to reinstall LXD using apt, or is the snap the way to go?

  4. Just to validate that I understand how storage pools work: all the containers share the space of the storage pool? So if I have a 1GB storage pool and the first container takes 900MB, there will be 100MB of space left for the second container?

  5. My default storage pool currently has a size of 552.98GB. How do I proceed to increase its size by 100GB? Everything I found on Google for btrfs+snap did not work for me. Example of what I’ve done:

    truncate -s +100G /var/snap/lxd/common/lxd/disks/default.img   # grow the backing loop file by 100GB
    mkdir /mnt/test
    mount -t btrfs /var/snap/lxd/common/lxd/disks/default.img /mnt/test
    sudo btrfs filesystem resize max /mnt/test/                    # grow the filesystem to fill the enlarged file
    reboot
    
  6. I would like to create a second storage pool on /media/lxd_storage to move some of the existing containers there. How should I proceed? This is our engineering server, so I would like to limit the downtime if possible.

Best regards,

  1. In this case, it would behave much the same way.
  2. Nope, no known problems with btrfs and the snap
  3. The only version of LXD you’ll get with the deb is 3.0, anything more recent on Ubuntu is going to be the snap
  4. That’s right, your containers are just files on a shared filesystem. You can set quotas on a per-container basis though (which won’t be visible in the container but will prevent exceeding them); see the example at the end of this reply.
  5. Those instructions look plausibly correct.
  6. LXD does not allow loop-mounted pools outside of /var/snap/lxd/common/lxd/disks. So either /media/lxd_storage is a btrfs filesystem, in which case LXD can create a subvolume on it, or the more supported option would be to unmount /media/lxd_storage, wipe the partition clean and pass that raw partition for LXD to put btrfs on.

Effectively:

  • umount /media/lxd_storage
  • dd if=/dev/zero of=/dev/md127 bs=4M count=10
  • lxc storage create lxd_storage btrfs source=/dev/md127

As for growing your existing one, what’s the current size according to ls -lh /var/snap/lxd/common/lxd/disks/default.img? Your df output above suggests it has 58GB of free space and a total size of 515GB.
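
On the quota question in point 4, a minimal example of a per-container limit (using “bug” from your instance list purely as an illustration, with an arbitrary size) would be:

  • lxc config device override bug root size=20GB

That copies the profile-inherited root disk device onto the instance and caps it at 20GB; if the instance already has its own root device, lxc config device set bug root size 20GB does the same thing.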

@stgraber: Thank you so much for these excellent answers. By the way, we already talked together a couple of months ago. I’m near Montréal :wink:

If I delete 100MB of files out of the 900MB used by the first container, I imagine that the space is returned to the storage pool? In that case, the second container could consume the 200MB?

Do I understand correctly that, because LXD has been installed from the snap, it cannot use/create loop-backed pools outside of the snap’s data directory?

ls -lh /var/snap/lxd/common/lxd/disks/default.img
-rw------- 1 root root 525G Oct 21 13:46 /var/snap/lxd/common/lxd/disks/default.img

Will those commands increase the size of the default storage pool by adding the new SSD2 (/dev/md127, 916GB) to it?

Also, if I understood correctly, doing that will give me more space, but the existing containers will probably be left where they are on the current storage pool (SSD1). In other words, the storage pool will be responsible for using the free space as it sees fit.

Okay, so it looks like btrfs is properly seeing your current loop file and its size.
You could do the truncate+resize trick again to grow it further.
The reason it ran out of space despite having around 10% free is how btrfs internally splits data and metadata allocations. You could run btrfs balance start /mnt/test/, which may improve that split and free up some more space.
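
As a sketch, reusing the loop file path and mount point from earlier in the thread, the grow-and-rebalance sequence would look roughly like this (skip the mount step if /mnt/test is still mounted from your earlier attempt):

  • sudo truncate -s +100G /var/snap/lxd/common/lxd/disks/default.img
  • sudo mount -t btrfs /var/snap/lxd/common/lxd/disks/default.img /mnt/test
  • sudo btrfs filesystem resize max /mnt/test/
  • sudo btrfs balance start -dusage=50 /mnt/test/
  • sudo btrfs filesystem usage /mnt/test/

The -dusage=50 filter only rewrites data chunks that are at most half full, which keeps the balance reasonably quick; btrfs filesystem usage then shows how the space is split between data and metadata.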

As for the new storage, no, it would show up as a completely new LXD storage pool with nothing on it. You can put new containers on it or individually move existing containers to it.
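
For example, creating a fresh instance directly on the new pool (the image alias and instance name here are just placeholders) would be:

  • lxc launch ubuntu:18.04 c-new -s lxd_storage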

It’s technically possible to join an external disk to your existing loop-backed btrfs pool, but I really wouldn’t recommend such a RAID0 setup with btrfs, so creating a second empty pool and moving things over to balance the load is probably your best bet.
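
For reference only, joining the disk to the mounted pool would look roughly like the two commands below, shown purely to illustrate what is being advised against (-f forces overwriting any existing filesystem signature on the device):

  • sudo btrfs device add -f /dev/md127 /mnt/test
  • sudo btrfs balance start /mnt/test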

Ok I understand.

Perfect. Could you advise me on how to move a container from one storage pool to another? I found this procedure, is it right?

lxc stop container_name
lxc move container_name temp_container_name -s new_storage_pool
lxc move temp_container_name container_name
lxc start container_name

Why not simply

lxc stop container_name
lxc move container_name container_name -s new_storage_pool
lxc start container_name

Ref: How to move containers to a new storage pool on the same host

By the way, is there a safer way to move it? I mean stopping it, then copying it, and finally removing it. That way, if something failed, I could restart the old one and there would be no problem.

lxc move is just a client side shortcut for lxc copy combined with lxc delete of the source.

That’s why you can’t do lxc move container_name container_name -s new_storage_pool because you can’t copy the container onto itself.

You can certainly do the lxc copy and lxc delete yourself rather than rely on lxc move to do it for you, if that makes you feel more at ease, though the result should be identical. A failure during the copy stage of lxc move will prevent the delete stage from kicking in, leaving your source container intact.


Ok, I thought this line was moving the container from its own storage pool to a different one, so there would be no conflict? I mean:

lxc stop container_name  # Stop the container
lxc move container_name container_name -s new_storage_pool  # Move the container_name to a new storage pool
lxc start container_name # Start the moved container

lxc move container_name container_name -s new_storage_pool expands to:

  • lxc copy container_name container_name -s new_storage_pool
  • lxc delete container_name

This cannot work as the source and target container names are the same; it will fail telling you the name is already in use. That’s why you need to move to a different name and then rename it back to the original, or, if you want it the long way, that’s:

  • lxc stop container_name
  • lxc copy container_name container_name1 --storage new_storage_pool
  • lxc delete container_name
  • lxc move container_name1 container_name
  • lxc start container_name

That’s equivalent to:

  • lxc stop container_name
  • lxc move container_name container_name1 --storage new_storage_pool
  • lxc move container_name1 container_name
  • lxc start container_name
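
To double-check where the container ended up afterwards, listing the volumes on the target pool (pool name as in the example above) should show it:

  • lxc storage volume list new_storage_pool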

Thank you so much @stgraber.

With your help I succeeded in:

  1. Resizing my btrfs default storage pool
  2. Creating a second storage pool on a second SSD
  3. Moving a couple of existing containers to the new storage pool.

You’ve made my day :smiley:

Hi @stgraber,

I have encountered a problem during the reboot this morning. I got the error

[FAILED] Failed to mount /media/lxd_storage.
...
EXT4-fs (md127): VFS: Can't find ext4 filesystem

So I looked in the fstab and found this line

UUID=726f322e-f792-402b-8c4c-252b2237e3cf /media/lxd_storage   ext4      errors=remount-ro 0       1

Initially, this SSD was formatted as ext4. But if I understand correctly, we destroyed that filesystem with the following line:

dd if=/dev/zero of=/dev/md127 bs=4M count=10

But I’m not sure I understand why we did not create a filesystem on this partition ourselves. Is it because it is handled by btrfs/LXD, and mounting it manually wouldn’t show us much of relevance?

Drop that entry from fstab, you don’t need it anymore and it indeed won’t work.

LXD handles the creation, tracking and mounting of the filesystem on that partition, so no point in messing with it externally.
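
If you want to tidy that up, a small sketch (using the UUID from your fstab line) to comment out the stale entry and confirm what LXD is tracking:

  • sudo cp /etc/fstab /etc/fstab.bak
  • sudo sed -i '/726f322e-f792-402b-8c4c-252b2237e3cf/s/^/#/' /etc/fstab
  • lxc storage show lxd_storage

The sed command only prefixes the matching fstab line with # (after backing the file up), and lxc storage show should list /dev/md127 as the pool’s source.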
