Not enough HDD space within LXD container

Hi, I have a container launched with --vm (so it is a virtual machine). When I push a big file (9.2 GB) into it I get this error:

Error: sftp: "write /home/user/Downloads/xyz.tar.bz2: no space left on device" (SSH_FX_FAILURE)
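
For context, the push command is roughly this (paths as in the error above):

lxc file push xyz.tar.bz2 <container>/home/user/Downloads/xyz.tar.bz2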

The storage driver is ZFS and there is plenty of space on the host; I thought LXD managed space in the container automatically? The transfer fails at 18% of the file. It makes no difference if I move the instance onto a profile with a 20 GB root size, like this:

lxc profile copy default space-is-20gb
lxc profile device set space-is-20gb root size 20GB
lxc profile assign <container> space-is-20gb
lxc restart <container>
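
Side note: to confirm the 20 GB root size from the profile is actually being applied to the instance, the expanded config can be checked, e.g.:

lxc config show <container> --expanded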

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           3.2G  2.5M  3.2G   1% /run
/dev/nvme0n1p2  938G   45G  846G   6% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
/dev/nvme0n1p1  511M  5.3M  506M   2% /boot/efi
tmpfs           3.2G   76K  3.2G   1% /run/user/127
tmpfs           1.0M     0  1.0M   0% /var/snap/lxd/common/ns
tmpfs           3.2G   76K  3.2G   1% /run/user/1000
$ 
$ lxc exec <container> -- df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs            97M  4.4M   93M   5% /run
/dev/sda2       3.8G  3.7G     0 100% /
tmpfs           484M     0  484M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            50M   11M   40M  22% /run/lxd_agent
/dev/sda1        99M  3.2M   96M   4% /boot/efi
$ 
$ lxc --version
5.3
$

Any help much appreciated :smiley:

Did you try "How to resize storage" in the LXD documentation?
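
If I remember that page correctly, for a VM the gist is to set the size on the instance's own root disk device and then start it again, roughly like this (device name "root" assumed, and the VM should be stopped while the disk is grown):

lxc stop <container>
lxc config device override <container> root size=20GiB
lxc start <container>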

Posting my solution in case it helps anybody. Initially I had installed zfs (or zfs-utils) from zfs-fuse, which was a bad move: it couldn't see any pool. Using the correct repo then allowed me to increase the default pool, like this:

sudo truncate -s +20G /var/snap/lxd/common/lxd/disks/default.img
sudo zpool set autoexpand=on default
sudo zpool online -e default /var/snap/lxd/common/lxd/disks/default.img
sudo zpool set autoexpand=off default
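
(The package that provides the working zpool/zfs tools on Ubuntu is zfsutils-linux, not zfs-fuse, which I assume is why nothing showed up at first. To check that the pool actually grew and that LXD sees the new size, something like this works:)

zpool list default
lxc storage info default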

So that expanded my ZFS image file from 30 GB to 50 GB. Non-VM containers then show 22G available with df -h (which is the extra space I just added). However, existing VMs are unaffected, still at 3.8G with no extra space.

root@container$ df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs            97M  844K   96M   1% /run
/dev/sda2       3.8G  2.3G  1.5G  61% /
tmpfs           484M     0  484M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            50M   13M   38M  25% /run/lxd_agent
/dev/sda1        99M  3.2M   96M   4% /boot/efi
tmpfs            97M   24K   97M   1% /run/user/1001
root@container$

I expected /dev/sda2 to be bigger.

root@host$ lxc config device show <container>
root:
  path: /
  pool: default
  size: 22GB
  type: disk
root@host$
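
As far as I can tell, the reason is that LXD only grows the VM's virtual disk; the partition table and filesystem inside the guest are not touched. From inside the VM you can see the mismatch (sda at the new size, sda2 still around 3.9G) with:

lsblk /dev/sda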

The solution was hard to find, but it’s this…

root@container:~$ sudo apt install cloud-initramfs-growroot -y
root@container:~$ sudo growpart /dev/sda 2
CHANGED: partition=2 start=206848 old: size=8181727 end=8388575 new: size=42761871 end=42968719
root@container:~$ sudo resize2fs /dev/sda2
resize2fs 1.46.5 (30-Dec-2021)
Filesystem at /dev/sda2 is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 3
The filesystem on /dev/sda2 is now 5345233 (4k) blocks long.

root@container:~$ 
root@container:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs            97M  848K   96M   1% /run
/dev/sda2        20G  2.3G   18G  12% /
tmpfs           484M     0  484M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            50M   13M   38M  25% /run/lxd_agent
/dev/sda1        99M  3.2M   96M   4% /boot/efi
tmpfs            97M   24K   97M   1% /run/user/1001
root@container:~$
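
One caveat from me: resize2fs only works for ext2/3/4 root filesystems, as used here. If the image had an XFS root instead (not the case above, just for completeness), the equivalent would be roughly:

sudo growpart /dev/sda 2
sudo xfs_growfs /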