Can't start container -> no space left on device

cat /proc/mounts
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev devtmpfs rw,nosuid,relatime,size=2006116k,nr_inodes=501529,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=403892k,mode=755 0 0
/dev/vda1 / ext4 rw,noatime,nodiratime,data=ordered 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/unified cgroup2 rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,name=systemd 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0
cgroup /sys/fs/cgroup/rdma cgroup rw,nosuid,nodev,noexec,relatime,rdma 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=33,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=11172 0 0
mqueue /dev/mqueue mqueue rw,relatime 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,relatime,pagesize=2M 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
configfs /sys/kernel/config configfs rw,relatime 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
/dev/loop0 /snap/core18/2074 squashfs ro,nodev,relatime 0 0
/dev/loop1 /snap/core18/2128 squashfs ro,nodev,relatime 0 0
/dev/loop2 /snap/snapd/12398 squashfs ro,nodev,relatime 0 0
/dev/loop3 /snap/go/7954 squashfs ro,nodev,relatime 0 0
/dev/loop4 /snap/snapd/12704 squashfs ro,nodev,relatime 0 0
lxcfs /var/lib/lxcfs fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
tmpfs /run/user/1000 tmpfs rw,nosuid,nodev,relatime,size=403888k,mode=700,uid=1000,gid=1000 0 0
tmpfs /var/lib/lxd/shmounts tmpfs rw,relatime,size=100k,mode=711 0 0
tmpfs /var/lib/lxd/devlxd tmpfs rw,relatime,size=100k,mode=755 0 0
/dev/loop5 /var/lib/lxd/storage-pools/nextcloud btrfs rw,relatime,space_cache,user_subvol_rm_allowed,subvolid=5,subvol=/ 0 0
binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0

Now I’ve increased /dev/vda1 to 120GB, but I still can’t start the webserver container.
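
(For reference, a typical way to grow a cloud VM's root partition and filesystem looks roughly like the sketch below; the exact steps are an assumption, since the post doesn't say how /dev/vda1 was actually resized.)

# grow partition 1 on /dev/vda to fill the enlarged disk (growpart is from cloud-guest-utils)
growpart /dev/vda 1
# grow the ext4 filesystem to fill the new partition size
resize2fs /dev/vda1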

Discourse says that I’ve reached the maximum daily replies as a new user :smiley: so I probably can’t reply any more.

Reply to the last post:

I’ve rebooted it while increasing /dev/vda1, so now df -h looks like this:

Filesystem      Size  Used Avail Use% Mounted on
udev            2.0G     0  2.0G   0% /dev
tmpfs           395M  944K  394M   1% /run
/dev/vda1       118G   46G   67G  41% /
tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/loop0       56M   56M     0 100% /snap/core18/2074
/dev/loop1       56M   56M     0 100% /snap/core18/2128
/dev/loop2       33M   33M     0 100% /snap/snapd/12398
/dev/loop3       91M   91M     0 100% /snap/go/7954
/dev/loop4       33M   33M     0 100% /snap/snapd/12704
tmpfs           395M     0  395M   0% /run/user/1000
tmpfs           100K     0  100K   0% /var/lib/lxd/shmounts
tmpfs           100K     0  100K   0% /var/lib/lxd/devlxd
/dev/loop5       38G  7.6G   29G  22% /var/lib/lxd/storage-pools/nextcloud
/dev/loop6       40G   22G   19G  54% /var/lib/lxd/storage-pools/webserver

but ncdu on / reports 93GB

and I still can’t start the container :confused:
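
(One hedged way to investigate the mismatch between df and ncdu: by default ncdu also descends into the loop-mounted pool filesystems under /var/lib/lxd/storage-pools, and sparse image files have a larger apparent size than their allocated blocks, so the two checks below may explain part of the difference. The image path assumes the pool images live under /var/lib/lxd/disks/, as shown later in this thread.)

# stay on the / filesystem only, so loop-mounted pools are not double counted
ncdu -x /
# compare apparent size vs actually allocated blocks of the pool image files
ls -lhs /var/lib/lxd/disks/*.img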

So it’s ext4:

/dev/vda1 / ext4 rw,noatime,nodiratime,data=ordered 0 0

Yes, so I think we can agree you’ve used all the disk space on /.

However, I agree it’s odd that df -h isn’t reporting that.
Have you tried rebooting to see if that brings them back into sync?

I had to create a new account here to continue chatting…

Could the problem be in the storage config, which thinks that the storage size is only 17GB?

lxc storage info webserver
info:
  description: ""
  driver: btrfs
  name: webserver
  space used: 22.87GB
  total space: 42.77GB
used by:
  containers:
  - kmfire
  - kmfire
  - mysql1
  - webserver
  - webserver
  images:
  - 9c939141a94bbdcfd29b0b3be6dc135b9d3e8d73b19235f06de4593898e8b960
  - fab57376cf04b817d43804d079321241ce98d3b5c2296f1a41541de6c100ab09
  profiles:
  - default
  - kmfire
  - webserver
lxc storage show webserver
config:
  size: 17GB
  source: /var/lib/lxd/disks/webserver.img
description: ""
name: webserver
driver: btrfs
used_by:
- /1.0/containers/kmfire
- /1.0/containers/kmfire/snapshots/snap0
- /1.0/containers/mysql1
- /1.0/containers/webserver
- /1.0/containers/webserver/snapshots/s1
- /1.0/images/9c939141a94bbdcfd29b0b3be6dc135b9d3e8d73b19235f06de4593898e8b960
- /1.0/images/fab57376cf04b817d43804d079321241ce98d3b5c2296f1a41541de6c100ab09
- /1.0/profiles/default
- /1.0/profiles/kmfire
- /1.0/profiles/webserver
status: Created
locations:
- none

If I try to set the size:

lxc storage set webserver size 40GB
Error: The [size] properties cannot be changed for "btrfs" storage pools

So I have it! I’ve tried everything… rebooting many times, removing snapshots, but it still wouldn’t work… so then I tried increasing the webserver pool again, by 10GB, and now it works… So I am a little bit scared of using LXC if I can’t see the real information about a) the capacity of the whole host and b) the capacity of the storage pool.

Well, this sounds more like an issue with BTRFS and its reporting tooling than with LXD (not LXC, by the way).

How did you “increase webserver pool again, by 10GB”?

LXD doesn’t officially support resizing storage pools themselves (which is why lxc storage set webserver size didn’t work).

As you’re using a loop file as the backing for the BTRFS storage pool, you have several layers at play:

  1. The disk space and disk usage reporting of the ext4 / partition containing the BTRFS image file. This is a sparse file, so it has a fixed apparent size, but its actual disk usage will grow over time (and not be released back to the host OS) up to the size of the disk image file itself. So it may be that whilst the disk image file itself was 40GB in size, df didn’t see all of it in use (but this depends on how BTRFS actually allocates the blocks inside the image file, which isn’t something LXD can control).
  2. The next layer is the BTRFS filesystem inside the loop image file. This has its own “size”, which may be smaller than the maximum size of the loop file itself. It’s possible that at some point the disk image file was grown but not the BTRFS filesystem inside it (the commands below show how to check both layers).
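
As a rough illustration (using the image path and mount point shown earlier in this thread), these commands inspect each layer separately:

# layer 1: apparent size vs blocks actually allocated for the sparse image file
ls -lhs /var/lib/lxd/disks/webserver.img
# layer 2: size and allocation of the BTRFS filesystem inside the loop file
btrfs filesystem usage /var/lib/lxd/storage-pools/webserver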

Because of the sparse loop files, it is also possible for one or more sparse files to exceed the total storage available on the partition where they are stored, but this will only become a problem once the blocks have actually been written.
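
A small demonstration of that overcommit behaviour, using a hypothetical scratch file:

# creating a 200GB sparse file succeeds even if far less space is actually free
truncate -s 200G /tmp/sparse-demo.img
# apparent size is 200G, but almost no blocks are allocated yet
ls -lhs /tmp/sparse-demo.img
# df on the underlying filesystem is unchanged until data is actually written
df -h /tmp
rm /tmp/sparse-demo.img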

We’ve also seen some strange behaviour with BTRFS disk utilisation reporting (such as BTRFS quota is reached when filling up VM disk image file · Issue #9124 · lxc/lxd · GitHub) so in general BTRFS is quite an unusual filesystem in that regard.

Are you specifically looking to use BTRFS, or could you use a different storage backend (such as LVM or ZFS) with more traditional behaviour?

If you do need to use BTRFS then I would be tempted to create a dedicated partition for it, rather than using loop files, and then use that partition as the BTRFS pool source (which will also perform better than a loop file).
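
For example (the device name /dev/vdb1 here is a hypothetical spare partition):

# create a new BTRFS pool backed by a dedicated partition instead of a loop file
lxc storage create webserver-part btrfs source=/dev/vdb1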

OK, I will try to create a new ZFS pool. What is the recommended way to transfer all the data between pools?

How did you “increase webserver pool again, by 10GB”?

The same way as I wrote above, with btrfs.
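
(For reference, a manual sequence for growing a loop-backed BTRFS pool would look roughly like this; the loop device and image path are assumptions, so check them with losetup first.)

# find which loop device is backing the pool image
losetup -j /var/lib/lxd/disks/webserver.img
# grow the sparse backing file by 10GB
truncate -s +10G /var/lib/lxd/disks/webserver.img
# tell the kernel the loop device's capacity changed (loop device assumed to be /dev/loop6)
losetup -c /dev/loop6
# grow the BTRFS filesystem to fill the enlarged device, via its mount point
btrfs filesystem resize max /var/lib/lxd/storage-pools/webserver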

You should be able to move each instance using:

lxc move <instance> -s <new pool>
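
As a rough end-to-end sketch (the pool name and size below are placeholders), moving the webserver container to a new ZFS pool could look like:

# create a new loop-backed ZFS pool (name and size are assumptions)
lxc storage create newpool zfs size=40GB
# stop the container, move it to the new pool, then start it again
lxc stop webserver
lxc move webserver -s newpool
lxc start webserver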