All containers suddenly out of disk space

I now realize that my previous topic was actually describing a symptom of this problem, so I’ve created a new topic.
When I tried to start JupyterLab in a container, I got this error:

Failed to write server-info to /home/ubuntu/.local/share/jupyter/runtime/jpserver-596.json: [Errno 28] No space left on device: '/home/ubuntu/.local/share/jupyter/runtime/jpserver-596.json'

It turns out that not only is there no space available on that container:

ubuntu@ml:~$ df -h
Filesystem             Size  Used Avail Use% Mounted on
default/containers/ml  4.0G  4.0G     0 100% /
none                   492K  4.0K  488K   1% /dev
udev                   6.8G     0  6.8G   0% /dev/tty
tmpfs                  100K     0  100K   0% /dev/lxd
tmpfs                  100K     0  100K   0% /dev/.lxd-mounts
tmpfs                  6.8G     0  6.8G   0% /dev/shm
tmpfs                  1.4G  236K  1.4G   1% /run
tmpfs                  5.0M     0  5.0M   0% /run/lock
tmpfs                  6.8G     0  6.8G   0% /sys/fs/cgroup
tmpfs                  1.4G     0  1.4G   0% /run/user/1000
[1]+  Exit 1                  jupyter-lab

but all my other containers have the same problem. There's no shortage of space on the host system; there are 283GB free there. Is there anything capping the overall space allocated to containers in general?
Thanks,

Can you show zfs list -t all on your host? I suspect your zpool is out of space.
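For example, something like this on the host should show whether the pool itself is full (assuming the ZFS userspace tools, e.g. the zfsutils-linux package, are installed there):

# list every dataset and snapshot with its used and available space
sudo zfs list -t all

# show the overall pool size, allocation and free space
sudo zpool list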

I’m actually not using ZFS:

$ df -T
Filesystem     Type     1K-blocks      Used Available Use% Mounted on
udev           devtmpfs   7090952         0   7090952   0% /dev
tmpfs          tmpfs      1424740      2460   1422280   1% /run
/dev/sda2      ext4     479152840 152938320 301805128  34% /
tmpfs          tmpfs      7123688    207744   6915944   3% /dev/shm
tmpfs          tmpfs         5120         4      5116   1% /run/lock
tmpfs          tmpfs      7123688         0   7123688   0% /sys/fs/cgroup
/dev/loop3     squashfs    101888    101888         0 100% /snap/core/11993
/dev/loop2     squashfs     56832     56832         0 100% /snap/core18/2253
/dev/loop4     squashfs      2048      2048         0 100% /snap/fast/4
/dev/loop6     squashfs    166784    166784         0 100% /snap/gnome-3-28-1804/145
/dev/loop0     squashfs       128       128         0 100% /snap/bare/5
/dev/loop7     squashfs    168832    168832         0 100% /snap/gnome-3-28-1804/161
/dev/loop5     squashfs    142976    142976         0 100% /snap/skype/190
/dev/loop1     squashfs    148096    148096         0 100% /snap/chromium/1810
/dev/loop8     squashfs     66688     66688         0 100% /snap/gtk-common-themes/1515
/dev/loop9     squashfs    224256    224256         0 100% /snap/gnome-3-34-1804/72
/dev/loop10    squashfs     66816     66816         0 100% /snap/gtk-common-themes/1519
/dev/loop11    squashfs     56832     56832         0 100% /snap/core18/2246
/dev/loop14    squashfs    151424    151424         0 100% /snap/chromium/1827
/dev/loop13    squashfs     56064     56064         0 100% /snap/hugo/11444
/dev/loop12    squashfs     95232     95232         0 100% /snap/youtube-dl/4572
/dev/loop15    squashfs    106752    106752         0 100% /snap/ipfs-desktop/30
/dev/loop16    squashfs    137728    137728         0 100% /snap/skype/194
/dev/sda1      vfat        523248      5356    517892   2% /boot/efi
/dev/loop17    squashfs     43264     43264         0 100% /snap/snapd/14066
/dev/loop18    squashfs    132480    132480         0 100% /snap/slack/48
/dev/loop19    squashfs    132480    132480         0 100% /snap/slack/47
/dev/loop20    squashfs    166016    166016         0 100% /snap/spotify/53
/dev/loop21    squashfs    171392    171392         0 100% /snap/spotify/56
/dev/loop22    squashfs     63360     63360         0 100% /snap/core20/1169
/dev/loop23    squashfs     15744     15744         0 100% /snap/wormhole/112
/dev/loop24    squashfs    224256    224256         0 100% /snap/gnome-3-34-1804/77
/dev/loop25    squashfs     33280     33280         0 100% /snap/snapd/13640
/dev/loop26    squashfs     63360     63360         0 100% /snap/core20/1242
/dev/loop27    squashfs     68864     68864         0 100% /snap/lxd/21803
/dev/loop28    squashfs    101888    101888         0 100% /snap/core/11798
/dev/loop29    squashfs     52224     52224         0 100% /snap/snap-store/547
/dev/loop30    squashfs     22656     22656         0 100% /snap/ipfs/2235
/dev/loop31    squashfs     68864     68864         0 100% /snap/lxd/21835
/dev/loop32    squashfs     55552     55552         0 100% /snap/snap-store/558
/dev/loop33    squashfs     20864     20864         0 100% /snap/wormhole/349
/dev/loop34    squashfs     22784     22784         0 100% /snap/ipfs/2525
/dev/loop35    squashfs     56064     56064         0 100% /snap/hugo/11427
/dev/loop36    squashfs    253952    253952         0 100% /snap/gnome-3-38-2004/87
/dev/loop37    squashfs     95232     95232         0 100% /snap/youtube-dl/4568
/dev/loop38    squashfs    106624    106624         0 100% /snap/ipfs-desktop/29
tmpfs          tmpfs      1424736        16   1424720   1% /run/user/125
tmpfs          tmpfs         1024         0      1024   0% /var/snap/lxd/common/ns
tmpfs          tmpfs      1424736        72   1424664   1% /run/user/1000

LXD is using ZFS: that default/containers/ml 4.0G 4.0G 0 100% / line in your first df output is a ZFS mountpoint.
lxc storage list would likely confirm it.
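For example, assuming the pool is the standard one named default, these two commands should confirm the driver and show how full the pool itself is:

# list the configured storage pools and their drivers
lxc storage list

# show total and used space for the pool backing the containers
lxc storage info default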


You’re correct (of course):

lxc storage list
[sudo] password for dbclinton: 
+---------+-------------+--------+--------------------------------------------+---------+
|  NAME   | DESCRIPTION | DRIVER |                   SOURCE                   | USED BY |
+---------+-------------+--------+--------------------------------------------+---------+
| default |             | zfs    | /var/snap/lxd/common/lxd/disks/default.img | 9       |
+---------+-------------+--------+--------------------------------------------+---------+

I can’t remember why I set it up that way (a small loop-backed pool) when I originally installed LXD. I don’t suppose there’s any easy way to change that now without having to rebuild all my containers.
By the way, I deleted a half dozen or so old and unused containers and I’ve now got enough space in the remaining containers for the time being.

You can add another storage pool using lxc storage create, then move your instances with lxc move NAME --storage NEW-STORAGE, and eventually change your default profile over to the new pool and delete the old one. It takes some effort and some space, but it can be done.
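A rough sketch of what that could look like; the pool name bigpool, the 100GiB size and the instance name ml are only examples, and instances generally need to be stopped before being moved between pools:

# create a new, larger loop-backed ZFS pool
lxc storage create bigpool zfs size=100GiB

# move an instance onto the new pool (repeat per instance)
lxc stop ml
lxc move ml --storage bigpool
lxc start ml

# point the default profile's root disk at the new pool
lxc profile device set default root pool=bigpool

# once nothing references the old pool any more, remove it
lxc storage delete default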


It sounds like it might be worth the effort. I’ve been using LX containers for many years now and they’ve become a fundamental part of just about everything I do. I definitely want a stable container environment.
Thanks for your fantastic work on this!
