How to check LXD container size and how much space they are taking

I am new to LXD. Can anyone help me check the size of an LXC container and how much space it is taking? I created an LXD container, and when I try to install a few packages in it, it gives an error that no space is available.


First of all, there is no integrated way to check how much space containers are taking. I think the reason is that the developers only want to commit to solutions that give exact results that can’t be contested. However, disk space calculation has no exact answer on modern storage systems (think of wasted space due to block size, or disk deduplication, for example).
So the best answer may depend on your storage system. The LXD version also matters.
That said, for LXD 3.12 (snap version) and the default storage, I use:

sudo nsenter -t $(pgrep daemon.start) -m -- du -m -d 2 /var/snap/lxd/common/lxd/storage-pools/default

It’s a bit slow and certainly not ‘exact’, but I have no need for more. For the LTS LXD version (not snap), you can drop the nsenter part and change the path to the storage.
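For example, on a non-snap install the equivalent would be something like the following (a sketch; the pool path and name may differ on your system):

```shell
# Non-snap LXD keeps its storage pools under /var/lib/lxd.
# -m reports sizes in MiB, -d 2 limits output to two directory levels,
# so you get one line per container plus a few totals.
sudo du -m -d 2 /var/lib/lxd/storage-pools/default
```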

Hi!

When you first set up LXD, you assign some space for all the containers. This space is called a storage pool. Here is mine. The storage pool is called (in my case) lxd, I am using the zfs storage driver, and I have 51 containers in total.

$ lxc storage list
+------+-------------+--------+--------+---------+
| NAME | DESCRIPTION | DRIVER | SOURCE | USED BY |
+------+-------------+--------+--------+---------+
| lxd  |             | zfs    | lxd    | 51      |
+------+-------------+--------+--------+---------+

Let’s see some more details about this storage pool. We run lxc storage info with the name of the storage pool (in my case, lxd). It shows the total space that my containers are using (around 45GB), and also the total space that I have dedicated for the storage pool (around 215GB).

$ lxc storage info lxd
info:
  description: ""
  driver: zfs
  name: lxd
  space used: 45.20GB
  total space: 215.88GB
used by:
  containers:
...

In my case, I am using the ZFS storage driver, and I can use some ZFS commands to get a rough overall idea of how much space is used by each container. Below it reaffirms that I am using around 40-plus gigabytes for my containers, out of a total of about 200 gigabytes of available space.

$ zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
lxd                 42,3G   201G    24K  none
lxd/containers      42,8G   201G    24K  none
lxd/containers/c1    486M   201G   816M  /var/snap/lxd/common/lxd/storage-pools/lxd/containers/c1
lxd/containers/c2    901M   201G  1,11G  /var/snap/lxd/common/lxd/storage-pools/lxd/containers/c2
...
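If you are on ZFS like this, you can also sort the container datasets by usage to find the biggest consumers (a sketch; replace lxd/containers with your own pool’s dataset name):

```shell
# List the container datasets sorted by the USED column.
# -r recurses into child datasets, -s used sorts ascending by usage,
# -o limits the output to the named columns.
zfs list -r -s used -o name,used,avail lxd/containers
```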

If you have set up LXD with the default settings, you probably created a storage pool of size 15GB (the default).
Can you verify that this is the case with your installation? Run the above commands and report back.

There are two ways to solve a small storage pool problem: you can either reinstall LXD and create a bigger storage pool, or you can create a second, bigger storage pool and move some containers there. Ask for details on either of these options.
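A sketch of the second option, assuming a new ZFS pool named pool2 and a container named c1 (both names are placeholders, and the --storage flag on lxc move needs a reasonably recent LXD):

```shell
# Create a second, bigger storage pool (here: a 50GiB loop-backed ZFS pool).
lxc storage create pool2 zfs size=50GiB

# Stop the container, move it to the new pool, and start it again.
lxc stop c1
lxc move c1 --storage pool2
lxc start c1
```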

So I’m using dir instead of zfs. Given that, is du the way to figure out the size of a container? If so, I’m a little surprised, because I seem to be using 3.1G, but lxc storage info is telling me it’s using 8.11GB/10.25GB. Which is right?

And with regard to increasing the storage pool, when you say reinstalling LXD, is that something that would not disturb the containers themselves?

When using dir, unless you happen to have project quotas enabled on your filesystem (very unlikely), the usage reported by the kernel will be that of the entire partition (it should match df -h inside the container).
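A quick way to compare the two figures side by side, assuming a container named c1 and the default pool (names are placeholders):

```shell
# Disk usage as the container sees it; with the dir driver this is
# the usage of the whole host partition backing the pool.
lxc exec c1 -- df -h /

# What LXD reports for the pool, for comparison.
lxc storage info default
```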

Well, there again, df -h is telling me /dev/root is 7.6/9.6G. So which is right?

And how do I go about increasing the overall pool to make room for additional containers?

7.6/9.6 is the same as 8.11/10.25 when you account for the fact that one is measured in GB and the other in GiB (1000 vs 1024).

For the dir backend, there is no real pool size restriction to speak of; the available space is whatever disk space you have available on the parent device, typically your system’s root device.

Derp, I should have noticed the same ratio. Thanks for pointing out the obvious!

One other question, though: if dir is unlimited except by available disk space, what is the total available space about?

total space is the size of the partition, space used is how much of the partition is in use, and the free space is the total minus the used.

But you’re saying, essentially, that the partition will expand as needed, limited only by total host disk space?

The dir backend doesn’t partition anything. It’s literally just a bunch of files inside a regular directory. That directory is stored on whatever partition is behind /var.

In most cases (unless you have done some fancy partitioning), that means that it’s stored on the same partition as your / and that’s therefore the space you see.

That means that:

  • df -h / on the host
  • df -h / in any of the containers
  • lxc storage info default

Will all match as far as used and free space (minus unit difference).

And if I use LVM? I can see output of lxc storage info but how I can see the details of the volume in use?

lxc storage volume show will show you the volume config; usage can normally be seen with lxc info against the container.
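For example, assuming an LVM-backed pool named default and a container named c1 (both names are placeholders):

```shell
# Show the configuration of the container's root volume on the pool.
lxc storage volume show default container/c1

# Per-container disk usage appears in the instance info output.
lxc info c1
```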
