Clean up loop file LVM storage

Hi,

What is the best way to clean up a storage pool? I removed most of the images and instances, and this is the output:

lxc storage info default
info:
  description: ""
  driver: lvm
  name: default
  space used: 31.99GB
  total space: 797.85GB
used by:
  images:
  - 2bfabe17d255475cd95a666b79aeda9cc3253c380fcb47ddffbdf32197ebbab1
  - 7ae96d5ab3e84511b242a9569cd875e19117e7413f32ece225bfe2af2819918f
  - ccd9547bd43a038f5d406987d98668a462a6a3a3d7a0e4d9308ee20dc79030f3
  instances:
  - test-120cae
  profiles:
  - default

Which looks correct. However, if I check the actual used space with du:

# du -sh * 2>/dev/null
12K	backups
1.6M	cache
4.0K	containers
318M	database
4.0K	devices
4.0K	devlxd
450G	disks
9.5G	images
5.6M	logs
20K	networks
24K	security
4.0K	server.crt
4.0K	server.key
0	shmounts
4.0K	snapshots
52K	storage-pools
0	unix.socket
4.0K	virtual-machines
4.0K	virtual-machines-snapshots

It shows that the disks folder occupies 450G. That is quite a big difference between 32G and 450G. Is it some sort of cache? Is there a way to clean it up too?

Thanks!

Please can you show the output of ls -lah on the disks directory, as well as lxc storage show default.

@tomp thanks for the quick reply!

# ls -lah disks
total 450G
drwx------  2 root root 4.0K May 17 14:36 .
drwx--x--x 17 root root 4.0K Aug  3 12:16 ..
-rw-------  1 root root 746G Aug  3 12:25 default.img

And

lxc storage show default
config:
  lvm.thinpool_name: LXDThinPool
  lvm.vg_name: default
  size: 800GB
  source: /var/snap/lxd/common/lxd/disks/default.img
  volume.size: 20GB
description: ""
name: default
driver: lvm
used_by:
- /1.0/images/2bfabe17d255475cd95a666b79aeda9cc3253c380fcb47ddffbdf32197ebbab1
- /1.0/images/7ae96d5ab3e84511b242a9569cd875e19117e7413f32ece225bfe2af2819918f
- /1.0/images/ccd9547bd43a038f5d406987d98668a462a6a3a3d7a0e4d9308ee20dc79030f3
- /1.0/profiles/default
status: Created
locations:
- none

OK so the issue here is that your LVM storage pool is using a loopback image file, /var/snap/lxd/common/lxd/disks/default.img, which has a fixed maximum size of 800GB. The LVM volume group is then created on top of that. The disk image file is created as a sparse file, which means it won’t use the full 800GB straight away; instead it consumes space as the LVM subsystem uses it.
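You can see this for yourself by comparing the size the image file claims to be with the blocks actually allocated on the host. These commands are just an illustration; adjust the path if your install differs:

# ls -lsh /var/snap/lxd/common/lxd/disks/default.img
# du -h --apparent-size /var/snap/lxd/common/lxd/disks/default.img
# du -h /var/snap/lxd/common/lxd/disks/default.img

The apparent size is the 746G you see in ls -l, while plain du (and the first column of ls -s) reports only the blocks that have actually been written, i.e. the circa 450G.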

However, I don’t think the LVM subsystem will release the space on the virtual disk once it has been used, even if the volumes are deleted. It will keep that space allocated (just as the disk or partition size would be fixed if it were a real disk).
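A rough way to confirm this, assuming you have an instance you genuinely no longer need (the name below is just a placeholder), is to check the allocated size of the image file before and after deleting it; the du figure should stay roughly the same:

# du -h /var/snap/lxd/common/lxd/disks/default.img
# lxc delete unneeded-instance
# du -h /var/snap/lxd/common/lxd/disks/default.img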

I would suggest you create a new, smaller LVM pool, copy the instances to it, and then remove the old one.
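Something along these lines, assuming your LXD version supports the --storage flag on lxc copy (the new pool name, its size and the instance names are just examples; stop the instance first to keep things simple):

# lxc storage create default2 lvm size=100GB
# lxc copy test-120cae test-120cae-tmp --storage default2
# lxc delete test-120cae
# lxc move test-120cae-tmp test-120cae

Once nothing references the old pool any more (you would also need to delete the cached images and point the default profile’s root device at the new pool, e.g. via lxc profile edit default), you can remove it with lxc storage delete default.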

Ah so it is “there” and available, but just “reserved”? There is 875G available in total on the server, so I think the current storage size is fine.

To get the size of the storage occupied by images and containers, is it better to look at the lxc storage info command?

# lxc storage info default
info:
  description: ""
  driver: lvm
  name: default
  space used: 31.99GB

Yes, the sparse file is 800GB in size, so that’s the maximum it will grow to. When you use du it shows the size of the allocated blocks in that image file (circa 450GB), but not all of that is “used”; it’s just not available to other files on the system.
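You can reproduce that behaviour with a throwaway sparse file, just to illustrate the difference between the apparent size and the allocated blocks:

# truncate -s 10G /tmp/sparse-test.img
# du -h --apparent-size /tmp/sparse-test.img
# du -h /tmp/sparse-test.img
# rm /tmp/sparse-test.img

The first du reports 10G (the size the file claims to be) while the second reports essentially nothing, because no blocks have been written yet. Your default.img is the same idea, just with circa 450GB of blocks already touched by the LVM subsystem.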

Then when you create LVM volumes, they may use some of that already allocated space or they may use more space (up to the maximum size of the disk image) - this depends on how LVM allocates the blocks.

When you delete or resize an LVM volume the blocks are reused, so the lxc storage info command is just showing how much of the volume group is actively assigned to volumes.

You can see more by running sudo lvs.
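For example, something like this shows the volume group’s total and free space, plus how full the thin pool and each thin volume actually are (the exact columns available can vary between LVM versions):

# sudo vgs default
# sudo lvs -o lv_name,lv_size,pool_lv,data_percent default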

It’s also worth noting that the same thing occurs at a lower level, inside the LVM volumes themselves. You may have a volume of, say, 10GB, and the filesystem inside the container will show less usage than the LVM subsystem (via lvs) sees, because blocks that have been used by the filesystem are not released when a file is deleted or reduced in size (they are still available for reuse though).
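If you want to see that difference, compare what the filesystem inside the instance reports with what LVM reports on the host, e.g.:

# lxc exec test-120cae -- df -h /
# sudo lvs -o lv_name,lv_size,data_percent default

The df figure drops as files are deleted, while the data_percent that lvs reports typically stays where it was (unless the blocks get discarded back to the thin pool).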

Thanks for the details!