ZFS vs LVM container size

So, ZFS has been an utter nightmare on my LXD deployment of over 250 containers. First I had to rebuild my pool because deduplication was using A LOT more RAM than reasonably expected (it ate through my 48GB of RAM). Then I had to rebuild again because of a ZFS/kernel issue that would not let me restart containers on the new pool that were built from an image on the old pool. And now, finally, I am plagued by almost daily arc_prune storms, despite playing with every single zfs modprobe setting and even rebuilding ZFS from source. :tired_face:

I do not trust BTRFS, or my understanding of it, enough to choose it for production, so LVM is my next move.

I have built an LVM thinpool and I see that the size reported by df -h inside a container is about double that reported inside containers on the ZFS pool (778M vs 369M)! Also, when checking pool utilization with lxc storage info, I get “space used: 0B”, while the ZFS pool reports a more realistic number.
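
For reference, this is roughly how I compared the two (the container and pool names below are placeholders for my setup, not the real ones):

    # rootfs size as seen inside a container on each backend
    lxc exec c1-lvm -- df -h /
    lxc exec c1-zfs -- df -h /

    # pool-level usage as reported by LXD
    lxc storage info lvm-pool
    lxc storage info zfs-pool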

I’m at my wit’s end. Is LVM a viable choice?

Unless you configured LXD with volume.zfs.use_refquota=true, ZFS would have reported the deviation from the base image as the used space, rather than reporting the entirety of the container’s rootfs as the used space.
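
If you want ZFS-backed containers to report usage based on referenced space instead of the delta from the image, something along these lines should do it (the pool name “default” is just an example), and you can see the distinction directly in the USED vs REFER columns of zfs list:

    # switch ZFS volume usage/quota reporting to refquota (example pool name)
    lxc storage set default volume.zfs.use_refquota true

    # USED is the delta from the image snapshot, REFER is the full rootfs
    zfs list -o name,used,refer,mountpoint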

Because LVM gives you a block device with an individual filesystem per container, the disk usage you see inside the container covers the entirety of the files in the container’s rootfs, which explains the difference.

I’m not sure why disk space reporting would be broken on LVM, so that may be a bug. Though I would expect all block-based backends (LVM and Ceph) to only have space reporting working while the container is running, as the block device would otherwise not be mounted.
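
As a cross-check while a container is running, you can also look at the thin pool directly at the LVM layer; for example (the volume group name here is just a guess at your layout):

    # thin pool and per-container thin volume usage as LVM sees it
    sudo lvs -o lv_name,lv_size,data_percent vg_lxd

    # confirm the container's block device is actually mounted
    findmnt | grep containers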


Thanks for the clear answer.

The reported storage usage is with the containers running, though. Do you think I should open an issue?

Yeah, an issue would be good