LXD containers and discrepancies with df and du


#1

Hello, I am currently working on a server running Ubuntu Server 18.04 with approximately 800GB of disk space. In short, I created an LXD container and copied it (lxc copy) multiple times on the server (about 30 copies of the same container).

Here’s my issue: when I run “df -H /”, the command reports that I have used about 150GB of my total disk space. However, when I run “du -sh /var/lib/lxd”, it reports that the directory is approximately 600GB in size. I also tested the commands after further increasing the number of container copies: df reported little to no difference in disk usage, whereas du reported that the /var/lib/lxd directory alone was larger than my 800GB drive.

So my question is: why the discrepancy between df and du? Should I be concerned that my containers are, or will be, taking up all the disk space reported by du?


#2

Hi!

You do not mention which storage driver you are using.

Let’s have a look with an example. ZFS supports copy-on-write. This means that if you create a container from an image such as ubuntu:18.04, it takes up about 335MB.
You can view the size with sudo zfs list.
When you create a second container based on ubuntu:18.04, it will reuse the existing 335MB, and sudo zfs list will report only the additional space that is really used (for example, after running apt upgrade), which would be a few MBs.
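For reference, here is a sketch of how to inspect that per-dataset accounting. It assumes a ZFS pool named lxd (the common default when LXD creates the pool for you), so treat it as illustrative rather than exact:

```shell
# List LXD's ZFS datasets recursively. USED is the space unique to each
# dataset; REFER is the data it references, which is largely shared with
# the image it was cloned from.
sudo zfs list -r lxd

# The same view with explicit columns, closer to what df summarizes:
sudo zfs list -o name,used,avail,refer lxd
```

The key point is that du sums each container's files independently, while zfs list shows that most of that data is a single shared copy.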

Commands like du are not aware of this feature because it operates at a lower level than du.
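You can reproduce a related file-level vs block-level mismatch on any Linux filesystem with a sparse file. This is not copy-on-write, but it shows the same general idea: the sizes a file claims and the blocks actually allocated are separate concepts, and different tools report different ones.

```shell
# Create a 100MB sparse file: the apparent size is 100MB, but no data
# blocks have actually been allocated on disk.
truncate -s 100M sparse.img

# Apparent size (what the file metadata claims): about 100M.
du -h --apparent-size sparse.img

# Allocated blocks (what is really on disk): about 0.
du -h sparse.img

rm sparse.img
```

With copy-on-write the mismatch goes the other way: du counts shared data once per container, so it over-reports, while df shows the real allocation.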


#3

Hello,

After double checking, my storage pool is using the btrfs storage driver. So what you’re saying is that the container copies are using the same “space” as the original, and that only the differences or changes are written and take up additional space?


#4

Yes, that’s how copy-on-write works.

btrfs also supports this, but you will have to use the appropriate btrfs tools to see the actual disk space.
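As a sketch, the btrfs-level commands below show the filesystem's own allocation accounting, which, unlike du, understands shared extents. They must be run as root against a btrfs-backed path such as /var/lib/lxd:

```shell
# Filesystem-level allocation (data/metadata/system), the view df
# roughly corresponds to:
sudo btrfs filesystem df /var/lib/lxd
sudo btrfs filesystem usage /var/lib/lxd

# Per-subvolume accounting (each container is a subvolume) requires
# quota groups to be enabled first:
sudo btrfs quota enable /var/lib/lxd
sudo btrfs qgroup show /var/lib/lxd
```

With qgroups enabled, the per-subvolume numbers distinguish data exclusive to a container from data shared with the container it was copied from.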

In general, see the Feature comparison at https://lxd.readthedocs.io/en/latest/storage/ for the features of each storage driver.


#5

Awesome, thanks for the clarification