The documentation (Storage pools | LXD) says:

> The two best options for use with LXD are ZFS and btrfs. … the directory backend is to be considered as a last resort option. … [it] is terribly slow and inefficient as it can’t perform instant copies or snapshots and so needs to copy the entirety of the instance’s storage every time.
I’ve been wondering for a while why the directory backend is the option of last resort. In particular, how is it slow to use the filesystem you already live in? And wouldn’t a loop-backed device be the more natural last resort? It’s obvious that snapshots and copies are going to be slow on, say, an ext4-based storage pool compared to ZFS or btrfs, where copy-on-write (COW) makes such things instantaneous, but is anything else going to be slower as well? If so, why?
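For concreteness, these are roughly the two setups I’m comparing (pool names are just placeholders I picked):

```shell
# Directory-backed pool: instances are stored as plain directories
# on the host's existing filesystem (ext4 in my case)
lxc storage create dirpool dir

# Loop-backed ZFS pool: LXD creates a sparse image file on the host
# and builds a ZFS pool inside it
lxc storage create looppool zfs size=20GiB
```

My intuition is that the loop-backed variant adds a layer of indirection on top of the same host filesystem, which is why I’d have expected it, not the directory backend, to be the last resort.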
Second, how exactly is dedicating a full disk or partition to your LXD storage pool faster or better than just using a directory backend? I’m not seeing this. I have several legacy systems with very large hardware RAID partitions, so using ZFS would be a challenge. I could create a separate volume on the RAID device for my ZFS storage pool, but I don’t understand what advantages this affords me.
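In case it matters, the alternative I’m considering for those RAID systems would look something like this (the device path is hypothetical, standing in for a volume carved out of the hardware RAID):

```shell
# Dedicate a block device to a ZFS-backed storage pool;
# LXD creates and manages the zpool on it directly
lxc storage create raidpool zfs source=/dev/sdb1
```

What I don’t see is what this buys me over pointing a directory backend at a path on the same RAID volume.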
In most of my use cases I’ll be creating snapshots only occasionally (i.e. most of the volatile data will live outside the container, accessed through a bind mount or something) and will make copies even more rarely. If slow snapshots and copies are the only issues with a directory backend, then this isn’t a concern for me.