Research has confirmed that running the LXC storage system on ZFS (backed by a dedicated partition or disk rather than a loop file) yields the best performance.
That makes sense, but it is not clear how container storage should be set up for best performance when the storage lives on the network.
Multiple hosts, each running various LXC containers, are connected to a central NAS that provides all of their storage over NFS.
In this case the NAS server uses ZFS internally to manage all of its storage, but the hosts that connect to it access images and container-generated content over NFS.
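To make the current layout concrete, here is a rough sketch of that setup; the dataset name, hostname, subnet, and mount point are all made up for illustration:

```shell
# On the NAS (ZFS internally): export a dataset over NFS
# /etc/exports — "tank/lxd" and the 192.168.1.0/24 subnet are hypothetical
/tank/lxd  192.168.1.0/24(rw,sync,no_subtree_check)

# On each LXC host: mount the share
mount -t nfs nas.example.local:/tank/lxd /mnt/lxd
```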
So naturally, when you set up a host and initialize LXD, you have to choose a storage backend for the containers.
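For reference, that choice is made when the storage pool is created; the pool names, device path, and directory below are assumptions, not part of any particular setup:

```shell
# ZFS-backed pool on a dedicated block device (the recommended local setup)
lxc storage create default zfs source=/dev/sdb

# Directory-backed pool — this is what you would point at an NFS mount
lxc storage create nfspool dir source=/mnt/lxd/containers
```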
If you choose ZFS in this setup, you end up with ZFS running on top of NFS, which is in turn backed by a server running ZFS.
Surely this design is not ideal: all the overhead and nested filesystems hurt performance, and directory-based storage on LXC is considered slow and is not recommended either.
How do you manage shared storage over the network while still maintaining good overall system performance and keeping overhead low?
Here are some options that come to mind:
- ZFS over ZFS may not be that bad, and perhaps it is acceptable in this scenario
- Don't use ZFS on the storage server; use a simpler filesystem such as EXT4 there, and let the ZFS driver on the LXC host handle it
- Is directory-based storage actually acceptable in this case?
- Keep the container images cached on the hosts, and manage container-generated content via NFS mounts
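The last option above could look roughly like this: a local ZFS pool holds the container root filesystems, while an NFS share holds the generated data. The pool, container, device, and path names here are hypothetical:

```shell
# Local ZFS pool on the host holds the image/rootfs layers
lxc storage create local zfs source=/dev/sdb
lxc launch ubuntu:22.04 web1 --storage local

# Container-generated data lives on the NAS, attached as a disk device
mount -t nfs nas.example.local:/tank/data /mnt/data
lxc config device add web1 appdata disk source=/mnt/data/web1 path=/srv/data
```

This keeps the hot rootfs I/O on fast local ZFS and limits NFS traffic to the data that actually needs to be shared.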