For one of my main servers I’ve set the LXD default pool to an SSD mirror, where this default pool is its own ZFS dataset. I purchased the SSDs for this pool a little less than a year ago (and, truthfully, they were also used in a previous TrueNAS build that didn’t work out well). I am now realizing:
I purchased the wrong type of SSDs for the application I’m using them in (I should have gotten enterprise drives)
I need to pay attention to SSD wear and write amplification (the reason for this post)
So this brings me to the question: what are the recommended ZFS pool settings for LXD that allow for good performance while minimizing wear on the SSDs?
Looking around a little, most people discussing ZFS, SSDs, and applications/VMs suggest the tweaks below (example commands for applying these follow the list):
Recordsize - set this to somewhere between 16K and 64K for most applications, and especially for VMs
Logbias - set this to “throughput”
Compression - set this to LZ4
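Concretely, assuming the default LXD pool lives at a dataset like tank/lxd (a placeholder name; check yours with `lxc storage show default`), these would be applied as:

```
# "tank/lxd" is a placeholder dataset; substitute your own
zfs set recordsize=16K tank/lxd       # smaller records reduce read-modify-write amplification
zfs set logbias=throughput tank/lxd   # large sync writes go straight to the pool instead of double-writing via the ZIL
zfs set compression=lz4 tank/lxd      # cheap on CPU, and fewer bytes written means less SSD wear
```

Note that recordsize only affects files written after the change, so it’s best set before filling the dataset.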
Are there any other guidelines or settings we should make sure are set when using a ZFS dataset as the default pool?
Here is the iostat output for my ZFS mirror that contains the LXD dataset:
Hi @ZeroGravitas,
I’m no expert on the ZFS filesystem, but you can consider these parameters as well.
zfs set atime=off <pool_name>
If you have enough RAM, you can also tune the zfs_arc_max module parameter to cap how much memory the ARC uses.
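A minimal sketch of how to set it, assuming Debian/Ubuntu and an example cap of 8 GiB:

```
# 8 GiB = 8 * 1024^3 bytes; example value only, size it to your workload
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max   # takes effect immediately, lost on reboot

# Persist the setting across reboots:
echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf
update-initramfs -u   # Debian/Ubuntu: rebuild the initramfs so the option applies at boot
```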
I also found this information about ZFS:
When creating a pool, use disks with the same blocksize. The link between the ZFS “blocksize” and the disk blocksize is the ashift parameter (which cannot be modified after pool creation). Either leave the value at “0” for automatic blocksize detection, or set it manually: ashift=12 for 4K-sector disks, ashift=9 for 512-byte-sector disks. Mixing disks with different blocksizes in the same pool can lead to problems such as degraded performance or inefficient space utilization.
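For example, forcing 4K sectors at creation time and verifying afterwards (pool and device names here are placeholders):

```
# ashift=12 means 2^12 = 4096-byte sectors; fixed per vdev at creation
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb

# Confirm what the pool actually got
zpool get ashift tank
```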
On a side note, for my homelab and various personal projects I’ve purchased over a dozen used enterprise SSDs off eBay over the last 4-5 years. Aside from three Samsungs, the rest were all Intel: various SATA and NVMe drives in sizes from 60 GB to 2 TB.
I wrote the seller and confirmed the SMART data was good before purchase (look for drives with less than 3% wear and minimal reads/writes). So far, not one failure, and I have a couple of drives serving a half dozen busy Proxmox LXC containers that previously beat up consumer SSDs.
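If it helps, wear can be checked with smartctl along these lines once the drives arrive (device names are examples, and the exact attribute names vary by vendor):

```
# SATA SSDs: look at wear attributes such as Wear_Leveling_Count,
# Media_Wearout_Indicator, and Total_LBAs_Written
smartctl -a /dev/sda

# NVMe drives: the SMART/health log reports "Percentage Used" directly
smartctl -a /dev/nvme0
```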
I was super wary at first, because who in their right mind buys used drives, but I continue to be surprised that all the drives are still in good shape. I’ll likely never buy another consumer SSD again.
Thanks both for the input. I successfully transitioned the pool to the new enterprise drives, and things seem much better with the new ZFS properties set as discussed above.
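For anyone following along, a quick way to confirm the properties stuck (dataset name is a placeholder):

```
# "tank/lxd" is a placeholder; substitute your own pool/dataset
zfs get recordsize,logbias,compression,atime tank/lxd
```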
And yeah, I may be looking into used enterprise SSDs as well from now on. The write endurance is simply so much higher that it’s well worth it.
Any best brands to look for now that Intel has sold its SSD division? I was thinking Micron or Samsung might be the best bets.