Looking for clarity on disk I/O limits with ZFS

I’m not sure which code paths can be properly restricted for ZFS; it’s always a bit unclear which bits of the normal kernel infrastructure zfs/spl use and which bits use their own implementation.
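
If it helps, a quick empirical check beats guessing at code paths. A rough sketch, assuming a container named `c1` on a ZFS-backed pool (names and sizes are placeholders):

```bash
# Put a read limit on the root disk device and see whether it actually bites.
lxc config device override c1 root limits.read=10MB

# Write a test file, drop caches on the host, then time a streaming read.
# If the limit is enforced, the read should settle around 10 MB/s.
lxc exec c1 -- dd if=/dev/zero of=/root/testfile bs=1M count=512
sync && echo 3 > /proc/sys/vm/drop_caches   # ZFS's ARC only partially honors this
lxc exec c1 -- dd if=/root/testfile of=/dev/null bs=1M
```

Note that the ARC can still serve reads from memory, so treat the numbers as approximate.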

btrfs is usually our second recommendation (and in fact, the most used backend), though stability in RAID setups has been an issue in the past and disk quotas are effectively useless, so it may not be suitable for everyone.
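
For context, btrfs quotas are built on qgroups. A minimal sketch of the mechanism (the pool path and subvolume name are assumptions):

```bash
# Enable quota tracking on the filesystem, then cap one container's subvolume.
btrfs quota enable /var/lib/lxd/storage-pools/btrfs
btrfs qgroup limit 10G /var/lib/lxd/storage-pools/btrfs/containers/c1

# Inspect usage and limits for that subvolume.
btrfs qgroup show -reF /var/lib/lxd/storage-pools/btrfs/containers/c1
```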

Using LVM would give each container a dedicated block device and filesystem, which should avoid all those issues, though at the cost of snapshot reliability and lengthy exports/moves.
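
A rough idea of what the LVM backend does per container (the VG name, sizes, and mount path here are assumptions):

```bash
# One logical volume and one filesystem per container.
lvcreate --name c1 --size 10G vg0
mkfs.ext4 /dev/vg0/c1
mount /dev/vg0/c1 /var/lib/lxd/storage-pools/lvm/containers/c1

# Snapshots are block-level CoW volumes, which is where the reliability
# and export-speed trade-offs come from.
lvcreate --snapshot --name c1-snap0 --size 2G /dev/vg0/c1
```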

Another thing of note: if your SSDs are NVMe, then none of that matters, as Linux just plain doesn’t support I/O restrictions on those these days. They don’t go through the I/O scheduler; instead, the queuing happens on the drives themselves, outside of OS control.
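
You can check this per device; NVMe drives typically report no usable scheduler at all (device names below are just examples):

```bash
cat /sys/block/nvme0n1/queue/scheduler   # usually prints: [none]
cat /sys/block/sda/queue/scheduler       # e.g.: noop deadline [cfq]
```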

Another reason why you may not be able to control block I/O in your environment is your choice of I/O scheduler: not all of them support limits, and last I checked, only CFQ did a good job of enforcing them.
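
For illustration, on a kernel where CFQ is available, switching to it and setting limits looks roughly like this (the container cgroup path is an assumption, cgroup v1 layout):

```bash
# CFQ is what makes proportional blkio weights meaningful.
echo cfq > /sys/block/sda/queue/scheduler
echo 500 > /sys/fs/cgroup/blkio/lxc/c1/blkio.weight

# Hard byte/s caps live in the blkio.throttle.* files (8:0 = sda here).
echo "8:0 10485760" > /sys/fs/cgroup/blkio/lxc/c1/blkio.throttle.read_bps_device
```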
