I could partition the disk differently if that would help limit the disk I/O of the container, e.g. having a RAID array dedicated to ZFS for LXD. Would that help?
This means that right now LXD only supports disk limits if you’re using the ZFS or btrfs storage backend. It may be possible to implement this feature for LVM too, but that depends on the filesystem used with it and gets tricky when combined with live updates, as not all filesystems allow online growth and pretty much none of them allow online shrink.
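As a sketch of what those disk quotas look like in practice (assuming a hypothetical container named `c1` with a `root` disk device, on a ZFS or btrfs backend):

```shell
# Set a 20GB quota on the container's root disk device.
# Only works when the storage backend (zfs/btrfs) supports quotas.
lxc config device set c1 root size 20GB

# Verify the configured size on the device.
lxc config device show c1
```

The quota can be changed on a running container, subject to the grow/shrink caveats above.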
They’re not. Disk quotas only really work with btrfs and zfs, but I/O quotas work with anything that’s backed by a block device. There are, however, a lot of restrictions on block I/O limits due to the way they work in the kernel.
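For comparison, here is roughly what the block I/O limits look like (again assuming a hypothetical container `c1` whose storage sits on a physical block device):

```shell
# Cap read throughput on the root disk at 30MB/s.
lxc config device set c1 root limits.read 30MB

# Cap combined read+write operations at 1000 IOPS.
lxc config device set c1 root limits.max 1000iops
```

These map onto the kernel's I/O cgroup controller, which is where the restrictions mentioned above come from: the limits apply per physical device, not per partition or per filesystem.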
@stgraber IIUC it is not possible to I/O-limit a software RAID, as it is a virtual device, is that correct?
In that case, it is very unfortunate that it is not possible to create I/O-limited containers on redundant RAID 1 arrays, which protect them from the disk failures that sooner or later always happen. Many hosters only offer servers with 2 disks that are meant to be used as a RAID 1 array.
Yeah, that’s correct. We could add code to track down the parent devices of an mdadm-managed RAID, though I can’t guarantee that would lead to particularly useful limits, since we can only apply them to a whole device, not at the partition level and not at the filesystem level…
Feel free to open an issue at https://github.com/lxc/lxd/issues to have us add extra code to track down the backing devices of an mdadm RAID. It “should” work mostly okay if you’re using that with the dir backend, though the same limit would be applied to all underlying devices, which may allow for more throughput than the limit says. If combined with ZFS, btrfs or LVM, there’s an extra layer of indirection which makes the limit even fuzzier.