As you can see, the disk is being written to at 1.4 GB/s even though I set the limit to 1 MB.
Any thoughts on why the limit is not being applied? Thanks in advance for any suggestions!
(Edit: I had not set the profile to use the 'limited' pool, but I still experience the same problem after fixing this. I have updated the commands I used above to reflect this change.)
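For anyone hitting the same thing, pointing the profile's root device at the right pool looks roughly like this (a sketch only; 'limited' is the pool name mentioned above and the exact commands may differ from what I ran):
lxc profile device remove default root
lxc profile device add default root disk path=/ pool=limited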
Hi @nicolas, following our discussion on IRC: if you write zeros to your disk, caching 'affects' the performance, which is why you see those numbers.
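For example, a write test that bypasses the page cache and doesn't just write zeros might look roughly like this (file path and sizes are just placeholders):
# write random data with O_DIRECT so the page cache is kept out of the measurement
dd if=/dev/urandom of=/root/ddtest bs=1M count=100 oflag=direct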
Hey, I'm having a similar issue. I've been trying to apply I/O read/write limits on my container, but they don't seem to take any effect on the zfs or dir backends. They worked for me on btrfs and lvm, but are easily bypassed by running commands in Docker or in an LXD child container. Here's my setup:
lxc profile delete default
lxc storage delete default
#either of following lines
#lxc storage create default zfs source=/dev/sdb
#lxc storage create default btrfs source=/dev/sdb
#lxc storage create default lvm source=/dev/sdb
#lxc storage create default dir source=/data/lxd #/dev/sdb1 mounted on /data
lxc profile device add default root disk path=/ pool=default
lxc launch ubuntu: test
lxc config set test security.nesting true
lxc config device add test root disk pool=default path=/
lxc config device set test root limits.write 1MB
lxc exec test bash
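To double-check that the limit was actually recorded on the container's root device, something like this should print it back (using the 'test' container and 'root' device from the commands above):
lxc config device get test root limits.write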
As for measurement, I've been looking at the output of 'iostat -d sdb 1' on the host. For generating the I/O I've been planning to use fio, but running 'apt update -y && apt install -y fio' alone proved that the limits worked only on btrfs and lvm.
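For the fio side, the kind of run I had in mind is roughly this (path and sizes are just an example; --direct=1 keeps the page cache out of the measurement):
fio --name=writetest --filename=/root/fio-test --rw=write --bs=1M --size=256M --direct=1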
Am I out of luck, are there options I haven't tried, or am I doing something wrong perhaps?
Hello,
It seems there is exactly the same issue with ZFS too.
I/O limits can be applied, but they don't seem to have any effect.
Has anyone had them working as expected?
Thanks in advance!
There are unfortunately quite a few cases where I/O limits don't apply.
ZFS ignores most of the blkio limits as the ZFS kernel driver uses its own I/O codepath, bypassing them…
You also cannot apply I/O limits on NVMe drives, as those don't go through a kernel-side I/O scheduler; the hardware itself handles the queuing…
The other backends should generally behave a bit better, though then again you may get into weird cases if you use btrfs with multiple drives or LVM with multiple PVs: some block devices will accept limits, some won't, and some will only apply them in some cases…
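One way to check whether a limit even reached the kernel is to look for the container's blkio throttle settings. This is a sketch assuming cgroup v1 (where these limits live); the cgroup path layout varies between LXD/LXC versions, so the find below just searches for it using the 'test' container name from above:
# an empty file means no throttle was programmed; otherwise you get "MAJ:MIN bytes_per_sec" lines
find /sys/fs/cgroup/blkio -path '*test*' -name blkio.throttle.write_bps_device -exec cat {} \;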
I suspect the most reliable is probably good old ext4, as that's what the various kernel developers most likely tested with, again so long as you're using an I/O scheduler which supports limits (BFQ/CFQ). The use of SCSI multiqueue may be part of what's getting in the way of accessing some of those schedulers.
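A quick way to check which scheduler a device is using, and to switch it, is through sysfs (sdb as in the iostat example above; needs root):
# the active scheduler is shown in brackets
cat /sys/block/sdb/queue/scheduler
# switch to bfq if it is listed as available
echo bfq > /sys/block/sdb/queue/scheduler
# on some older kernels where only mq-deadline/none show up, booting with scsi_mod.use_blk_mq=0 brings back the legacy cfq scheduler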