LXD Disk I/O Limits Not Applying

I am trying to limit disk operations, but the limits do not seem to be applying. I am using btrfs and have been referring to these instructions.

Steps to reproduce:

$ lxc storage create limited btrfs size=4GB
$ lxc profile copy default limited
$ lxc profile device set limited root limits.max 1MB
$ lxc profile device set limited root limits.read 1MB
$ lxc profile device set limited root limits.write 1MB
$ lxc profile device set limited root pool limited
$ lxc profile show limited
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    limits.max: 1MB
    limits.read: 1MB
    limits.write: 1MB
    path: /
    pool: limited
    type: disk
name: limited
used_by: []
$ lxc launch --profile=limited ubuntu: limited
$ lxc exec limited bash
root@limited:~# dd if=/dev/zero of=/tmp/output bs=8k count=10k; rm -f /tmp/output
10240+0 records in
10240+0 records out
83886080 bytes (84 MB, 80 MiB) copied, 0.0600113 s, 1.4 GB/s

As you can see, the disk is being written to at 1.4 GB/s even though I set the limit to 1MB.

Any thoughts on why the limit is not being applied? Thanks in advance for any suggestions!

(Edit: I had not set the profile to use the “limited” pool, but I still experience the same problem after fixing this. I have updated the commands I used above to reflect this change.)
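
In case it's useful, one way to check whether the limit actually lands on the host (just a sketch, assuming cgroup v1; the exact cgroup path varies with the LXD/LXC version) is to look at the expanded container config and at the blkio throttle file for the container:

$ lxc config show limited --expanded | grep -A8 root:
$ cat /sys/fs/cgroup/blkio/lxc/limited/blkio.throttle.write_bps_device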

Hi @nicolas, following up on our discussion on IRC: when you write zeros to the disk like that, the writes go through the page cache, which is why you see those numbers rather than the limited throughput.
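
To take the page cache out of the picture, you could rerun the test with direct I/O or with a final sync (a sketch; oflag=direct needs a block size the filesystem accepts, and conv=fdatasync only forces the data out at the end):

dd if=/dev/zero of=/tmp/output bs=8k count=10k oflag=direct
dd if=/dev/zero of=/tmp/output bs=8k count=10k conv=fdatasync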

Hey, I’m having a similar issue. I’ve been trying to apply I/O read/write limits to my container, but they don’t seem to take effect on the zfs or dir backends. They worked for me on btrfs and lvm, but are easily bypassed by running commands in Docker or in an LXD child container. Here’s my setup:

lxc profile delete default
lxc storage delete default
# either of the following lines
#lxc storage create default zfs source=/dev/sdb
#lxc storage create default btrfs source=/dev/sdb
#lxc storage create default lvm source=/dev/sdb
#lxc storage create default dir source=/data/lxd #/dev/sdb1 mounted on /data
lxc profile device add default root disk path=/ pool=default
lxc launch ubuntu: test
lxc config set test security.nesting true
lxc config device add test root disk pool=default path=/
lxc config device set test root limits.write 1MB
lxc exec test bash

For measurement I’ve been looking at the output of “iostat -d sdb 1” on the host. For generating the I/O I’ve been planning to use fio, but just running “apt update -y && apt install -y fio” was enough to show that the limits only worked on btrfs and lvm.
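
Something along these lines is what I had in mind for fio (just a sketch; the filename and sizes are arbitrary, and --direct=1 is there to keep the page cache from hiding the throttle):

fio --name=writetest --filename=/root/fio-test --size=64M --bs=8k --rw=write --direct=1 --numjobs=1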

Am I out of luck, are there any options I haven’t tried, or am I doing something wrong?

Hello,
I’m seeing exactly the same issue with ZFS.
The I/O limits can be applied, but they don’t seem to have any effect.
Has anyone got them working as expected?
Thanks in advance!

There are unfortunately quite a few cases where I/O limits don’t apply.

ZFS ignores most of the blkio limits, as the ZFS kernel driver uses its own I/O codepath that bypasses them.

You also cannot apply I/O limits on NVMe drives, as those don’t go through a kernel-side I/O scheduler; the hardware itself handles the queuing.

The other backends should generally behave a bit better, though you may still get into weird cases if you use btrfs with multiple drives or LVM with multiple PVs: some block devices will accept limits, some won’t, and some will only apply them in certain cases.

I suspect the most reliable is probably good old ext4, as that’s what the various kernel developers most likely tested with; again, so long as you’re using an I/O scheduler which supports limits (BFQ/CFQ). The use of SCSI multiqueue may be part of what’s getting in the way of accessing some of those schedulers.
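
If you want to see what you’re actually getting, the scheduler in use for a disk can be checked and changed through sysfs on the host (device names below are just examples; which schedulers are listed depends on your kernel):

cat /sys/block/sdb/queue/scheduler
echo bfq > /sys/block/sdb/queue/scheduler   # or cfq on older non-multiqueue kernels
# NVMe devices typically just show [none] here
cat /sys/block/nvme0n1/queue/scheduler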
