Limiting disk IO on LXD containers

I have tried without success to limit the disk IO of an LXD container named ci with:

lxc config device set ci root limits.read  30MB
lxc config device set ci root limits.write 10MB
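
To verify that the limits were actually recorded on the device, the device configuration can be dumped (just a sanity check; ci and root are the container and device names used above):

lxc config device show ci

The output should list limits.read and limits.write under the root device.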

However, when running e.g. this command on the container:

dd if=/dev/zero of=/root/testfile bs=1G count=10 oflag=direct

the result is the full write throughput of the disk, about 130 MB/s, instead of the expected ~10 MB/s for write operations:

10737418240 bytes (11 GB, 10 GiB) copied, 81,3877 s, 132 MB/s

This is also confirmed by atop running on the host.
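
To check whether the limit ever reached the kernel, one can also look at the container's blkio cgroup on the host, since on cgroup v1 the limits.read/limits.write values end up as blkio throttle settings. This is only a sketch and the exact cgroup path depends on the LXC/LXD version:

cat /sys/fs/cgroup/blkio/lxc/ci/blkio.throttle.read_bps_device
cat /sys/fs/cgroup/blkio/lxc/ci/blkio.throttle.write_bps_device

If these files are empty, LXD never applied a throttle for the container, which would match the unthrottled dd result above.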

How can I effectively limit the disk IO of the container, so that no matter what happens on it, the host disk performance is not overcommitted?

In a second test, I ran the same dd command as above simultaneously on the guest and on the host, but the host was not prioritized either.

Some additional information:

  • Host and guest are Ubuntu 16.04

  • The server has two hard disks with identical partition layouts, joined into RAID 1 arrays

  • On top of the biggest RAID array, the root filesystem sits on an LVM volume group

      root@server ~ # lvs
      LV   VG   Attr       LSize
      root vg0  -wi-ao---- 2,72t
      swap vg0  -wi-ao---- 4,00g
    
  • The LXD storage backend is dir
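
Since I/O limits ultimately have to be attached to a physical block device, it may help to inspect the layering between the root filesystem and the disks. A couple of standard commands for that (device names are from my setup and may differ):

lsblk -s /dev/mapper/vg0-root   # walk from the LV down through the md array to the disks
cat /proc/mdstat                # show the RAID arrays and their member partitions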

I could partition the disks differently if that would help limit the disk IO of the container, e.g. dedicating a RAID array to ZFS for LXD. Would that help?

I have also posted this question on https://serverfault.com/questions/858873/limiting-disk-io-on-lxd-containers

At LXD 2.0: Resource control [4/12] | Stéphane Graber's website, it says (regarding disk limits):

This means that right now LXD only supports disk limits if you’re using the ZFS or btrfs storage backend. It may be possible to implement this feature for LVM too but this depends on the filesystem being used with it and gets tricky when combined with live updates as not all filesystems allow online growth and pretty much none of them allow online shrink.

Thanks a lot @simos for your reply. I read that, but I understood it as referring to disk size quotas, not disk I/O quotas.

I have tested it on the btrfs storage backend and it indeed works there.
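
For anyone who wants to reproduce this, here is a rough sketch of setting up a btrfs-backed container (assuming an LXD version with the storage API; the pool and container names are just examples):

lxc storage create btrfspool btrfs
lxc launch ubuntu:16.04 ci-btrfs -s btrfspool
lxc config device set ci-btrfs root limits.write 10MB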


My understanding is that disk quotas and disk I/O limits are treated in the same way, and therefore both require the ZFS or btrfs storage backends.

They’re not. Disk quotas only really work with btrfs and zfs but I/O quotas work with anything that’s backed by a block device. There are however a lot of restrictions on the block I/O limits due to the way those work in the kernel.

I’ve documented a number of those here:

@stgraber IIUC it is not possible to I/O-limit a software RAID, as it is a virtual device, is it?

In that case, it is very unfortunate that it is not possible to create I/O-limited containers on redundant RAID 1 arrays, which protect against the disk failures that sooner or later always happen. Many hosters only offer servers with 2 disks that are meant to be used as a RAID 1 array.

Yeah, that’s correct. We could add code to track down parent devices for an mdadm-managed RAID, though I can’t guarantee that would lead to particularly useful limits since we can only apply them to a whole device, not at the partition level and not at the filesystem level…

Feel free to open an issue at https://github.com/lxc/lxd/issues to have us add extra code to track down backing devices for mdadm RAID. It “should” work mostly okay if you’re using that with the dir backend, though the same limit would be applied to all underlying devices which may allow for more throughput than the limit says. If combining that with ZFS, btrfs or LVM, then that’s an extra layer of indirection which makes the limit even more fuzzy.
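
For reference, the backing devices of an mdadm array are easy to enumerate from userspace, which is roughly what such code would have to do (a sketch; /dev/md0 is just an example device name):

cat /proc/mdstat            # overview of all md arrays and their members
mdadm --detail /dev/md0     # lists the member partitions, e.g. /dev/sda2 and /dev/sdb2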

Added issue on github: https://github.com/lxc/lxd/issues/3515

I was using the version of LXD available in the default repositories of Ubuntu 16.04, which was 2.0.9. After upgrading to 2.15 with

sudo add-apt-repository ppa:ubuntu-lxc/lxd-stable 
sudo apt-get update
sudo apt-get install lxd

the limits work on my setup, without even needing to restart the host or the guest.
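
For anyone following along, the LXD version actually in use can be confirmed with:

lxd --version
apt-cache policy lxd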
