User quota support on ext4/lvm root volume

I’m trying to get user quotas working on an ext4/LVM root volume inside a container.

Is this supported? I can’t find much information on it.

What I have done so far:

lxc storage volume set container1 container/container1 block.mount_options=usrquota,grpquota
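
To double-check that the option was stored, I believe the matching get command should show it (pool and volume names as above):

lxc storage volume get container1 container/container1 block.mount_options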

After setting that option, the quota tools complain that they can’t access the block device.
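
For reference, this is roughly what I ran inside the container to enable the quotas (assuming the quota tools package is installed; exact flags may vary):

quotacheck -cugm /
quotaon -v /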

When exposing the block device with:

lxc config device add container1 container/container1 unix-block path=/dev/container1/containers_container1

it seems to work, but the quota data is collected against unshifted UIDs:

 repquota -a
*** Report for user quotas on device /dev/container1/containers_container1
Block grace time: 7days; Inode grace time: 7days
                        Block limits                File limits
User            used    soft    hard  grace    used  soft  hard  grace
----------------------------------------------------------------------
root      --      44       0       0              7     0     0       
#1000000  -- 2578924       0       0          65058     0     0       
#1000997  --       4       0       0              1     0     0       
#1000998  --       4       0       0              1     0     0       

So I’m wondering whether it’s just not supported or whether I’m doing it wrong.

Kind regards, J

Looks like this isn’t supported by the ext4 kernel driver.

So, it is supported… just not with the kernel I run? I run 5.10.179-168.710.amzn2.x86_64.

Or are ext4 user quotas not supported by LXD/LXC at all?

Is there any filesystem on top of an LVM volume that supports user quotas when used as the container root volume?

No, what I meant is that user quota reporting from inside a user namespace just doesn’t appear to be properly supported by the Linux kernel. I don’t recall seeing any recent changes that would alter that, though testing a recent 6.5 kernel would probably be a good idea.

Most likely the ioctl used to retrieve the quotas just returns the uid/gid straight from the filesystem itself, so they appear unshifted, as you’re seeing.
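
As a rough illustration, with the default map the host-side IDs are the container IDs plus the base offset recorded in volatile.idmap.base (the values shown here are just the usual defaults):

lxc config get container1 volatile.idmap.base
# -> 1000000, so #1000000 in the repquota output is the container's root (uid 0)
# and #1000997 is uid 997 inside the container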

Note, though, that if you run a much more recent kernel (6.1 or higher, I’d say), LXD/Incus will be able to use VFS idmap instead of manual uid/gid shifting, which would likely make the quota data line up too.
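
A quick way to check whether your kernel and LXD/Incus combination supports idmapped mounts (the exact output format may differ between versions):

lxc info | grep idmapped_mounts
# idmapped_mounts: "true"
# (for Incus: incus info | grep idmapped_mounts)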


Thank you for your feedback.

I will have a look at VFS idmap. Will it handle/convert a container filesystem that already has shifted UIDs?

Another option for now would be to run the container without shifted UIDs?

I found this option:

LXD_SHIFTFS_DISABLE=1 in your LXD daemon’s environment to disable shiftfs

I’m not sure whether I can disable it per container, which would let me sync to a new container with shiftfs disabled.

I’m also not sure if/how I can revert the shifted state of the existing filesystem.

LXD_SHIFTFS_DISABLE=1 would actually do the opposite of what you want.
shiftfs, and now VFS idmap, allow for in-kernel shifting of the uid/gid rather than having them rewritten on the filesystem.
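
One way to see which mode a given container ended up in, if I recall the key’s meaning correctly, is to check the idmap that was last written to the filesystem:

lxc config get container1 volatile.last_state.idmap
# "[]" -> nothing was rewritten on disk (shiftfs or VFS idmap in use)
# a serialized map -> ownership was rewritten on the filesystem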

As for converting an existing container, it can be done with this trick:

  • incus stop NAME
  • incus config set NAME security.privileged=true
  • incus start NAME
  • incus stop NAME
  • incus config unset NAME security.privileged
  • incus start NAME

(For LXD, replace incus with lxc)
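
The same steps as a small script, with comments on why the round-trip works (this reflects my understanding of the mechanism; adjust the instance name):

NAME=container1
incus stop "$NAME"
incus config set "$NAME" security.privileged=true
incus start "$NAME"    # the privileged start rewrites on-disk ownership back to unshifted IDs
incus stop "$NAME"
incus config unset "$NAME" security.privileged
incus start "$NAME"    # the unprivileged start can now use VFS idmap instead of re-shifting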


@stgraber, your feedback was very useful.

Got it to work in two scenarios:

  • kernel 5.10 with a privileged container
  • kernel 6.1 with an unprivileged container

The only thing that still bothers me is having to expose the root device:

lxc config device add container1 container/container1 unix-block path=/dev/container1/containers_container1

Is there a nicer way to do this?