Switching from quota to refquota

I have a ZFS backend and want to switch from using quotas to refquotas for root volumes. I did lxc storage set mycluster:myvolume volume.zfs.use_refquota=true, but this only applies to new containers.

How do I go about switching already existing ones? Via zfs directly, or using lxc config device override? Most of my containers inherit the size from the default volume size; what are the implications of overriding them all vs. using zfs set? Does anything else happen internally when overriding the size via lxc, other than zfs set refquota? Any implications during backup/restore?

Does lxc storage volume set mycluster:myvolume zfs.use_refquota=true work?

Basically, volume.zfs.use_refquota applies to a storage pool and gets copied as zfs.use_refquota to new volumes when they are created, but you should be able to apply zfs.use_refquota directly to existing volumes.
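A quick sketch of the difference between the two keys (pool and volume names here are hypothetical placeholders):

```shell
# Pool-level default: volume.zfs.use_refquota is stored on the pool and
# copied to each new volume at creation time; existing volumes are untouched.
lxc storage set mypool volume.zfs.use_refquota=true

# Per-volume key: zfs.use_refquota applies to one existing volume immediately.
lxc storage volume set mypool myvolume zfs.use_refquota=true
```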

Yeah, that would be ideal, but nope, it says:

# lxc storage volume list ee | fgrep aeeint
| container            | aeeint                                                           |             | filesystem   | 1       | lxd13    |
# lxc storage volume set ee aeeint zfs.use_refquota=true --target lxd13
Error: Storage pool volume not found

I guess it only works on “real” volumes, not on container roots.

Try:

lxc storage volume set ee container/aeeint zfs.use_refquota=true --target lxd13
[root@lxd13 ~]# zfs get quota,refquota lxd13/lxd/containers/aeeint
NAME                         PROPERTY  VALUE     SOURCE
lxd13/lxd/containers/aeeint  quota     none      local
lxd13/lxd/containers/aeeint  refquota  15G       local

Yup, that worked perfectly, thanks. --target is actually not necessary; unlike with custom volumes, the cluster seems to know which member a container volume is on.
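To migrate all existing containers in one go, something like this loop should work (an untested sketch, using the ee pool name from this thread; adjust the pool, and add --project if your containers live in a non-default project):

```shell
# Apply zfs.use_refquota to the root volume of every existing container.
# `lxc list -c n --format csv` prints one container name per line.
for c in $(lxc list -c n --format csv); do
    lxc storage volume set ee "container/$c" zfs.use_refquota=true
done
```

After the loop, zfs get quota,refquota on each dataset should show refquota set and quota released, as in the transcript above.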

Unlike some other commands, e.g. info, which not only doesn’t seem to know it, but also refuses to honour --target:

$ lxc storage volume info ee container/aeeint
Error: Failed to run: zfs get -H -p -o value referenced lxd10/lxd/containers/aeeint: cannot open 'lxd10/lxd/containers/aeeint': dataset does not exist
~$ lxc storage volume info ee container/aeeint --target lxd12
Error: Failed to run: zfs get -H -p -o value referenced lxd10/lxd/containers/aeeint: cannot open 'lxd10/lxd/containers/aeeint': dataset does not exist

Ah yeah, that makes sense, --target isn’t relevant for instances as they can only ever be on one server anyway.

I’ve sent Bugfixes by stgraber · Pull Request #10257 · lxc/lxd · GitHub, which contains a fix for that.

one more :wink:

--project doesn’t seem to be honored by volume set:

~$ lxc storage volume list ee --project test |fgrep container | head -1
| container | clp                                                              |             | filesystem   | 1       | lxd2     |
~$ lxc storage volume set ee container/clp zfs.use_refquota=true --project test
Error: Storage pool volume not found
Exit Code: 1
~

Not having any luck reproducing it here:

root@v1:~# lxc cluster list
+------+-----------------------------+------------------+--------------+----------------+-------------+--------+-------------------+
| NAME |             URL             |      ROLES       | ARCHITECTURE | FAILURE DOMAIN | DESCRIPTION | STATE  |      MESSAGE      |
+------+-----------------------------+------------------+--------------+----------------+-------------+--------+-------------------+
| v1   | https://172.17.250.50:8443  | database-leader  | x86_64       | default        |             | ONLINE | Fully operational |
|      |                             | database         |              |                |             |        |                   |
+------+-----------------------------+------------------+--------------+----------------+-------------+--------+-------------------+
| v2   | https://172.17.250.117:8443 | database-standby | x86_64       | default        |             | ONLINE | Fully operational |
+------+-----------------------------+------------------+--------------+----------------+-------------+--------+-------------------+
root@v1:~# lxc project create foo
Project foo created
root@v1:~# lxc profile show default | lxc profile edit default --project foo
root@v1:~# lxc launch images:alpine/edge a1 --project foo
Creating a1
Starting a1                                 
root@v1:~# lxc storage volume set local container/a1 zfs.use_refquota=true --project foo
root@v1:~# 

Tried the lxc storage volume set from both cluster servers too in case it’s forwarding related.

Exactly the same sequence of commands fails for me:

[root@lxd1 ~]# lxc profile show default | lxc profile edit default --project foo
[root@lxd1 ~]# lxc launch images:alpine/edge a1 --project foo
Creating a1
Starting a1
[root@lxd1 ~]# lxc storage volume set local container/a1 zfs.use_refquota=true --project foo
Error: Storage pool not found

My LXD is from snap, maybe you run something else? Do you have a cluster? Or something in your storage options or profile? Mine are pretty basic:

[root@lxd1 ~]# snap list lxd
Name  Version        Rev    Tracking       Publisher   Notes
lxd   5.0.0-e478009  22894  latest/stable  canonical✓  in-cohort
[root@lxd1 ~]# lxc profile show default --project foo
config:
  limits.cpu: "8"
  limits.memory: 8192MB
  limits.processes: "17384"
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: ee
    type: disk
name: default
used_by:
- /1.0/instances/a1?project=foo
[root@lxd1 ~]# lxc storage show ee
config:
  volume.size: 15GiB
  volume.zfs.use_refquota: "true"
description: ""
name: ee
driver: zfs
used_by:
- /1.0/images/2298ce0722c2011ec1ff365aa4e056ef01fa67cbdc8f7bceb12fecc064d35b77?target=lxd1
...
status: Created
locations:
- lxd2
- lxd1
...