LXD 5.11 has been released

Why do you expose load averages here but nowhere else? Will they now be exposed as part of the /1.0/ or /1.0/resources? Is there going to be a nice way to get a summary like this for standalone hosts?

Load average and uptime could probably be added to the system part of the resources API. The rest of the data is already available and is just aggregated here, as we needed it aggregated for the scriptlet feature.

Technically this is also available on standalone systems if you hit GET /1.0/cluster/members/none/state
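
For example, with the built-in raw API client:

lxc query /1.0/cluster/members/none/state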

And every web dashboard developer got mildly excited at the prospect of aggregated data.

If we can save 2+ API calls by hitting a single endpoint, it keeps dashboards competitive.

Do love a “technically” :smile:

LXD 5.11 is currently available to snap users in the latest/candidate channel and will be rolled out to stable early next week. Clients have been pushed to both Homebrew and Chocolatey and are currently going through their publishing processes there.

The usual release live stream will be at 2pm US eastern time on Monday:
https://www.youtube.com/watch?v=iMLiK1fX4I0

Regarding ZFS block mode, can it be enabled globally on an existing storage pool and used by default for new containers’ root disks?

I tried to set volume.zfs.block_mode on my ZFS storage pool, but when I want to create a container I get the following message:

lxc launch ubuntu/22.04 c1 -v
Creating c1
Error: Failed creating instance from image: Could not locate a zvol for zfsp1/containers/c1
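
For reference, the pool key was set with something along these lines (a sketch; zfsp1 is the existing ZFS pool):

lxc storage set zfsp1 volume.zfs.block_mode=true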

The image used in the example is one of my custom images, already present on the pool. When I looked at the ZFS pool, it seems an ext4 ZVOL image is present (corresponding to my custom image hash):

zfs list
NAME                                                                                    USED  AVAIL     REFER  MOUNTPOINT
zfsp1                                                                                  1.08G   921G       96K  legacy
zfsp1/buckets                                                                            96K   921G       96K  legacy
zfsp1/containers                                                                        487M   921G       96K  legacy
zfsp1/containers/core_db                                                                263M  9.06G      571M  legacy
zfsp1/containers/core_ingress                                                           136M  9.18G      446M  legacy
zfsp1/containers/core_registry                                                         87.4M  9.23G      397M  legacy
zfsp1/custom                                                                             96K   921G       96K  legacy
zfsp1/deleted                                                                           328M   921G       96K  legacy
zfsp1/deleted/buckets                                                                    96K   921G       96K  legacy
zfsp1/deleted/containers                                                                 96K   921G       96K  legacy
zfsp1/deleted/custom                                                                     96K   921G       96K  legacy
zfsp1/deleted/images                                                                    328M   921G       96K  legacy
zfsp1/deleted/images/d153d54cc09a844c607b9f935365937b3c3fbb5188b301bb3d3524d5e8a42243   327M   921G      327M  legacy
zfsp1/deleted/virtual-machines                                                           96K   921G       96K  legacy
zfsp1/images                                                                            276M   921G       96K  legacy
zfsp1/images/d153d54cc09a844c607b9f935365937b3c3fbb5188b301bb3d3524d5e8a42243_ext4      276M   921G      276M  -
zfsp1/virtual-machines                                                                   96K   921G       96K  legacy

Did I miss something? Thanks :slight_smile:


@monstermunchkin please can you advise on this one? Thanks

I recreated my storage pool from scratch (through LXD) and it seemed to work as expected at first:

root@c1:~# cat /proc/mounts
/dev/zvol/zfsp1/containers/c1 / ext4 rw,relatime,idmapped,discard,stripe=2 0 0

I tried enabling/disabling the option in the storage pool configuration and it worked as expected: I can have classic datasets and ZVOL-backed containers side by side.

So I dug a bit, and the error shows up only when I launch a container without specifying on the CLI the storage pool I want to use:

$ lxc launch ubuntu/22.04 c1 -s zfsp1
Creating c1
Starting c1

$ lxc shell c1
root@c1:~# cat /proc/mounts
/dev/zvol/zfsp1/containers/c1 / ext4 rw,relatime,idmapped,discard,stripe=8 0 0
root@c1:~# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/zvol/zfsp1/containers/c1  9.8G  533M  8.8G   6% /

$ lxc launch ubuntu/22.04 c2
Creating c2
Error: Failed creating instance from image: Could not locate a zvol for zfsp1/containers/c2

I usually rely on a profile with the following configuration for containers:

config:
  limits.kernel.nofile: "65535"
  limits.processes: "1024"
  security.devlxd: "false"
  security.idmap.isolated: "true"
  security.syscalls.intercept.sysinfo: "true"
description: Default profile for containers
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: zfsp1
    size: 10GB
    type: disk
name: default
used_by:
- /1.0/instances/c1

The culprit seems to be related to the size attribute of the root disk: once I delete it, I can launch the container without any problem. I played with it a bit more, and as soon as I re-enable the size/quota I get this error:

Config parsing error: The following instances failed to update (profile change still saved):
 - Project: default, Instance: c2: Failed to update device "root": Failed to run: zfs set quota=none zfsp1/containers/c2: exit status 1 (cannot set property for 'zfsp1/containers/c2': 'quota' does not apply to datasets of this type)

I tried to change the quota on the CLI, same error:

lxc config device override c2 root size=20GB
Error: Failed to update device "root": Failed to run: zfs set quota=none zfsp1/containers/c2: exit status 1 (cannot set property for 'zfsp1/containers/c2': 'quota' does not apply to datasets of this type)

Something seems to be borked with quotas and ZVOLs (I don’t see why, because it seems to work nicely with VMs).
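
In case it helps anyone hitting the same thing, the workaround for now is to drop the size key from the profile’s root device, for example (a sketch; “default” is the profile name and I’m assuming lxc profile device unset is available in this release):

lxc profile device unset default root size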

I’m taking a look at this now and have confirmed the issues:

lxc launch images:ubuntu/jammy c2 -s zfs -d root,size=12GiB
Creating c2
Error: Failed instance creation: Failed creating instance from image: Could not locate a zvol for zfs/containers/c2
lxc launch images:ubuntu/jammy c2 -s zfs 
Creating c2
Starting c2
lxc launch images:ubuntu/jammy c3 -s zfs -d root,size=12GiB
Creating c3
Error: Failed instance creation: Failed creating instance from image: Could not locate a zvol for zfs/containers/c3

lxc launch images:ubuntu/jammy c3 -s zfs -d root,size=10GiB
lxc config device set c3 root size=12GiB
Error: Failed to update device "root": Failed to run: zfs set quota=none zfs/containers/c3: exit status 1 (cannot set property for 'zfs/containers/c3': 'quota' does not apply to datasets of this type)

So it looks like there is a general problem with sizing/resizing those types of volumes.


Thanks @tomp! Should I open an issue on GitHub about this?

Yes please, over at https://github.com/lxc/lxd/issues. Thanks!

This should fix it


Done: https://github.com/lxc/lxd/issues/11396

I can confirm this works better now with LXD 5.12 (container creation with a defined size, and also resizing afterwards)! Thanks for your work :slightly_smiling_face:
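
Roughly what I tested (illustrative names, following the same pattern as the reproduction above):

lxc launch ubuntu/22.04 c4 -s zfsp1 -d root,size=12GiB
lxc config device set c4 root size=20GiB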


Hi,

I think that documentation about how to use zvols is needed.
If I create a new storage pool with the block_mode parameter, I can launch a container on a zvol using this pool.

But if I create a volume with this parameter in an existing pool, how can I launch a container on this volume?

Thanks

You can’t.

Currently the two options are:

  • Set the config key on the pool, then create instances and custom volumes with it enabled
  • Create custom volumes only with it set

Custom volumes cannot be used as instance storage; they’re just additional shared storage you attach to instances. And there’s no way to tell LXD to enable ZFS block mode just for the one instance you’re creating right now.
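
For example, the custom-volume route looks roughly like this (a sketch with illustrative names; at the volume level the key drops the volume. prefix used for the pool-wide default):

lxc storage volume create zfsp1 blockvol zfs.block_mode=true
lxc storage volume attach zfsp1 blockvol c1 /mnt/blockvol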

We have a plan to add config options on the disk device down the line to allow that, but it’s not there yet.

I understand.

But if I change a current pool to use zvols, how does that affect the existing dataset-based containers in this pool? And will new containers be exclusively zvol-based?

Regards

Update for the documentation: I created an additional storage pool using an existing ZFS pool/dataset and set the zvol config on it:

lxc storage create zvol zfs source=rpool/zvol volume.zfs.block_mode=yes size="20GB" volume.block.filesystem="xfs" volume.block.mount_options="usrquota"

lxc launch images:debian/11/amd64 debian11zvol -s zvol

Best Regards

This is the only place I found this information. I think it should be somewhere in the documentation. I planned to use a zvol for a single instance, but now I’m not even sure if it’s possible. Does it mean that we can set this config key, create an instance with a zvol, and then unset the key, to end up with a single zvol-backed instance? Or do we have to commit to having all future instances zvol-backed? How does it affect existing instances? There are so many questions about this feature, and the answers are nowhere to be found. It would be great if someone could answer them. :slight_smile:

There’s some more info in the spec, and in the LXD documentation (the “Linux Containers - LXD” pages, which have since been moved to Canonical).

At the moment it’s only enabled at the pool level for new instances, or on a per-custom-volume basis.

We will be adding the ability to control it on a per-instance basis soon, so keep an eye out for that in future releases.


You can do that for a single instance by creating a zvol-enabled storage pool on an existing dataset, as I described in a message above:

lxc storage create zvol zfs source=rpool/zvol volume.zfs.block_mode=yes size="20GB" volume.block.filesystem="xfs" volume.block.mount_options="usrquota"

lxc launch images:debian/11/amd64 debian11zvol -s zvol