LXD 5.11 has been released


The LXD team is very excited to announce the release of LXD 5.11!

This is a pretty packed release with a couple of big highlight features, namely the instance placement scriptlet and ZFS zvol support, along with quite a few smaller features, performance improvements and bugfixes.


New features and highlights

Instance placement scriptlet

A common request from cluster users has been to provide a better alternative to LXD’s default placement algorithm.

By default, LXD quite simply looks at all suitable cluster members and selects whichever is currently hosting the fewest instances. It doesn’t care whether those instances are containers or VMs, whether they’re currently running or not, how many resources each member has, or how much of those resources is in use at the time.

As there are about as many opinions on placement as there are clusters out there, we decided to go with something that’s very flexible, yet fast and reliable.
Instead of a potentially quite complex internal scheduler which may not fit everyone’s needs, or an external scheduler which can go down, we went with something that’s new for LXD.

Enter scriptlets. Scriptlets, as the name suggests, are little scripts.
In LXD’s case, we’re specifically using Starlark, a very limited dialect of Python. Those scripts are loaded directly into the application, in this case LXD, and can be provided with data and functions to call.

Quite importantly, those scriptlets cannot access any local data, hit the network or perform complex, time-consuming actions (no unbounded loops and the like).

In LXD’s case, we’re introducing a new config key, instances.placement.scriptlet, which can be set to a Starlark scriptlet. Whenever LXD needs to place an instance, it calls that scriptlet, providing it with the list of candidate cluster members and the instance creation request itself, and exposing some functions that can be used to retrieve additional information about those cluster members.
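
To give an idea of the shape this takes, here is a minimal sketch (the exact functions and fields are covered in the specification and documentation below; the fewest-processes heuristic is purely for illustration):

def instance_placement(request, candidate_members):
    # Pick the candidate member with the fewest running processes.
    target = None
    fewest = -1
    for member in candidate_members:
        state = get_cluster_member_state(member.server_name)
        if fewest == -1 or state.sysinfo.processes < fewest:
            fewest = state.sysinfo.processes
            target = member.server_name

    # Tell LXD where to place the instance.
    set_target(target)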

Specification: [LXD] Scriptlet based instance placement scheduler
Documentation: https://linuxcontainers.org/lxd/docs/latest/explanation/clustering/#instance-placement-scriptlet

Block storage mode on ZFS pools

Something that was requested over 5 years ago, but recently caught our attention again thanks to Docker and Android, is the ability to use ZFS as a storage backend while storing data not in ZFS datasets but in ZFS volumes (zvols) instead.

This leads to an experience that’s very similar to LVM or Ceph but on the very capable ZFS backend. It can also be used to mix and match, having some specific custom volumes use zvol while the rest of the data uses datasets.

This is interesting for both Android containers and Docker storage as in both cases, ZFS datasets aren’t compatible with the way they expect to store data. With this feature it’s now possible to flip the storage mode to zvol for Android containers, getting them running on the expected ext4 filesystem. For Docker, one can now carve out a custom storage volume that’s formatted as ext4 for use as /var/lib/docker, allowing the use of the much faster overlay2 storage driver instead of vfs.
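
For the Docker case, something along these lines does the trick (a sketch; the pool name default, volume name docker-data and instance name c1 are made up for the example):

# Create a custom volume backed by a zvol and formatted as ext4.
lxc storage volume create default docker-data zfs.block_mode=true block.filesystem=ext4

# Attach it to the instance running Docker as /var/lib/docker.
lxc storage volume attach default docker-data c1 /var/lib/docker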

Specification: [LXD] ZFS block mode
Documentation: https://linuxcontainers.org/lxd/docs/latest/reference/storage_zfs/#storage-volume-configuration

lxc cluster info command

As part of the placement scriptlet work described above, one thing we needed was an easy way to retrieve load information about a cluster member.

Rather than only having this available as an internal function, we decided to also make it available over the API and provide a CLI command for it.

stgraber@dakara:~$ lxc cluster info s-shf-cluster:athos
sysinfo:
  uptime: 30219
  load_averages:
  - 12.13
  - 9.77
  - 9.62
  total_ram: 676309450752
  free_ram: 319631790080
  shared_ram: 123550392320
  buffered_ram: 32219648000
  total_swap: 8589926400
  free_swap: 8589926400
  processes: 4330
storage_pools:
  local:
    space:
      used: 23882062033615
      total: 30893314601679
  remote:
    space:
      used: 2452640672159
      total: 10217284610463

It provides pretty raw data about the system load, memory and disk usage, and the number of processes.
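
The same information is exposed over the REST API at GET /1.0/cluster/members/<name>/state, so the output above can also be retrieved raw, for example:

lxc query /1.0/cluster/members/athos/state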

Support for attaching managed physical networks to instances

Physical managed LXD networks were first introduced to serve as uplinks for OVN networks. But seeing how LXD also supports attaching instances directly to physical network interfaces, there was no good reason not to allow their use through the network property of NIC devices.
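
In practice, this means something like the following now works (a sketch; the parent interface eno2 and instance name c1 are made up for the example):

# Create a managed physical network, then attach an instance NIC to it.
lxc network create physnet --type=physical parent=eno2
lxc config device add c1 eth1 nic network=physnet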

Documentation: https://linuxcontainers.org/lxd/docs/latest/reference/devices_nic/#nic-physical

Reworked image documentation

Continuing with our documentation rework, this release has the images documentation re-organized to match the rest of our docs.


Complete changelog

Here is a complete list of all changes in this release:

Full commit list
  • test: Fix race against snapshot prune task in test_storage_volume_snapshots
  • lxd/db/storage/pools: Adds GetStoragePools in the style of GetNetworkForwards
  • lxd/db/storage/pools: Update GetStoragePoolInAnyState and GetStoragePool to use GetStoragePools
  • lxd/cluster/resolve: Update ResolveTarget to avoid starting transaction if target is local server
  • lxd/state/state: Adds StartTime to State
  • lxd/response: Updates forwardedResponseToNode and forwardedResponseIfTargetIsRemote to accept state
  • lxd/api/1.0: forwardedResponseIfTargetIsRemote usage
  • lxd/api/metrics: forwardedResponseIfTargetIsRemote usage
  • lxd/api/cluster: clusterNodeStatePost usage of forwardedResponseToNode
  • lxd/network/forwards: forwardedResponseIfTargetIsRemote usage
  • lxd/network/load/balancers: forwardedResponseIfTargetIsRemote usage
  • lxd/network/peer: forwardedResponseIfTargetIsRemote usage
  • lxd/networks: forwardedResponseIfTargetIsRemote usage
  • lxd/resources: forwardedResponseIfTargetIsRemote usage
  • lxd/storage/buckets: forwardedResponseIfTargetIsRemote usage
  • lxd/storage/pools: forwardedResponseIfTargetIsRemote usage
  • lxd/storage/volumes: forwardedResponseIfTargetIsRemote usage
  • lxd/storage/volumes/backup: forwardedResponseIfTargetIsRemote usage
  • lxd/storage/volumes/snapshots: forwardedResponseIfTargetIsRemote usage
  • lxd/acme: Reduce calls to d.State()
  • lxd/api: Reduce calls to d.State()
  • lxd/api/1.0: Reduce calls to d.State()
  • lxd/api/cluster: Reduce calls to d.State()
  • lxd/api/internal: Reduce calls to d.State()
  • lxd/api/internal/recover: Reduce calls to d.State()
  • lxd/api/metrics: Reduce calls to d.State()
  • lxd/api/project: Reduce calls to d.State()
  • lxd/api/vsock: Reduce calls to d.State()
  • lxd/backup: Reduce calls to d.State()
  • lxd/certificates: Reduce calls to d.State()
  • lxd/daemon: Adds StartTime to State
  • lxd/daemon: Reduce calls to d.State()
  • lxd/daemon/images: Decouple imageOperationLock and ImageDownload from daemon
  • lxd/devlxd: Assign d.State() to own variable
  • lxd/events: Assign d.State() to own variable
  • lxd: Update image functions to accept state.State rather than Daemon
  • lxd/network/forwards: Reduce calls to d.State()
  • lxd/network/load/balancers: Reduce calls to d.State()
  • lxd/network/peer: Reduce calls to d.State()
  • lxd/networks: Reduce calls to d.State()
  • lxd/resources: Reduce calls to d.State()
  • lxd/storage/buckets: Reduce calls to d.State()
  • lxd/storage/pools: Reduce calls to d.State()
  • lxd/storage/volume: Reduce calls to d.State()
  • lxd/storage/volumes/backups: Reduce calls to d.State()
  • lxd/storage/volumes/snapshot: Reduce calls to d.State()
  • doc/ref/devices_nic: Add queue.tx.length
  • lxd: liblxc.HasAPIExtension usage
  • lxd/instance/drivers/driver/lxc: Simplify Shutdown timeout handling
  • api: Adds cluster_member_state extension
  • shared/api: Adds StoragePoolState, ClusterMemberSysInfo and ClusterMemberState struct types
  • lxd/db/storage/pools: Exports StoragePoolCreated and StoragePoolPending constants
  • lxd/db: Storage pool state constant usage
  • lxd/storage/pool/load: Adds LoadByRecord and updates LoadByName to use it
  • lxd/cluster/member/state: Adds MemberState function
  • lxd/api/cluster: Adds clusterNodeStateGet handler
  • client: Adds GetClusterMemberState function
  • doc/rest-api: Refresh swagger YAML
  • lxc/cluster: Adds lxc cluster <member> info command
  • i18n: Update translation templates
  • test: Add basic test for lxc cluster info <member> command
  • lxd/db/images: Use explicit type for each argument in getNodesByImageFingerprint
  • lxd/db/instances: Remove use of c.GetNodeOfflineThreshold in GetProjectAndInstanceNamesByNodeAddress
  • lxd/instances/get: tx.GetProjectAndInstanceNamesByNodeAddress usage
  • lxd/operations: Remove use of tx.GetNodeOfflineThreshold
  • lxd/instances/get: Pass request context to transaction in doInstancesGet
  • lxd/instances/get: Don’t pass daemon to doInstancesGet
  • lxd/api: Auto-forward to the UI if accessed from browser
  • lxd/api: Handle UI sub-paths
  • lxd/instances_post: Fix copying profiles during instance copy
  • lxc/migrate: Make live migration error message more helpful
  • doc/clustering: add link to YouTube video about cluster groups
  • Add mipsle and mips64le architecture aliases
  • lxd/cluster/member/state: Fix TotalRAM field in MemberState output
  • lxd/instance/drivers/driver/lxc: Comment typo and spacing
  • shared/util: Update RunCommandCLocale to pass LANGUAGE=en env var
  • doc/images: move out unrelated docs
  • test: Make unix device checks less flaky
  • lxd/api/cluster: Adds evacuateHostShutdownDefaultTimeout constant
  • lxd/api/cluster: Adds instance already stopped detection to restoreClusterMember
  • lxd/instance/drivers/driver/lxc: Switch log level to warn in networkState
  • lxd/instance/operationlock: Add support for never expiring timeouts
  • lxd/instance/drivers/driver/common: Adds more debug logging to onStopOperationSetup
  • lxd/instance/drivers/driver/lxc: Don’t wait in liblxc for container to stop in Shutdown
  • lxd/instance/drivers/driver/lxc: Extend operation lock in onStopNS and onStop
  • lxd/instance/drivers/driver/lxc: Don’t report intermediate state during ongoing operations
  • lxd/instance/drivers/driver/qemu: Avoid extra go routine to keep operationlock alive in Shutdown
  • lxd/instance/drivers/driver/qemu: Use constants in statusCode
  • lxc/action: Clarify --timeout and --force flags to lxc stop command
  • i18n: Update translation templates
  • drivers: Unify allowed filesystems variable
  • lxd/migration: Remove leftover stats definitions
  • doc/clustering: document lxc cluster info
  • lxd/instance/drivers/driver/qemu: Add QEMUDefaultCPUCores and export QEMUDefaultMemSize constants
  • lxd/storage: Exports DefaultBlockSize constant
  • lxd/api/cluster: Consistent import of github.com/lxc/lxd/lxd/instance/drivers
  • api: Adds instances_placement_scriptlet extension
  • gomod: Adds go.starlark.net/starlark
  • shared/api/scriptlet: Adds InstanceResources type
  • lxd/scriptlet/starlark: Adds StarlarkMarshal function
  • lxd/scriptlet: Adds instance placement scriptlet functions
  • lxd/cluster/config/config: Adds instances.placement.scriptlet global config option
  • lxd/cluster/config/config: Adds InstancesPlacementScriptlet function
  • lxd/api/1.0: Adds handler for loading scriptlet when instances.placement.scriptlet changes
  • lxd/daemon: Load instance placement scriptlet at start up
  • lxd/instances/post: Integrate instance placement scriptlet into instancesPost
  • lxd/api/cluster: Improve errors in evacuateClusterMember
  • lxd/api/cluster: Restructure evacuateClusterMember to accommodate future instance placement scriptlet integration
  • lxd/api/cluster: Integrate instance placement scriptlet into evacuateClusterMember
  • test: Adds instance placement scriptlet tests
  • doc: Rename clustering-assignment to clustering-instance-placement
  • doc: Adds instance placement scriptlet documentation
  • lxd/instance/drivers: Add location to instance creation event.
  • lxd/ip/link: Adds SetAllMulticast function
  • lxd/device/nic/macvlan: Enable all multicast processing on VM macvtap host interface
  • lxd/storage: Add location to local volume creation event.
  • lxd/storage/filesystem: Moves ResolveMountOptions to filesystem package
  • lxd/storage/drivers: filesystem.ResolveMountOptions usage
  • lxd/storage/drivers/utils: Removes unused resolveMountOptions function
  • lxd/device/disk: Fix incorrect splitting of raw.mount.options in createDevice
  • lxd/device/device/utils/disk: Removes readonly argument from DiskMount
  • lxd/device/device/utils/unix: DiskMount usage
  • lxd/instance/drivers/driver/qemu: device.DiskMount usage in Start
  • lxd/device/disk: DiskMount usage in createDevice
  • client: add TransportWrapper to ConnectionArgs
  • test: Update disk device tests to check for mount flag support in raw.mount.options
  • client: fix cast to http.Transport when using TransportWrapper
  • doc/server: bring back doc for net.core.bpf_jit_limit
  • doc/images: draft new structure for the images section
  • doc/images: move content
  • doc/metrics: greatly simplify the cluster’s scrape_configs
  • lxd/db/node: s/containers/instances/
  • lxd/db/node/test: s/containers/instances/
  • lxd/db/node/test: s/container/instance/ in comments
  • lxd/backup : update the error message when “backup/index.yaml” can’t be found
  • drivers: Add source indicator to Volume
  • storage: Add hasSource option to VolumeDBCreate
  • storage: Update VolumeDBCreate usage
  • lxd/storage/drivers/driver/btrfs/utils: Only check for minimum number of columns in btrfs qgroup show command
  • doc/storage: explain how optimized instance transfer works
  • doc/api: add related link to blog post about the LXD API
  • doc/index: give a better introduction to the online demo
  • doc/architectures: clean up architecture reference
  • lxd: allow IdmappedStorage() to detect filesystems
  • shared/api/scriptlet/instance: Adds InstancePlacement type and reason constants
  • lxd/scriptlet/instance/placement: Updates InstancePlacementRun to accept scriptlet.InstancePlacement argument
  • lxd/scriptlet/instance/placement: Removes unused constants
  • lxd/api/cluster: Avoid calling inst.Project() multiple times
  • lxd/api/cluster: Pass apiScriptlet.InstancePlacement to InstancePlacementRun
  • lxd/api/cluster: instProject.Name usage
  • lxd/instances/post: Pass apiScriptlet.InstancePlacement to InstancePlacementRun
  • test: Updates instance_placement tests for new function signature
  • doc: Updates instance placement scriptlet documentation for new placement function signature
  • lxd: Scriptlet API constant usage
  • lxd/cluster/gateway: Removes unimplemented Snapshot function
  • lxd/api/internal: Avoid calling d.gateway.Snapshot in internalRAFTSnapshot
  • lxd/scriptlet/instance/placement: Tweak error prefix when scriptlet fails
  • doc: use “entity” instead of “object” to avoid confusion
  • doc: use “directory” instead of “folder” for consistency reasons
  • drivers: Add Volume.Clone function
  • test: Updates instance placement tests to show usage of fail() call to end scriptlet with error
  • storage: Use Volume.Clone() in Pool.GetVolume()
  • test: Updates instance placement tests to check for config in embedded struct
  • lxd/scriptlet/starlark: Updates StarlarkMarshal to handle more starlark.Value types
  • lxd/scriptlet/starlark: Update StarlarkMarshal to sort map keys
  • lxd/scriptlet/starlark: Update StarlarkMarshal to support anonymous non-struct fields
  • lxd/scriptlet/starlark/test: Adds tests for StarlarkMarshal function
  • doc/images: Clean up “About images”
  • lxd: hotplug mounts
  • Makefile: Immediately fail on lint errors
  • lxd: User friendly error for missing volume.
  • backend_lxd: Pass config when creating instance from backup
  • backend_lxd: Pass config when renaming instances and custom volumes
  • volume: Update IsBlockBacked
  • backend_lxd: Check vol.IsBlockBacked
  • lvm: Fix logging
  • ceph: Set mountFilesystemProbe to true in ListVolumes
  • lvm: Set mountFilesystemProbe to true in ListVolumes
  • storage: Drop call to Volume.SetMountFilesystemProbe
  • doc/images: Clean up “How to use remote images”
  • doc/images: Clean up “Remote image servers”
  • lxd/scriptlet/starlark: Updates StarlarkMarshal to handle multi-level embedded structs
  • client: Allow empty target if storage location is ‘none’.
  • lxc/storage_volume: Fixes --destination-server flag when copying between remotes.
  • i18n: Updates translations.
  • lxd/apparmor: fix AppArmor forkproxy profile
  • lxd/db: Remove needless semi-colons in queries
  • lxd/instance/drivers/driver/lxc: s/container/instance/
  • backend_lxd: Handle zvols in EnsureImage
  • zfs: Support block mode
  • test: Add zfs storage tests
  • lxc/migration: Update protobuf with zfs zvol support
  • lxd/migration: Add HeaderZvols feature
  • migration,storage: Check zvol support
  • lxd/scriptlet/starlark: Updates StarlarkMarshal to marshal structs to Starlark objects
  • lxd/scriptlet/starlark/test: Updates StarlarkMarshal tests to account for struct->object conversion
  • test: Updates test_clustering_instance_placement_scriptlet to account for struct->object conversion
  • doc: Updates instance placement scriptlet documentation to account for struct->object conversion
  • lxd/api/1.0: Improve errors
  • lxd/db/images: Removes unused GetExpiredImagesInProject function
  • lxd/images: Reworks pruneExpiredImages to not remove image files/volumes until image is expired in all projects
  • device: managed network support physical nic
  • doc: managed network support physical nic
  • test: managed network support physical nic
  • lxd/instance/drivers/driver/lxc: Remove erroneous empty log error
  • device/disk: Don’t use RBD for FS custom volumes
  • cgroup: properly set soft mem limit for cgroup-v2
  • test: Updates image expiry tests to check for proper project handling
  • instance: differentiate msg for creation/deletion of snapshots
  • lxd/storage/backend/lxd: Fixed incorrect logic in EnsureImage that was breaking BTRFS tests
  • lxd/storage/backend/lxd: Improve log message in EnsureImage
  • lxd/storage/backend/lxd: Don’t update image volume’s description to pool’s description in EnsureImage
  • btrfs: Restructure optimized refresh
  • btrfs: Set parent correctly for optimized refresh
  • lxd/db/db/cluster: Fixes incorrect UNIQUE index on name field of networks_zones_records
  • lxd/network/zones/records: Fix zone record management for non-default project zones
  • test: Updates dns zones tests to check for non-default project zone records
  • test: basic: skip seccomp tests if we are under seccomp filter
  • tests: syscall_interception: disable if seccomp filtering is active
  • doc/images: Clean up “Manage images”
  • doc/images: Clean up “Create images”
  • doc/images: Clean up “Copy and import images”
  • doc/images: Clean up “Associate profiles” section
  • doc/images: Clean up “Image format”
  • ceph: Check ceph config file for key
  • lxd/apparmor: Add microceph to ceph paths
  • lxd/db/network/forwards: Updates GetProjectNetworkForwardListenAddressesByUplink to handle forwards defined on uplink network itself
  • lxd/db/network/load/balancers: Updates GetProjectNetworkLoadBalancerListenAddressesByUplink to handle load balancers defined on the uplink network itself
  • lxd/network/driver/common: Adds shared getExternalSubnetInUse function
  • lxd/network/driver/bridge: Updates getExternalSubnetInUse to use common getExternalSubnetInUse function
  • lxd/network/driver/ovn: Updates getExternalSubnetInUse to use n.common.getExternalSubnetInUse
  • lxd/network/driver/ovn: Clarify address conflict errors
  • lxd/main_init_interactive: Remove trust passwords
  • gomod: Update dependencies
  • i18n: Update translations from weblate

Try it for yourself

This new LXD release is already available for you to try on our demo service.


The release tarballs can be found on our download page.

Binary builds are also available for:

  • Linux: snap install lxd
  • macOS: brew install lxc
  • Windows: choco install lxc

Why do you expose load averages here but nowhere else? Will they now be exposed as part of /1.0 or /1.0/resources? Is there going to be a nice way to get a summary like this for standalone hosts?

Load average and uptime could probably be added to the system part of the resources API. The rest of the data is already available and is just aggregated here as we needed it aggregated for the scriptlet feature.

Technically this is also available on standalone systems if you hit GET /1.0/cluster/members/none/state

And every web dashboard developer got mildly excited at the prospect of aggregated data.

If we can save 2+ API calls by hitting one API, it keeps dashboards competitive.

Do love a “technically” :smile:

LXD 5.11 is currently available to snap users in the latest/candidate channel and will be rolled out to stable early next week. Clients have been pushed to both Homebrew and Chocolatey and are currently going through their respective publishing processes there.

The usual release live stream will be at 2pm US Eastern time on Monday.

Regarding ZFS block mode, can it be enabled globally on an existing storage pool and used by default for the root disk of new containers?

I tried to set volume.zfs.block_mode on my ZFS storage pool, and when I then try to create a container, I get the following message:

lxc launch ubuntu/22.04 c1 -v
Creating c1
Error: Failed creating instance from image: Could not locate a zvol for zfsp1/containers/c1

The image used in the example is one of my custom images, already present on the pool. When I looked at the ZFS pool, it seems a zvol-based ext4 image is present (corresponding to my custom image hash):

zfs list
NAME                                                                                    USED  AVAIL     REFER  MOUNTPOINT
zfsp1                                                                                  1.08G   921G       96K  legacy
zfsp1/buckets                                                                            96K   921G       96K  legacy
zfsp1/containers                                                                        487M   921G       96K  legacy
zfsp1/containers/core_db                                                                263M  9.06G      571M  legacy
zfsp1/containers/core_ingress                                                           136M  9.18G      446M  legacy
zfsp1/containers/core_registry                                                         87.4M  9.23G      397M  legacy
zfsp1/custom                                                                             96K   921G       96K  legacy
zfsp1/deleted                                                                           328M   921G       96K  legacy
zfsp1/deleted/buckets                                                                    96K   921G       96K  legacy
zfsp1/deleted/containers                                                                 96K   921G       96K  legacy
zfsp1/deleted/custom                                                                     96K   921G       96K  legacy
zfsp1/deleted/images                                                                    328M   921G       96K  legacy
zfsp1/deleted/images/d153d54cc09a844c607b9f935365937b3c3fbb5188b301bb3d3524d5e8a42243   327M   921G      327M  legacy
zfsp1/deleted/virtual-machines                                                           96K   921G       96K  legacy
zfsp1/images                                                                            276M   921G       96K  legacy
zfsp1/images/d153d54cc09a844c607b9f935365937b3c3fbb5188b301bb3d3524d5e8a42243_ext4      276M   921G      276M  -
zfsp1/virtual-machines                                                                   96K   921G       96K  legacy

Did I miss something? Thanks :slight_smile:

1 Like

@monstermunchkin please can you advise on this one? Thanks

I recreated my storage pool from scratch (through LXD) and it seemed to work as expected at first:

root@c1:~# cat /proc/mounts
/dev/zvol/zfsp1/containers/c1 / ext4 rw,relatime,idmapped,discard,stripe=2 0 0

I tried enabling/disabling the option in the storage pool configuration and it worked as expected; I can have classic datasets and zvols for containers side-by-side.

So I dug a bit, and the error shows up only when I launch a container without specifying on the CLI the storage pool I want to use:

$ lxc launch ubuntu/22.04 c1 -s zfsp1
Creating c1
Starting c1

$ lxc shell c1
root@c1:~# cat /proc/mounts
/dev/zvol/zfsp1/containers/c1 / ext4 rw,relatime,idmapped,discard,stripe=8 0 0
root@c1:~# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/zvol/zfsp1/containers/c1  9.8G  533M  8.8G   6% /

$ lxc launch ubuntu/22.04 c2
Creating c2
Error: Failed creating instance from image: Could not locate a zvol for zfsp1/containers/c2

I usually rely on a profile with the following configuration for containers:

config:
  limits.kernel.nofile: "65535"
  limits.processes: "1024"
  security.devlxd: "false"
  security.idmap.isolated: "true"
  security.syscalls.intercept.sysinfo: "true"
description: Default profile for containers
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: zfsp1
    size: 10GB
    type: disk
name: default
used_by:
- /1.0/instances/c1

The culprit seems to be the size attribute of the root disk: once I delete it, I can launch the container without any problems. I played with it a bit, and once I tried to re-enable the size/quota I got this error:

Config parsing error: The following instances failed to update (profile change still saved):
 - Project: default, Instance: c2: Failed to update device "root": Failed to run: zfs set quota=none zfsp1/containers/c2: exit status 1 (cannot set property for 'zfsp1/containers/c2': 'quota' does not apply to datasets of this type)

I tried to change the quota via the CLI, same error:

lxc config device override c2 root size=20GB
Error: Failed to update device "root": Failed to run: zfs set quota=none zfsp1/containers/c2: exit status 1 (cannot set property for 'zfsp1/containers/c2': 'quota' does not apply to datasets of this type)

Something seems to be borked with quotas on zvols (I don’t see why, because it seems to work nicely with VMs).

I’m taking a look at this now and have confirmed the issues:

lxc launch images:ubuntu/jammy c2 -s zfs -d root,size=12GiB
Creating c2
Error: Failed instance creation: Failed creating instance from image: Could not locate a zvol for zfs/containers/c2
lxc launch images:ubuntu/jammy c2 -s zfs 
Creating c2
Starting c2
lxc launch images:ubuntu/jammy c3 -s zfs -d root,size=12GiB
Creating c3
Error: Failed instance creation: Failed creating instance from image: Could not locate a zvol for zfs/containers/c3

lxc launch images:ubuntu/jammy c3 -s zfs -d root,size=10GiB
lxc config device set c3 root size=12GiB
Error: Failed to update device "root": Failed to run: zfs set quota=none zfs/containers/c3: exit status 1 (cannot set property for 'zfs/containers/c3': 'quota' does not apply to datasets of this type)

So it looks like there is a general problem with sizing/resizing those types of volumes.

1 Like

Thanks @tomp! Should I open an issue on GitHub about this?

Yes please, over at https://github.com/lxc/lxd/issues. Thanks!

This should fix it

1 Like

Done: https://github.com/lxc/lxd/issues/11396

I can confirm this works better now with LXD 5.12 (CT creation with a defined size, and also resizing afterwards)! Thanks for your work :slightly_smiling_face:

1 Like


I think documentation on how to use zvols is necessary.
If I create a new storage pool with the block_mode parameter, I can launch a CT on a zvol using this pool.

But if I create a volume with these parameters in an existing storage pool, how can I launch a CT on this volume?


You can’t.

Currently the two options are:

  • Set the config key on the pool, then create instances and custom volumes with it enabled
  • Create custom volumes only with it set

Custom volumes cannot be used as instance storage, they’re just additional shared storage you attach to instances. And there’s no way to tell LXD to enable ZFS block mode just for the one instance you’re creating right now.
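
In concrete terms, with the pool name used earlier in this thread (myvol is a made-up volume name):

# Option 1: enable block mode pool-wide for all newly created volumes.
lxc storage set zfsp1 volume.zfs.block_mode=true

# Option 2: enable it on an individual custom volume only.
lxc storage volume create zfsp1 myvol zfs.block_mode=true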

We have a plan to add config options on the disk device down the line to allow that, but it’s not there yet.

I understand.

But if I change a current pool to use zvols, how does that affect the existing dataset-based containers in this pool? And will new containers be exclusively zvol-based?


An update for the documentation: I created an additional storage pool on top of an existing ZFS pool/dataset and set the zvol config on it:

lxc storage create zvol zfs source=rpool/zvol volume.zfs.block_mode=yes size="20GB" volume.block.filesystem="xfs" volume.block.mount_options="usrquota"

lxc launch images:debian/11/amd64 debian11zvol -s zvol

Best Regards

This is the only place I found this information. I think it should be somewhere in the documentation. I planned to use a zvol for a single instance, but now I’m not even sure it’s possible. Does it mean we can set this config key, create an instance on a zvol and then unset the key, to end up with a single zvol instance? Or do we have to commit to having all future instances zvol-based? How does it affect existing instances? There are so many questions about this feature that are answered nowhere else. Would be great if someone could answer them. :slight_smile: