Weekly status #305

Weekly status for the week of 26th June to 2nd July.


This past week has primarily been focused on storage-related improvements and fixes. We also landed one new feature from our roadmap.

Future weekly status posts concerning the LXD project will be made on the Ubuntu Discourse.

Job openings

Canonical Ltd. is strengthening its investment in LXD and is looking to build multiple squads under the technical leadership of @stgraber.

As such, we are looking for first-line managers (highly technical) and individual contributors to grow the team and pursue our efforts around scalability and clustering.

All positions are 100% remote with some travel for internal events and conferences.

For more information, please see LXD related openings at Canonical Ltd (2022-2023)


New features:


  • The VM vsock ID is now randomly picked using a stable random generator seeded from the instance’s UUID. This allows LXD to run inside multiple containers on the same host and launch VMs inside each of them without their vsock IDs conflicting, and it lets LXD coexist better with other users of vsock on the same server. It also fixes an issue in LXD 5.15 where, if a VM had been created and the system subsequently lost /dev/kvm support, LXD would crash when that VM was started. A rough sketch of the seeding approach is shown below.
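
As a rough illustration of the idea (a hypothetical sketch, not LXD’s actual implementation), the Go snippet below derives a stable vsock context ID by hashing an instance UUID into a seed for a deterministic random generator. The stableVsockID helper and its details are assumptions for illustration only.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"math/rand"
)

// stableVsockID derives a deterministic vsock context ID from an instance
// UUID: the same UUID always yields the same ID, while different UUIDs are
// spread pseudo-randomly across the valid range. Hypothetical helper, not
// LXD's actual code.
func stableVsockID(instanceUUID string) uint32 {
	// Hash the UUID down to a 64-bit seed.
	h := fnv.New64a()
	h.Write([]byte(instanceUUID))

	// Seed a dedicated generator so the result is reproducible.
	r := rand.New(rand.NewSource(int64(h.Sum64())))

	// Context IDs 0-2 are reserved (hypervisor, local, host), so skip them.
	return uint32(r.Int63n(1<<32-3) + 3)
}

func main() {
	uuid := "6bc81a4c-2b5c-4f0f-9e2a-1f1d1d2f3a4b"
	// Two LXD daemons in different containers seed from different instance
	// UUIDs, so the VMs they launch are unlikely to pick conflicting IDs.
	fmt.Println(stableVsockID(uuid))
	fmt.Println(stableVsockID(uuid)) // same value: the ID is stable
}
```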

Bug fixes:

  • Fixed an issue with ZFS image volumes incorrectly having a filesystem suffix in their name when using zfs.block_mode.
  • Fixed an issue with bucket storage volumes not having the project name in their name.
  • Switched to using power-of-2 units for memory and storage references/defaults (see the short example after this list).
  • Fixed issues with the instance rebuild feature so that running instances are restarted after being rebuilt and rebuilds no longer fail when the specified image hasn’t been downloaded yet.
  • Fixed an issue that prevented deleting multiple instance snapshots with lxc delete.
  • Added missing --target flag to lxc storage bucket key list command.
  • Fixed BTRFS snapshot race conditions on busy containers.
  • Fixed an issue where the container scheduler was not applying CPU pinning after a server joined a cluster until the next time LXD was restarted. This was caused by the scheduler using a long-lived state record that did not reflect the server’s new name in the cluster, so it did not find any instances on that member until it used the correct name.
  • Fixed masking units created by the generator.
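
To make the unit change concrete, here is a small illustrative calculation (plain Go, not LXD code) showing how far apart the power-of-10 and power-of-2 readings of one "gigabyte" are:

```go
package main

import "fmt"

func main() {
	const GB = 1000 * 1000 * 1000  // SI gigabyte (power of 10)
	const GiB = 1024 * 1024 * 1024 // IEC gibibyte (power of 2)

	// A 1GiB default is roughly 7.4% larger than a 1GB one.
	fmt.Println(GiB-GB, "bytes difference")
	fmt.Printf("%.1f%% larger\n", 100*float64(GiB-GB)/float64(GB))
}
```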

YouTube videos

The LXD team is running a YouTube channel with live streams covering LXD releases and weekly videos on different aspects of LXD. You may want to give it a watch and/or subscribe for more content in the coming weeks.


Contribute to LXD

Ever wanted to contribute to LXD but not sure where to start?
We’ve recently gone through some effort to properly tag issues suitable for new contributors on GitHub: Easy issues for new contributors

Upcoming events

  • Nothing to report this week

Ongoing projects

The list below covers feature or refactoring work which will span several weeks/months and can’t be tied directly to a single GitHub issue or pull request.

  • Stable release work for LXC, LXCFS and LXD

Upstream changes

The items listed below are highlights of the work which happened upstream over the past week and which will be included in the next release.

  • Nothing to report this week


LXD Charm

  • Nothing to report this week

Distribution work

This section is used to track the work done in downstream Linux distributions to ship the latest LXC, LXD and LXCFS as well as work to get various software to work properly inside containers.


  • Nothing to report this week