Weekly status #189

Weekly status for the week of the 8th of March to the 14th of March.


Currently, in a mixed-architecture cluster, launching an instance from an image such as ubuntu/20.04 will place it on the least busy node, regardless of that node’s architecture. This means that you may do one lxc launch and get x86_64, then do it again and get aarch64. While technically fine, since you didn’t tell LXD what you wanted, it can still be a bit surprising. To address this, a new global and per-project configuration key has been added: images.default_architecture. It specifies the default architecture to use when creating an instance without requesting a specific architecture image.
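As a sketch of how the new key can be used (the project name below is hypothetical):

```shell
# Set the default image architecture for the whole cluster
lxc config set images.default_architecture x86_64

# Or override it for a single project (project name is hypothetical)
lxc project set my-project images.default_architecture aarch64

# Launches that don’t request a specific architecture now resolve to it
lxc launch ubuntu/20.04 c1
```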

Also related to cluster images, when using a storage.images_volume backed by a remote storage pool, LXD will now download the image only once, rather than once for each member of the cluster.

In addition, when the storage.images_volume or storage.backups_volume settings are in use, LXD will now unmount those volumes when it shuts down, rather than leaving behind the mounts it created at startup.
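For reference, those settings point LXD at custom storage volumes along these lines (the pool and volume names are hypothetical):

```shell
# Create custom volumes on an existing pool named "remote"
lxc storage volume create remote images-vol
lxc storage volume create remote backups-vol

# Tell LXD to keep its image and backup data on them
lxc config set storage.images_volume remote/images-vol
lxc config set storage.backups_volume remote/backups-vol
```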

On the CLI front, the Description field is now displayed as a column in the list command output for projects and storage pools.

OVN networking has also seen some improvements and bug fixes. Firstly, the #internal and #external subject selectors for ACL rules have been deprecated in favour of @internal and @external. The reason for this is a better user experience: when editing ACL rules in the YAML editor, the # character would be treated as a comment unless quoted, which could cause these subjects to be accidentally removed from the rule set. We now also create the per-network OVN port group at LXD startup (if needed), rather than only at network creation time, so that existing LXD installs that already have OVN networks get the new port group, avoiding instance start-up failures.
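As an illustration of the new selectors, a rule can reference them directly on the CLI, avoiding the YAML quoting pitfall entirely (the ACL name is hypothetical):

```shell
# Create an ACL and allow ingress traffic from instances on the same network
lxc network acl create my-acl
lxc network acl rule add my-acl ingress source=@internal action=allow
```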

Also related to preventing instance start-up failures with OVN, we have worked around an issue in some recent versions of the ovn-nbctl command that prevented adding DNAT rules when they already existed (formerly the command ignored this if the --may-exist flag was provided). Instead, we now delete any existing DNAT entry for an instance and recreate it.
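In ovn-nbctl terms, the workaround roughly amounts to the following (the router name and addresses are hypothetical):

```shell
# Remove any existing dnat_and_snat entry for the instance’s external IP...
ovn-nbctl lr-nat-del lxd-net1-lr dnat_and_snat 198.51.100.10

# ...then recreate it, rather than relying on --may-exist
ovn-nbctl lr-nat-add lxd-net1-lr dnat_and_snat 198.51.100.10 10.0.0.10
```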

An issue with the OVN network baseline rules was fixed to ensure that the arguments in the match expression are always parenthesised, so that if OVN appends additional criteria to the match rule, the rule’s logic is not broken.

On the storage front, an issue with the way instance backups export snapshots has been fixed. Formerly, snapshots were exported in alphabetical order. However, when using an optimized export format, it is important to export them in age order (oldest first) to allow any storage-driver-specific differential logic to efficiently export only the differences between consecutive snapshots. The old behaviour caused inefficient exports whenever the snapshot naming scheme meant that newer snapshots sorted first alphabetically.

Also storage related, a bug in the BTRFS driver has been fixed that meant subvolumes in a BTRFS-backed instance were not copied when performing a same-pool instance copy.

On the clustering front, an issue has been fixed that caused erroneous address configuration in the raft entries when turning a non-clustered node that already contained raft log entries into the first (leader) node of a cluster through the bootstrap procedure. The erroneous configuration was eventually replicated to the other nodes and could leave the bootstrapping node unreachable from them.

Finally, an issue that was preventing LXD snap updates due to a hanging QEMU QMP connection to a VM has been worked around by adding a 5s timeout to our QMP connections. This way, if the QEMU QMP socket isn’t responding for some reason, we don’t hang LXD commands.


This past week there have been some improvements and fixes for LXC’s automount logic.


A fix for the Plamo image to avoid overriding the PATH in installation scripts has been added.
An optimisation was added for Windows images to reduce the number of calls to hivexregedit which speeds up the build slightly.

YouTube channel

We’ve started a YouTube channel with live streams covering LXD releases and its use in the wider ecosystem.

You may want to give it a watch and/or subscribe for more content in the coming weeks.

Contribute to LXD

Ever wanted to contribute to LXD but not sure where to start?
We’ve recently gone through some effort to properly tag issues suitable for new contributors on GitHub: Easy issues for new contributors

Upcoming events

  • Nothing to report this week

Ongoing projects

The list below is feature or refactoring work which will span several weeks/months and can’t be tied directly to a single GitHub issue or pull request.

  • Distrobuilder Windows support
  • Virtual networks in LXD
  • Various kernel work
  • Stable release work for LXC, LXCFS and LXD

Upstream changes

The items listed below are highlights of the work which happened upstream over the past week and which will be included in the next release.




  • Nothing to report this week


Dqlite (RAFT library)

  • Nothing to report this week

Dqlite (database)

  • Nothing to report this week

Dqlite (Go bindings)

  • Nothing to report this week

Distribution work

This section is used to track the work done in downstream Linux distributions to ship the latest LXC, LXD and LXCFS as well as work to get various software to work properly inside containers.


  • Nothing to report this week


  • lxd: Cherry-picked upstream bugfixes
  • lxd-migrate: Create /var/log/lxd if missing on deb systems