Weekly status #172

Weekly status for the week of the 26th of October to the 1st of November.


This past week has mostly been about improving the resilience of existing functionality in both LXD and LXC. However, LXD has also seen two new features, described below.


The main feature added to LXD in the past week was support for virtiofs for virtual machines. This provides functionality similar to the existing 9p support (sharing a directory from the host into the VM guest), but because virtiofs is not a network filesystem it offers better performance and local filesystem semantics. It is used automatically when the virtiofsd tool is available on the host (or inside the snap package), and disk device shares of directories are now exposed to the guest over both 9p and virtiofs.
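As a sketch of how such a directory share is set up from the host side (the instance name and paths below are placeholders for illustration):

```shell
# Share a host directory into a VM guest as a disk device.
# "v1", the source and the path are illustrative values.
lxc config device add v1 projects disk source=/home/user/projects path=/mnt/projects
```

With virtiofsd available, LXD will expose this share to the guest over virtiofs as well as 9p, with no change needed to the device configuration.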

ZFS storage pools also gained a new feature that allows instance copies to be “rebased” onto their associated source image volume rather than being cloned from the source instance. This is achieved by setting the storage pool’s zfs.clone_copy setting to rebase (rather than the default value of true, which causes instance copies to be cloned from the source instance’s volume). This simplifies the relationship between instance volumes and their source volumes by always cloning instances directly from the source image volume. However, if the instance being copied has diverged significantly from the original source image, the newly created instance will take up more space than if it had been cloned from the source instance itself.
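The new behaviour is enabled per storage pool; a minimal sketch, assuming a ZFS pool named "default":

```shell
# Make instance copies clone from the source image volume ("rebase")
# rather than from the source instance's volume.
lxc storage set default zfs.clone_copy rebase

# Revert to the default behaviour (clone from the source instance):
lxc storage set default zfs.clone_copy true
```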

There has also been an important change when copying running instances backed by directory or ceph storage pools. Previously, copies were made without interfering with the running instance, but this resulted in inconsistent copies (because running processes may modify the volume while it is being copied). Other storage drivers support lightweight snapshots that let us create a temporary consistent snapshot to act as the source of the copy, but because dir and ceph storage pools don’t support this, we now ensure that instance copies are consistent by freezing the instance for the duration of the copy. Processes inside the instance will appear to ‘hang’ while the copy is taking place and resume once it is complete.

Also related to ceph storage pools, an issue was reported where a ceph volume was attached to an instance that was part of a cluster and the volume’s properties were then modified from a different LXD cluster member than the one hosting the instance. In certain circumstances (such as resizing the volume) this would cause the volume to be mounted temporarily on the local member. In scenarios where the ceph volume was already mounted on a different member and attached to a running instance, this caused mount conflicts and hung LXD operations, because ceph volumes may not be mounted on two members concurrently. This has now been fixed by detecting the scenario and redirecting the request to the cluster member hosting the instance that uses the volume.

When accessing the LXD API via the local unix socket, we now log the username of the process using the socket.

An issue that was causing the order of image lists to be partially randomised has now been fixed. It manifested as LXD repeatedly downloading the same images on startup.

Also related to images, when several instances requiring the same image were created concurrently, this could cause races where concurrent downloads of the same image would interfere with each other. This has now been fixed by adding a lock.

An issue preventing the recovery of an instance that was created and renamed, but not subsequently started, before the LXD database was removed has now been fixed by ensuring that the instance’s backup.yaml is written out to disk after a rename operation.


There have been two new features added to LXC in the past week.

Firstly, the lxc-attach command has gained a new flag --context which allows switching into an SELinux context before attaching to the container.
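A minimal sketch of using the new flag (the container name and SELinux context below are placeholders for illustration):

```shell
# Attach a shell to a container, switching into the given
# SELinux context before entering it.
lxc-attach -n mycontainer --context unconfined_u:unconfined_r:unconfined_t:s0 -- /bin/sh
```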

Secondly, containers now have a new config option lxc.cgroup.dir.monitor.pivot which causes the PID of the monitor process to be attached to this cgroup on container termination.
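In a container configuration file this might look as follows (the cgroup directory names here are illustrative, not defaults):

```
# Cgroup used by the monitor process while the container runs
# (existing option, shown for context):
lxc.cgroup.dir.monitor = lxc.monitor/mycontainer

# Cgroup the monitor's PID is moved into on container termination:
lxc.cgroup.dir.monitor.pivot = lxc.pivot
```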

YouTube channel

We’ve started a YouTube channel with a couple of live streams so far.
You may want to give it a watch and/or subscribe for more content in the coming weeks.

Contribute to LXD

Ever wanted to contribute to LXD but not sure where to start?
We’ve recently gone through some effort to properly tag issues suitable for new contributors on GitHub: Easy issues for new contributors

Upcoming events

Ongoing projects

The list below is feature or refactoring work which will span several weeks/months and can’t be tied directly to a single GitHub issue or pull request.

  • Distrobuilder Windows support
  • Virtual networks in LXD
  • Various kernel work
  • Stable release work for LXC, LXCFS and LXD

Upstream changes

The items listed below are highlights of the work which happened upstream over the past week and which will be included in the next release.

  • Nothing to report this week


Distribution work

This section is used to track the work done in downstream Linux distributions to ship the latest LXC, LXD and LXCFS as well as work to get various software to work properly inside containers.


  • Nothing to report this week


  • lxd: Cherry-picked upstream bugfixes