Weekly status #192

Weekly status for the week of the 29th of March to the 4th of April.


This past week saw several new features added to the lxc client tool for LXD, improved support for GPU pass-through into containers, and a continued focus on hardening liblxc through fuzz testing.

The LXD team is hiring

Canonical Ltd. is expanding its investment in LXD with a total of 5 additional roles.
The primary focus of this effort is scalability and clustering, as well as developing compelling LXD-based solutions for our customers.

All LXD positions are 100% remote with some travel for internal events and conferences.


LXD has gained improved filtering when listing instances with the lxc ls command.

It now supports the following new filtering arguments:

  • type={instance type}
  • status={instance current lifecycle status}
  • architecture={instance architecture}
  • location={location name}
  • ipv4={ip or CIDR}
  • ipv6={ip or CIDR}

Multiple values for the same field (combined with a logical OR) must be delimited by ‘,’.
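As a quick sketch of how these arguments combine (the cluster member name and network below are made up, and output will depend on your own LXD setup):

```shell
# List only running containers:
lxc ls type=container status=running

# List containers that are either running or frozen
# (comma-delimited values on one field are OR'd together):
lxc ls status=running,frozen

# List instances on cluster member "node1" with an address in 10.0.0.0/24:
lxc ls location=node1 ipv4=10.0.0.0/24
```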

LXD’s lxc client tool has also gained support for a concept called “system wide remotes”, where remote LXD nodes can be defined globally on a system in /etc/lxc/config.yml, rather than being stored in a per-user config file. For more information, see https://linuxcontainers.org/lxd/docs/master/remotes.
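For illustration, a system-wide /etc/lxc/config.yml might look like the following (the remote name and address here are invented; check the linked documentation for the full set of supported keys):

```yaml
# Remotes defined in this file are visible to every user on the system.
remotes:
  shared-cluster:              # hypothetical remote name
    addr: https://10.20.0.5:8443
    protocol: lxd              # "simplestreams" is used for image servers
    public: false
```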

We have also improved our support for passing different types of GPUs into containers. LXD now supports passing in an NVIDIA MIG device, as well as an mdev GPU that exists on top of an SR-IOV device.
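To sketch what attaching these GPU types looks like, the commands below add gpu devices to a hypothetical container “c1” (the PCI addresses, MIG identifiers and mdev profile name are placeholders; verify the exact option names against the LXD gpu device documentation):

```shell
# Pass an NVIDIA MIG slice into container "c1" (identifiers are made up):
lxc config device add c1 gpu0 gpu gputype=mig pci=0000:41:00.0 mig.gi=1 mig.ci=0

# Pass an mdev virtual GPU sitting on top of an SR-IOV device:
lxc config device add c1 gpu1 gpu gputype=mdev pci=0000:42:00.0 mdev=nvidia-108
```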

There were also the following bug fixes:

  • Improved LXD’s ability to maintain the original instance’s root disk config when importing or recovering an instance. This is achieved by comparing the new profile’s config against the expanded profile config from the backup. If they are the same (excluding the storage pool), no default root disk device is added to the instance. This also means that, where possible, additional config from the profiles (such as the size property) is now maintained where it wasn’t before.
  • Some storage pools (such as cephfs) can currently only be used for custom volumes and do not support instances. However, it was previously possible to try to create an instance on such a storage pool; this would fail and leave orphaned database records behind. This has now been fixed and the error message clarified.
  • The output of lxc network acl show can now be used as the piped input to lxc network acl edit, making it easier to copy the config from one ACL to another.
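The new show/edit round-trip can be sketched as follows (the ACL names are hypothetical):

```shell
# Copy the configuration of ACL "web" onto an existing ACL "web-staging":
lxc network acl show web | lxc network acl edit web-staging
```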


There has been a continued focus on hardening the code to fix issues revealed by fuzz testing.

YouTube channel

We’ve started a YouTube channel with live streams covering LXD releases and its use in the wider ecosystem.

You may want to give it a watch and/or subscribe for more content in the coming weeks.

Contribute to LXD

Ever wanted to contribute to LXD but not sure where to start?
We’ve recently gone through some effort to properly tag issues suitable for new contributors on GitHub: Easy issues for new contributors

Upcoming events

  • Nothing to report this week

Ongoing projects

The list below is feature or refactoring work which will span several weeks/months and can’t be tied directly to a single GitHub issue or pull request.

  • Distrobuilder Windows support
  • Virtual networks in LXD
  • Various kernel work
  • Stable release work for LXC, LXCFS and LXD

Upstream changes

The items listed below are highlights of the work which happened upstream over the past week and which will be included in the next release.

  • Nothing to report this week

  • Nothing to report this week
Dqlite (RAFT library)

  • Nothing to report this week

Dqlite (database)

  • Nothing to report this week

Dqlite (Go bindings)

  • Nothing to report this week

Distribution work

This section is used to track the work done in downstream Linux distributions to ship the latest LXC, LXD and LXCFS as well as work to get various software to work properly inside containers.

  • Nothing to report this week

  • Nothing to report this week