Weekly status #128

Weekly status for the week of the 9th of December to the 15th of December.


This past week’s LXD focus was still on the initial VM feature and its associated storage layer re-work.

It was also a deadline for a number of contributions coming from students at the University of Texas at Austin. Many of those contributions are still in progress or under review, but some got merged this week. This includes laying the groundwork required for LXD to use the nftables firewall, support for abstracting cgroup v1/v2, and a new way to track external processes (such as dnsmasq and forkproxy).

On the VM front, two security improvements have been added: first, the VM processes on the host are now chrooted to their own directory, and second, they are run as a non-root user. We have also ensured that the 9p share inside the VM used by the lxd-agent is mounted in such a way that non-root users in the VM cannot access its files (it contains a private TLS key identifying the VM, used when communicating with the host).
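For illustration, restricting a 9p share to root inside the guest can be done with the standard v9fs `access=<uid>` mount option. This is a minimal sketch, not the exact commands LXD runs; the share tag `config` and the mount point `/run/lxd_agent` are assumed names for this example:

```shell
# Mount a 9p share so that only uid 0 (root) inside the VM can
# access the files; "config" and /run/lxd_agent are illustrative.
mkdir -p /run/lxd_agent
mount -t 9p config /run/lxd_agent \
    -o trans=virtio,access=0,ro
```

With `access=0`, the v9fs client refuses access from any uid other than 0, which is what keeps the agent's TLS key away from unprivileged users in the guest.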

On the storage front, the directory and cephfs drivers are now finished, the btrfs driver is undergoing review and we are continuing to work on porting the other storage drivers to the new framework.

On the LXC front, work has continued on refactoring the cgroup management functionality as we work towards cgroup2 support.

Contribute to LXD

Ever wanted to contribute to LXD but not sure where to start?
We’ve recently gone through some effort to properly tag issues suitable for new contributors on Github: https://github.com/lxc/lxd/labels/Easy

You can also find a slightly longer, more detailed list here: Contributing to LXD

FOSDEM 2020 - containers devroom

We will once again be running the containers devroom at the upcoming FOSDEM conference in Brussels, Belgium. This year it’s going to be over the weekend of the 1st and 2nd of February.

The detailed call for papers can be found here: FOSDEM 2020 containers devroom: Call for papers

Upcoming events

Ongoing projects

The list below is feature or refactoring work which will span several weeks/months and can’t be tied directly to a single Github issue or pull request.

  • Virtual machine support
  • Rework of internal LXD storage handling
  • Distrobuilder virtual machine support
  • Various kernel work
  • Stable release work for LXC, LXCFS and LXD

Upstream changes

The items listed below are highlights of the work which happened upstream over the past week and which will be included in the next release.

  • Nothing to report this week

Distribution work

This section is used to track the work done in downstream Linux distributions to ship the latest LXC, LXD and LXCFS as well as work to get various software to work properly inside containers.

  • Nothing to report this week

When will the next version of LXD with nictype=routed be released?

Hi, this will be in LXD 3.19; it's already merged and in the edge snap if you want to try it early (although edge has schema changes, so don't roll it out to a production system yet).

We were initially hoping to release LXD 3.19 by the end of the year, but given most of the LXD team will be off for the next two weeks and 3.19 has a LOT of changes, we figured waiting until early next year when we’re all around to handle any upgrade issues is probably a better bet.

If you have a test system you want to try things on, the edge channel is a good way to see what’s coming. Do note however that given we have done a number of database changes, you will not be able to move from edge back to stable until the day we release 3.19, so using a separate system is recommended.
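On a disposable test system, trying this out would look roughly like the sketch below. The container name, parent interface and address are illustrative examples, not values from this post:

```shell
# Switch a *test* system's LXD snap to the edge channel.
# Remember: the edge database schema changes mean you cannot
# move back to stable until 3.19 is released.
snap refresh lxd --channel=edge

# Attach a routed NIC to an existing container
# (container name "c1", parent "eth0" and the address are examples).
lxc config device add c1 eth0 nic \
    nictype=routed \
    parent=eth0 \
    ipv4.address=192.0.2.10
```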

This is actually what I was talking about when requesting 1.18.1.
A newer release has lots of changes, and hopefully it will not break older functionality.

I would like to suggest that LizardFS would make a useful shared storage driver. It's like Ceph but much easier to set up for the layman (like me!). You can literally run LizardFS on anything: in containers (LXD), on a Raspberry Pi, …

I'm thinking it could be a good choice for small-scale live migration situations.

If I had the coding ability I would look into this myself, if it were feasible, but I'm a bit of a newbie; I've only cobbled together bits of Python before and haven't delved into Go. My day job is networking and infra, so I'm not a natural coder!

For the record, OpenNebula has the ability to use LizardFS as a datastore backend for VMs, so I think it should be possible.