Weekly status #136


Weekly status for the week of the 17th of February to the 23rd of February.

Introduction

The past week has seen improvements in LXD across several areas:

Firewall
Support for using the nftables firewall was added. LXD uses the host firewall when configuring bridged network devices in order to perform NAT, allow access to DHCP/DNS and apply IP filtering rules. Previously LXD only supported the xtables firewall (comprising the iptables, ip6tables and ebtables tools), which caused issues when running LXD on hosts that only had nftables available (although nftables provides equivalent commands for the xtables tools, their behaviour does not exactly match the original tools in all cases).

Now, when LXD starts, it tries to determine which firewall driver to use based on how the host system is currently configured. It uses the following logic to pick a driver:

  1. If nftables is available and has a non-empty ruleset, use nftables.
  2. If nftables isn't available, use xtables.
  3. If both nftables and xtables are available, but xtables has a non-empty ruleset, use xtables.
  4. If both are available but neither is in use, use nftables.
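The selection logic above can be sketched as a small function. This is only an illustration of the four rules (the function name and boolean inputs are hypothetical; LXD's actual implementation lives in its Go firewall package):

```python
def select_firewall_driver(nft_available: bool,
                           nft_ruleset_empty: bool,
                           xt_available: bool,
                           xt_ruleset_empty: bool) -> str:
    """Pick "nftables" or "xtables" following the four rules above."""
    if nft_available and not nft_ruleset_empty:
        return "nftables"   # rule 1: nftables is already in use
    if not nft_available:
        return "xtables"    # rule 2: no nftables on this host
    if xt_available and not xt_ruleset_empty:
        return "xtables"    # rule 3: xtables is already in use
    return "nftables"       # rule 4: neither in use, prefer nftables
```

Note that an in-use xtables ruleset only wins when nftables itself has no rules loaded, so hosts already running nftables keep using it.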

Storage
Before the new storage framework was added, the old LVM driver had a bug that allowed an existing non-empty volume group to back a LXD storage pool when using LVM thinpools. This bug was fixed when the new storage framework was added, but it became apparent that some users were relying on the old behaviour. We have therefore added a new storage pool config option (lvm.vg.force_reuse) that can be set at storage pool creation time to disable the volume-group in-use checks:

lxc storage create lvmpool lvm source=existing_vg lvm.vg.force_reuse=true

As well as allowing an existing non-empty volume group to be used, the option also acts as a marker for support purposes, indicating that this approach is in use. Where possible we discourage using non-empty volume groups for storage pools, to avoid unexpected issues arising from volume-name collisions between LXD and any other users of the volume group.
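The check that the option disables behaves roughly as follows. This is a hypothetical sketch, not LXD's actual code (`check_vg_usable` and its inputs are illustrative names only):

```python
def check_vg_usable(existing_lvs: list, force_reuse: bool) -> None:
    """Reject a non-empty volume group unless reuse is explicitly forced,
    mirroring the in-use check that lvm.vg.force_reuse=true disables."""
    if existing_lvs and not force_reuse:
        raise RuntimeError(
            "Volume group is not empty; set lvm.vg.force_reuse=true "
            "to use it anyway (at the risk of volume-name collisions)")
```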

Last week LXD 3.21 was released with the new ceph storage driver. This past week has seen several bug fixes for that new storage driver, including setting a default ceph username when one is missing.

A bug was also fixed to correctly handle relative symlinks when LXD is running inside the snap package. This allows pushing a file that is a relative symlink to another file into an instance.
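The subtlety here is that a relative symlink target resolves against the directory containing the link, not the current working directory of the process reading it. A small standalone illustration (file names are examples only):

```python
import os
import tempfile

# A relative symlink target is interpreted relative to the directory
# that contains the link, which is what code handling such links
# has to honour regardless of its own working directory.
with tempfile.TemporaryDirectory() as root:
    target = os.path.join(root, "config.txt")
    with open(target, "w") as f:
        f.write("hello")

    link = os.path.join(root, "link.txt")
    os.symlink("config.txt", link)  # relative target

    # Resolve the link the same way the kernel does:
    resolved = os.path.join(os.path.dirname(link), os.readlink(link))
    assert os.path.realpath(resolved) == os.path.realpath(target)
```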

Virtual Machines
On the virtual machine front, we are now using the -sandbox option provided by QEMU to further restrict which syscalls the VM processes can make on the host.

There has also been a fix to allow external disk images to be added to VMs when using the snap package.

Projects
There have been a raft of fixes relating to project support in the past week:

  • When using lxd import, one can now use the --project flag to specify which project the instance being imported belonged to. Previously an instance in a project could not be imported using lxd import.
  • Several fixes improve the safety of lxd import: if it fails for some reason, the storage layer now detects that an import is taking place and the instance on disk will not be removed during the revert steps.
  • Support for auto-updating images that were created in a project has been added.
  • Log file rotation now works for instances in projects.

Database
There has been some internal restructuring of the database tables, with the creation of a new storage_volumes_snapshots table that uses foreign keys to ensure that every snapshot has an existing parent volume and cannot become orphaned.
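The guarantee that foreign keys provide here can be demonstrated with a small self-contained SQLite example (the table and column names below are simplified illustrations, not LXD's actual schema):

```python
import sqlite3

# Simplified sketch of a parent/child relationship like
# storage_volumes -> storage_volumes_snapshots: the foreign key
# rejects snapshots whose parent volume does not exist.
db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")  # must be enabled per connection
db.execute("CREATE TABLE storage_volumes (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("""
    CREATE TABLE storage_volumes_snapshots (
        id INTEGER PRIMARY KEY,
        storage_volume_id INTEGER NOT NULL,
        name TEXT,
        FOREIGN KEY (storage_volume_id)
            REFERENCES storage_volumes (id) ON DELETE CASCADE)""")

db.execute("INSERT INTO storage_volumes (id, name) VALUES (1, 'vol1')")
db.execute("INSERT INTO storage_volumes_snapshots"
           " (storage_volume_id, name) VALUES (1, 'snap0')")

rejected = False
try:
    # No volume with id 99 exists, so this snapshot would be orphaned.
    db.execute("INSERT INTO storage_volumes_snapshots"
               " (storage_volume_id, name) VALUES (99, 'orphan')")
except sqlite3.IntegrityError:
    rejected = True  # the foreign key prevented the orphan row

# Deleting the parent volume cascades to its snapshots.
db.execute("DELETE FROM storage_volumes WHERE id = 1")
count = db.execute(
    "SELECT COUNT(*) FROM storage_volumes_snapshots").fetchone()[0]
```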

The database query timeout has been increased from 5s to 30s to help queries complete on slow or heavily loaded systems.

On the LXCFS front, work has continued on adding cgroupv2 support.

Distrobuilder has seen a lot of activity in the past week with initial VM image building support added. See @stgraber’s post on that for more details.

Contribute to LXD

Ever wanted to contribute to LXD but not sure where to start?
We’ve recently gone through some effort to properly tag issues suitable for new contributors on Github: https://github.com/lxc/lxd/labels/Easy

You can also find a slightly longer, more detailed list here: Contributing to LXD

Ongoing projects

The list below is feature or refactoring work which will span several weeks/months and can’t be tied directly to a single Github issue or pull request.

  • Virtual machine support
  • Distrobuilder virtual machine support
  • Storage database cleanup/rework
  • Various kernel work
  • Stable release work for LXC, LXCFS and LXD

Upstream changes

The items listed below are highlights of the work which happened upstream over the past week and which will be included in the next release.

LXD

LXC

  • Nothing to report this week

LXCFS

Distrobuilder

Distribution work

This section is used to track the work done in downstream Linux distributions to ship the latest LXC, LXD and LXCFS as well as work to get various software to work properly inside containers.

Ubuntu

  • Nothing to report this week

Snap

  • Nothing to report this week