Weekly status #283

Weekly status for the week of 23rd January to 29th January.


This past week saw the addition of one of our LXD roadmap items, the Instance placement scriptlet feature, as well as the usual collection of improvements and bug fixes. See below for more information.

@stgraber has also published a video covering the LXD LTS release process, as well as the items in our latest LXD 5.0.2 release:

Job openings

Canonical Ltd. is strengthening its investment in LXD and is looking to build multiple squads under the technical leadership of @stgraber.

As such, we are looking for first-line managers (highly technical) and individual contributors to grow the team and pursue our efforts around scalability and clustering.

All positions are 100% remote with some travel for internal events and conferences.

For more information, please see LXD related openings at Canonical Ltd (2022-2023)


New features:

  • Added instance placement scriptlet support. This new feature allows a custom script to control which cluster member is used to host a particular instance. The script is executed whenever LXD needs to decide where to place an instance; it is provided with information about the cluster members, which can then be used to implement custom logic controlling how LXD places instances. See the Documentation and the Specification for more information.


  • Added a location field to the InstanceCreated and StorageVolumeCreated lifecycle events, indicating which cluster member the new entity will exist on. For remote storage volumes (such as Ceph), the location field is not populated.
  • Documented the lxc cluster info command.
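To illustrate the kind of decision an instance placement scriptlet can make, here is a sketch in plain Python. Note the real scriptlet is written in a Python-like dialect with LXD's own API; the function name and member fields below are assumptions made for the example, not LXD's actual interface.

```python
# Illustrative sketch of instance-placement logic in plain Python.
# The actual LXD scriptlet API differs; field names ("memory_total",
# "memory_used") and the function name are hypothetical.

def pick_member(candidate_members):
    """Return the name of the candidate member with the most free memory."""
    best = None
    for member in candidate_members:
        free = member["memory_total"] - member["memory_used"]
        if best is None or free > best[1]:
            best = (member["name"], free)
    return best[0] if best else None

# Example cluster state (values in GiB, made up for illustration).
members = [
    {"name": "node1", "memory_total": 64, "memory_used": 60},
    {"name": "node2", "memory_total": 64, "memory_used": 20},
    {"name": "node3", "memory_total": 32, "memory_used": 8},
]
print(pick_member(members))  # prints node2 (44 GiB free beats 24 and 4)
```

A real scriptlet could weight CPU load, storage pool availability, or instance type in the same way; the point is that the placement decision becomes arbitrary user-defined logic rather than a fixed built-in policy.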

Bug fixes:

  • Fixed an issue with multicast when using the macvlan NIC type with VMs. This was causing IPv6 NDP to not work properly (because it relies on multicast). Although SLAAC IPv6 addresses were working, this turned out to be coincidental: the host-side macvlan interface had a SLAAC-derived link-local address, which allowed multicast to work for that group. To fix the issue more generally, the allmulticast flag is now enabled on the host-side interface. Additionally, IPv6 is disabled entirely on the host-side interface to avoid it getting an unnecessary link-local address.
  • Fixed the disk device raw.mount.options setting. Previously, only mount options (such as uid and gid) were supported, not mount flags (such as ro and nosuid).
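With the fix, mount flags and mount options can be combined in a single raw.mount.options value. A hypothetical example (the device name, source path, and mount point below are made up for illustration):

```shell
# Attach a host directory to container c1, read-only and with nosuid,
# while also mapping ownership via the uid option.
# "data", /srv/data and /mnt/data are illustrative names, not LXD defaults.
lxc config device add c1 data disk source=/srv/data path=/mnt/data \
    raw.mount.options="ro,nosuid,uid=1000"
```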

LXD Charm


  • Added Auto-update Charm Libraries workflow.

YouTube videos

The LXD team is running a YouTube channel with live streams covering LXD releases and weekly videos on different aspects of LXD. You may want to give it a watch and/or subscribe for more content in the coming weeks.


Contribute to LXD

Ever wanted to contribute to LXD but not sure where to start?
We’ve recently gone through some effort to properly tag issues suitable for new contributors on GitHub: Easy issues for new contributors

Upcoming events

Ongoing projects

The list below is feature or refactoring work which will span several weeks/months and can’t be tied directly to a single GitHub issue or pull request.

  • Stable release work for LXC, LXCFS and LXD

Upstream changes

The items listed below are highlights of the work which happened upstream over the past week and which will be included in the next release.




  • Nothing to report this week


  • Nothing to report this week

Dqlite (RAFT library)

  • Nothing to report this week

Distribution work

This section is used to track the work done in downstream Linux distributions to ship the latest LXC, LXD and LXCFS as well as work to get various software to work properly inside containers.


  • Nothing to report this week


  • Nothing to report this week

IIRC, years ago, we (@NorthSec) concluded that macvlan wasn’t a good choice for IPv6’s multicast. I’m really happy to see that it was potentially just a configuration issue, thanks for figuring it out and fixing it!


Thanks! Was this with VMs or containers?

I really don’t remember which it was. @stgraber you probably have a better recollection of this.

Pretty sure it was containers. VMs can’t use macvlan in our setup because our physical NICs need LACP support, so macvlan would have to sit on top of a bond interface rather than a physical NIC, which from what I recall wasn’t a good experience.

Current setup where we use VLAN filtering on a bridge which includes the bond works pretty well.

Ah, so whatever our issue with macvlan was, it’s probably not what Tom fixed :confused: