LXD 3.15 has been released

Introduction

The LXD team is very excited to announce the release of LXD 3.15!

This release includes a number of major new features as well as some significant internal rework of various parts of LXD.

One big highlight is the transition to the dqlite 1.0 branch, which brings improved performance and reliability, both for our cluster users and for standalone installations. This rework moves a lot of the low-level database/replication logic into dedicated C libraries and significantly reduces the amount of back and forth between C and Go.

On the networking front, this release features a lot of improvements, adding support for IPv4/IPv6 filtering on bridges, MAC and VLAN filtering on SR-IOV devices and much improved DHCP server management.

We’re also debuting a new version of our resources API which will now provide details on network devices and storage disks on top of extending our existing CPU, memory and GPU reporting.

And that’s all before looking into the many other performance improvements, smaller features and bugfixes that went into this release.

For our Windows users, this is also the first LXD release to be available through the Chocolatey package manager: choco install lxc

Enjoy!

Major improvements

Switch to dqlite 1.0

After over a year of running all LXD servers on the original implementation of our distributed sqlite database (dqlite), it’s finally time for LXD to switch to the dqlite 1.0 branch.

This doesn’t come with any immediately noticeable improvements for the user, but reduces the number of external dependencies, CPU usage and memory usage for the database. It will also make it significantly easier for us to debug issues and better integrate with more complex database operations when running clusters.

Upon upgrading to LXD 3.15, the on-disk database format will change; the data is converted automatically after an automated backup is taken. For cluster users, the protocol used for database queries between cluster nodes is also changing, which will cause all cluster nodes to refresh at the same time so that they all transition to the new database.
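
For snap users, the automated backup ends up next to the database itself, so you can confirm it was taken after the upgrade (the path assumes the snap layout and the listing below is illustrative):

root@athos:~# ls /var/snap/lxd/common/lxd/database/
global  global.bak  local.db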

Reworked DHCP lease handling

In the past, LXD’s handling of DHCP was pretty limited. We would write static lease entries to the configuration and then kick dnsmasq into re-reading it. For changes and deletions of static leases, we’d need to completely restart the dnsmasq process, which was rather costly.

LXD 3.15 changes that by having LXD itself issue DHCP release requests to the dnsmasq server based on what’s currently in the DHCP lease table. This lets LXD release a lease whenever a container’s configuration is altered or a container is deleted, all without ever needing to restart dnsmasq.
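
The lease table that LXD now works against can be inspected with the existing lxc network list-leases command (the entry below is illustrative):

root@athos:~# lxc network list-leases lxdbr0
+----------+-------------------+---------------+---------+
| HOSTNAME |    MAC ADDRESS    |  IP ADDRESS   |  TYPE   |
+----------+-------------------+---------------+---------+
| c1       | 00:16:3e:12:39:c8 | 10.166.11.178 | DYNAMIC |
+----------+-------------------+---------------+---------+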

Reworked cluster heartbeat handling

In the past, the cluster leader would send a message to all cluster members on a 10s cadence, spreading those heartbeats over time. The heartbeat data itself was just the list of database nodes, so that all cluster members would know where to send database queries.

Separately from that mechanism, we had background tasks on all cluster members: one would periodically look for version mismatches between members to detect pending updates, and another would watch for changes in the list of members or in their IP addresses to re-configure clustered DNS.

For large clusters, those repetitive tasks ended up being rather costly and unneeded.

LXD 3.15 now extends this internal heartbeat to include the most recent version information from the cluster as well as the status of all cluster members, not just the database ones. This means that only the cluster leader needs to retrieve that data and all other members will now have a consistent view of everything within 10s rather than potentially several minutes (as was the case for the update check).
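
That consistent view is what backs lxc cluster list, so all members should now agree on cluster state within a heartbeat round (hypothetical three-node cluster shown):

root@athos:~# lxc cluster list
+-------+--------------------------+----------+--------+-------------------+
| NAME  |           URL            | DATABASE | STATE  |      MESSAGE      |
+-------+--------------------------+----------+--------+-------------------+
| node1 | https://10.166.11.1:8443 | YES      | ONLINE | fully operational |
+-------+--------------------------+----------+--------+-------------------+
| node2 | https://10.166.11.2:8443 | YES      | ONLINE | fully operational |
+-------+--------------------------+----------+--------+-------------------+
| node3 | https://10.166.11.3:8443 | YES      | ONLINE | fully operational |
+-------+--------------------------+----------+--------+-------------------+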

Better syscall interception framework

Quite a bit of work has gone into the syscall interception feature of LXD. Currently this covers mknod and mknodat for systems that run a 5.0+ kernel along with a git snapshot of both liblxc and libseccomp.

The changes include switching to a new liblxc API ahead of the LXC 3.2 release, fixing the handling of shiftfs-backed containers and cleaning up common logic to make it easier to intercept additional syscalls in the near future.
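
Interception remains opt-in per container. Assuming a build with the security.syscalls.intercept.mknod config key, enabling and testing it looks something like this (illustrative):

root@athos:~# lxc config set c1 security.syscalls.intercept.mknod true
root@athos:~# lxc restart c1
root@athos:~# lxc exec c1 -- mknod /root/null2 c 1 3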

More reliable unix socket proxying

A hard-to-track-down bug in the proxy device code was resolved, so unix socket forwarding is now handled properly. The bug was related to end-of-connection detection and forwarding of the disconnection event.

Users of the proxy device for X11 and/or PulseAudio may have noticed windows that wouldn’t close on exit or a sudden inability to start new software using that unix socket. This has now been resolved, which should make life much easier for those running graphical applications in LXD.
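
For reference, a typical unix socket proxy device for X11 looks something like the following; the paths, uid and gid here are examples rather than anything new in this release:

stgraber@castiana:~$ lxc config device add c1 x11 proxy listen=unix:/tmp/.X11-unix/X0 connect=unix:/tmp/.X11-unix/X0 bind=container security.uid=1000 security.gid=1000
Device x11 added to c1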

New features

Hardware VLAN and MAC filtering on SR-IOV

The security.mac_filtering and vlan properties are now available on SR-IOV devices. These directly control the matching SR-IOV options on the virtual function, completely preventing MAC spoofing from the container and, in the case of VLANs, performing hardware filtering at the VF level.

root@athos:~# lxc init ubuntu:18.04 c1
Creating c1
root@athos:~# lxc config device add c1 eth0 nic nictype=sriov parent=eth0 vlan=1015 security.mac_filtering=true
Device eth0 added to c1
root@athos:~# lxc start c1
root@athos:~# lxc list c1
+------+---------+------+-----------------------------------------------+------------+-----------+
| NAME |  STATE  | IPV4 |                     IPV6                      |    TYPE    | SNAPSHOTS |
+------+---------+------+-----------------------------------------------+------------+-----------+
| c1   | RUNNING |      | 2001:470:b0f8:1015:7010:a0ff:feca:e7e1 (eth0) | PERSISTENT | 0         |
+------+---------+------+-----------------------------------------------+------------+-----------+

New storage-size option for lxd-p2c

A new --storage-size option has been added to lxd-p2c which, when used together with --storage, allows specifying the desired volume size to use for the container.

root@mosaic:~# ./lxd-p2c 10.166.11.1 p2c / --storage btrfs --storage-size 10GB
Generating a temporary client certificate. This may take a minute...
Certificate fingerprint: fd200419b271f1dc2a5591b693cc5774b7f234e1ff8c6b78ad703b6888fe2b69
ok (y/n)? y
Admin password for https://10.166.11.1:8443: 
Container p2c successfully created                

stgraber@castiana:~/data/code/go/src/github.com/lxc/lxd (lxc/master)$ lxc config show p2c
architecture: x86_64
config:
  volatile.apply_template: copy
  volatile.eth0.hwaddr: 00:16:3e:12:39:c8
  volatile.idmap.base: "0"
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[]'
devices:
  root:
    path: /
    pool: btrfs
    size: 10GB
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

Ceph FS storage backend for custom volumes

Ceph FS was added as a storage driver for LXD. Support is limited to custom storage volumes though; containers will not be allowed on Ceph FS, and Ceph RBD remains the recommended backend for them.

Ceph FS support includes size restrictions (quota) and native snapshot support when the server, server configuration and client kernel all support those features.

This is a perfect match for users of LXD clustering with Ceph, as Ceph FS allows you to attach the same custom volume to multiple containers at the same time, even if they’re located on different hosts (which isn’t the case for RBD).

stgraber@castiana:~$ lxc storage create test cephfs source=persist-cephfs/castiana
Storage pool test created
stgraber@castiana:~$ lxc storage volume create test my-volume
Storage volume my-volume created
stgraber@castiana:~$ lxc storage volume attach test my-volume c1 data /data

stgraber@castiana:~$ lxc exec c1 -- df -h
Filesystem                                               Size  Used Avail Use% Mounted on
/var/lib/lxd/storage-pools/default/containers/c1/rootfs  142G  420M  141G   1% /
none                                                     492K  4.0K  488K   1% /dev
udev                                                     7.7G     0  7.7G   0% /dev/tty
tmpfs                                                    100K     0  100K   0% /dev/lxd
tmpfs                                                    100K     0  100K   0% /dev/.lxd-mounts
tmpfs                                                    7.8G     0  7.8G   0% /dev/shm
tmpfs                                                    7.8G  156K  7.8G   1% /run
tmpfs                                                    5.0M     0  5.0M   0% /run/lock
tmpfs                                                    7.8G     0  7.8G   0% /sys/fs/cgroup
[2001:470:b0f8:1015:5054:ff:fe5e:ea44]:6789:/castiana     47G     0   47G   0% /data
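
Because Ceph FS supports concurrent mounts, attaching that same custom volume to a second container (even one running on another cluster member) is just another attach call:

stgraber@castiana:~$ lxc storage volume attach test my-volume c2 data /data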

IPv4 and IPv6 filtering (spoof protection)

One frequently requested feature was to extend our spoofing protection beyond just MAC spoofing and do proper IPv4 and IPv6 filtering too.

This effectively allows multiple containers to share the same underlying bridge without having concerns about root in one of those containers being able to spoof the address of another, hijacking traffic or causing connectivity issues.

To prevent a container from being able to spoof the MAC or IP of any other container, you can now set the following properties on the nic device:

  • security.mac_filtering=true
  • security.ipv4_filtering=true
  • security.ipv6_filtering=true

NOTE: Setting those will prevent any internal bridging/nesting inside that container as those rely on multiple MAC addresses being used for a single container.

stgraber@castiana:~$ lxc config device add c1 eth0 nic nictype=bridged name=eth0 parent=lxdbr0 security.mac_filtering=true security.ipv4_filtering=true security.ipv6_filtering=true
Device eth0 added to c1
stgraber@castiana:~$ lxc start c1
stgraber@castiana:~$ lxc list c1
+------+---------+----------------------+----------------------------------------------+------------+-----------+
| NAME |  STATE  |         IPV4         |                     IPV6                     |    TYPE    | SNAPSHOTS |
+------+---------+----------------------+----------------------------------------------+------------+-----------+
| c1   | RUNNING | 10.166.11.178 (eth0) | 2001:470:b368:4242:216:3eff:fefa:e5f8 (eth0) | PERSISTENT | 0         |
+------+---------+----------------------+----------------------------------------------+------------+-----------+

Reworked resources API (host hardware)

The resources API (/1.0/resources) has seen a lot of improvements as well as a re-design of the existing bits. Some of the changes include:

  • CPU
    • Improved reporting of NUMA nodes (now per-core)
    • Improved reporting of frequencies (minimum, current and turbo)
    • Added cache information reporting
    • Added full core/thread topology
    • Added ID (to use for pinning)
    • Added architecture name
  • Memory
    • Added NUMA node reporting
    • Added hugepages tracking
  • GPU
    • Added sub-section for DRM information
    • Now detecting cards which aren’t bound to a DRM driver
    • Support for GPU SR-IOV reporting
  • NIC
    • Added reporting of ethernet & infiniband cards
    • Support for SR-IOV
    • Per-port link information
  • Disks
    • Added support for disk reporting
    • Bus type reporting
    • Partition list
    • Disk identifiers (vendor, WWN, …)

The lxc info --resources command was updated to match.

NOTE: This version of the resources API isn’t compatible with the previous one. The data structures had to change to properly handle more complex CPU topologies (like AMD EPYC) and this couldn’t be done in a backward-compatible way. As a result, the command line client will detect the resources_v2 API extension and fail for servers which do not support it.
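
If you’d rather consume the raw data than the rendered output below, the same information is available as JSON at /1.0/resources, for example through lxc query:

stgraber@castiana:~$ lxc query /1.0/resources
(large JSON document matching the structure shown below)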

Lengthy example output
root@athos:~# lxc info --resources
CPUs (x86_64):
  Socket 0:
    Vendor: GenuineIntel
    Name: Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz
    Caches:
      - Level 1 (type: Data): 33kB
      - Level 1 (type: Instruction): 33kB
      - Level 2 (type: Unified): 262kB
      - Level 3 (type: Unified): 31MB
    Cores:
      - Core 0
        Frequency: 2814Mhz
        NUMA node: 0
        Threads:
          - 0 (id: 0, online: true)
          - 1 (id: 24, online: true)
      - Core 1
        Frequency: 2800Mhz
        NUMA node: 0
        Threads:
          - 0 (id: 1, online: true)
          - 1 (id: 25, online: true)
      - Core 2
        Frequency: 2652Mhz
        NUMA node: 0
        Threads:
          - 0 (id: 2, online: true)
          - 1 (id: 26, online: true)
      - Core 3
        Frequency: 2840Mhz
        NUMA node: 0
        Threads:
          - 0 (id: 27, online: true)
          - 1 (id: 3, online: true)
      - Core 4
        Frequency: 2613Mhz
        NUMA node: 0
        Threads:
          - 0 (id: 28, online: true)
          - 1 (id: 4, online: true)
      - Core 5
        Frequency: 2811Mhz
        NUMA node: 0
        Threads:
          - 0 (id: 29, online: true)
          - 1 (id: 5, online: true)
      - Core 8
        Frequency: 2710Mhz
        NUMA node: 0
        Threads:
          - 0 (id: 30, online: true)
          - 1 (id: 6, online: true)
      - Core 9
        Frequency: 2807Mhz
        NUMA node: 0
        Threads:
          - 0 (id: 31, online: true)
          - 1 (id: 7, online: true)
      - Core 10
        Frequency: 2805Mhz
        NUMA node: 0
        Threads:
          - 0 (id: 32, online: true)
          - 1 (id: 8, online: true)
      - Core 11
        Frequency: 2874Mhz
        NUMA node: 0
        Threads:
          - 0 (id: 33, online: true)
          - 1 (id: 9, online: true)
      - Core 12
        Frequency: 2936Mhz
        NUMA node: 0
        Threads:
          - 0 (id: 10, online: true)
          - 1 (id: 34, online: true)
      - Core 13
        Frequency: 2819Mhz
        NUMA node: 0
        Threads:
          - 0 (id: 11, online: true)
          - 1 (id: 35, online: true)
    Frequency: 2790Mhz (min: 1200Mhz, max: 3200Mhz)
  Socket 1:
    Vendor: GenuineIntel
    Name: Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz
    Caches:
      - Level 1 (type: Data): 33kB
      - Level 1 (type: Instruction): 33kB
      - Level 2 (type: Unified): 262kB
      - Level 3 (type: Unified): 31MB
    Cores:
      - Core 0
        Frequency: 1762Mhz
        NUMA node: 1
        Threads:
          - 0 (id: 12, online: true)
          - 1 (id: 36, online: true)
      - Core 1
        Frequency: 2440Mhz
        NUMA node: 1
        Threads:
          - 0 (id: 13, online: true)
          - 1 (id: 37, online: true)
      - Core 2
        Frequency: 1845Mhz
        NUMA node: 1
        Threads:
          - 0 (id: 14, online: true)
          - 1 (id: 38, online: true)
      - Core 3
        Frequency: 2899Mhz
        NUMA node: 1
        Threads:
          - 0 (id: 15, online: true)
          - 1 (id: 39, online: true)
      - Core 4
        Frequency: 2727Mhz
        NUMA node: 1
        Threads:
          - 0 (id: 16, online: true)
          - 1 (id: 40, online: true)
      - Core 5
        Frequency: 2345Mhz
        NUMA node: 1
        Threads:
          - 0 (id: 17, online: true)
          - 1 (id: 41, online: true)
      - Core 8
        Frequency: 1931Mhz
        NUMA node: 1
        Threads:
          - 0 (id: 18, online: true)
          - 1 (id: 42, online: true)
      - Core 9
        Frequency: 1959Mhz
        NUMA node: 1
        Threads:
          - 0 (id: 19, online: true)
          - 1 (id: 43, online: true)
      - Core 10
        Frequency: 2137Mhz
        NUMA node: 1
        Threads:
          - 0 (id: 20, online: true)
          - 1 (id: 44, online: true)
      - Core 11
        Frequency: 3065Mhz
        NUMA node: 1
        Threads:
          - 0 (id: 21, online: true)
          - 1 (id: 45, online: true)
      - Core 12
        Frequency: 2603Mhz
        NUMA node: 1
        Threads:
          - 0 (id: 22, online: true)
          - 1 (id: 46, online: true)
      - Core 13
        Frequency: 2543Mhz
        NUMA node: 1
        Threads:
          - 0 (id: 23, online: true)
          - 1 (id: 47, online: true)
    Frequency: 2354Mhz (min: 1200Mhz, max: 3200Mhz)

Memory:
  Hugepages:
    Free: 0B
    Used: 171.80GB
    Total: 171.80GB
  NUMA nodes:
    Node 0:
      Hugepages:
        Free: 0B
        Used: 85.90GB
        Total: 85.90GB
      Free: 119.93GB
      Used: 150.59GB
      Total: 270.52GB
    Node 1:
      Hugepages:
        Free: 0B
        Used: 85.90GB
        Total: 85.90GB
      Free: 127.28GB
      Used: 143.30GB
      Total: 270.58GB
  Free: 250.14GB
  Used: 290.96GB
  Total: 541.10GB

GPUs:
  Card 0:
    NUMA node: 0
    Vendor: Matrox Electronics Systems Ltd. (102b)
    Product: MGA G200eW WPCM450 (0532)
    PCI address: 0000:08:03.0
    Driver: mgag200 (5.0.0-20-generic)
    DRM:
      ID: 0
      Card: card0 (226:0)
      Control: controlD64 (226:0)
  Card 1:
    NUMA node: 1
    Vendor: NVIDIA Corporation (10de)
    Product: GK208B [GeForce GT 730] (1287)
    PCI address: 0000:82:00.0
    Driver: vfio-pci (0.2)
  Card 2:
    NUMA node: 1
    Vendor: NVIDIA Corporation (10de)
    Product: GK208B [GeForce GT 730] (1287)
    PCI address: 0000:83:00.0
    Driver: vfio-pci (0.2)

NICs:
  Card 0:
    NUMA node: 0
    Vendor: Intel Corporation (8086)
    Product: I350 Gigabit Network Connection (1521)
    PCI address: 0000:02:00.0
    Driver: igb (5.4.0-k)
    Ports:
      - Port 0 (ethernet)
        ID: eth0
        Address: 00:25:90:ef:ff:31
        Supported modes: 10baseT/Half, 10baseT/Full, 100baseT/Half, 100baseT/Full, 1000baseT/Full
        Supported ports: twisted pair
        Port type: twisted pair
        Transceiver type: internal
        Auto negotiation: true
        Link detected: true
        Link speed: 1000Mbit/s (full duplex)
    SR-IOV information:
      Current number of VFs: 7
      Maximum number of VFs: 7
      VFs: 7
      - NUMA node: 0
        Vendor: Intel Corporation (8086)
        Product: I350 Ethernet Controller Virtual Function (1520)
        PCI address: 0000:02:10.0
        Driver: igbvf (2.4.0-k)
        Ports:
          - Port 0 (ethernet)
            ID: enp2s16
            Address: 72:10:a0:ca:e7:e1
            Auto negotiation: false
            Link detected: false
      - NUMA node: 0
        Vendor: Intel Corporation (8086)
        Product: I350 Ethernet Controller Virtual Function (1520)
        PCI address: 0000:02:10.4
        Driver: igbvf (2.4.0-k)
        Ports:
          - Port 0 (ethernet)
            ID: enp2s16f4
            Address: 3e:fa:1d:b2:17:5e
            Auto negotiation: false
            Link detected: false
      - NUMA node: 0
        Vendor: Intel Corporation (8086)
        Product: I350 Ethernet Controller Virtual Function (1520)
        PCI address: 0000:02:11.0
        Driver: igbvf (2.4.0-k)
        Ports:
          - Port 0 (ethernet)
            ID: enp2s17
            Address: 36:33:bf:74:89:8e
            Auto negotiation: false
            Link detected: false
      - NUMA node: 0
        Vendor: Intel Corporation (8086)
        Product: I350 Ethernet Controller Virtual Function (1520)
        PCI address: 0000:02:11.4
        Driver: igbvf (2.4.0-k)
        Ports:
          - Port 0 (ethernet)
            ID: enp2s17f4
            Address: 86:a4:f0:b5:2f:e1
            Auto negotiation: false
            Link detected: false
      - NUMA node: 0
        Vendor: Intel Corporation (8086)
        Product: I350 Ethernet Controller Virtual Function (1520)
        PCI address: 0000:02:12.0
        Driver: igbvf (2.4.0-k)
        Ports:
          - Port 0 (ethernet)
            ID: enp2s18
            Address: 56:0a:5a:0c:e7:ff
            Auto negotiation: false
            Link detected: false
      - NUMA node: 0
        Vendor: Intel Corporation (8086)
        Product: I350 Ethernet Controller Virtual Function (1520)
        PCI address: 0000:02:12.4
        Driver: igbvf (2.4.0-k)
        Ports:
          - Port 0 (ethernet)
            ID: enp2s18f4
            Address: 0a:a9:b3:21:13:8c
            Auto negotiation: false
            Link detected: false
      - NUMA node: 0
        Vendor: Intel Corporation (8086)
        Product: I350 Ethernet Controller Virtual Function (1520)
        PCI address: 0000:02:13.0
        Driver: igbvf (2.4.0-k)
        Ports:
          - Port 0 (ethernet)
            ID: enp2s19
            Address: ae:1a:db:06:8a:51
            Auto negotiation: false
            Link detected: false
  Card 1:
    NUMA node: 0
    Vendor: Intel Corporation (8086)
    Product: I350 Gigabit Network Connection (1521)
    PCI address: 0000:02:00.1
    Driver: igb (5.4.0-k)
    Ports:
      - Port 0 (ethernet)
        ID: eth1
        Address: 00:25:90:ef:ff:31
        Supported modes: 10baseT/Half, 10baseT/Full, 100baseT/Half, 100baseT/Full, 1000baseT/Full
        Supported ports: twisted pair
        Port type: twisted pair
        Transceiver type: internal
        Auto negotiation: true
        Link detected: true
        Link speed: 1000Mbit/s (full duplex)
    SR-IOV information:
      Current number of VFs: 0
      Maximum number of VFs: 7

Disks:
  Disk 0:
    NUMA node: 0
    ID: nvme0n1
    Device: 259:0
    Model: INTEL SSDPEKNW020T8
    Type: nvme
    Size: 2.05TB
    WWN: eui.0000000001000000e4d25c8b7c705001
    Read-Only: false
    Removable: false
    Partitions:
      - Partition 1
        ID: nvme0n1p1
        Device: 259:1
        Read-Only: false
        Size: 52.43MB
      - Partition 2
        ID: nvme0n1p2
        Device: 259:2
        Read-Only: false
        Size: 26.84GB
      - Partition 3
        ID: nvme0n1p3
        Device: 259:3
        Read-Only: false
        Size: 8.59GB
      - Partition 4
        ID: nvme0n1p4
        Device: 259:4
        Read-Only: false
        Size: 53.69GB
      - Partition 5
        ID: nvme0n1p5
        Device: 259:5
        Read-Only: false
        Size: 1.96TB
  Disk 1:
    NUMA node: 0
    ID: nvme1n1
    Device: 259:6
    Model: INTEL SSDPEKNW020T8
    Type: nvme
    Size: 2.05TB
    WWN: eui.0000000001000000e4d25cca7c705001
    Read-Only: false
    Removable: false
    Partitions:
      - Partition 1
        ID: nvme1n1p1
        Device: 259:7
        Read-Only: false
        Size: 52.43MB
      - Partition 2
        ID: nvme1n1p2
        Device: 259:8
        Read-Only: false
        Size: 26.84GB
      - Partition 3
        ID: nvme1n1p3
        Device: 259:9
        Read-Only: false
        Size: 8.59GB
      - Partition 4
        ID: nvme1n1p4
        Device: 259:10
        Read-Only: false
        Size: 53.69GB
      - Partition 5
        ID: nvme1n1p5
        Device: 259:11
        Read-Only: false
        Size: 1.96TB
  Disk 2:
    NUMA node: 0
    ID: sda
    Device: 8:0
    Model: WDC WD60EFRX-68M
    Type: scsi
    Size: 6.00TB
    Read-Only: false
    Removable: false
    Partitions:
      - Partition 1
        ID: sda1
        Device: 8:1
        Read-Only: false
        Size: 6.00TB
      - Partition 9
        ID: sda9
        Device: 8:9
        Read-Only: false
        Size: 8.39MB
  Disk 3:
    NUMA node: 0
    ID: sdb
    Device: 8:16
    Model: WDC WD60EFRX-68M
    Type: scsi
    Size: 6.00TB
    Read-Only: false
    Removable: false
    Partitions:
      - Partition 1
        ID: sdb1
        Device: 8:17
        Read-Only: false
        Size: 6.00TB
      - Partition 9
        ID: sdb9
        Device: 8:25
        Read-Only: false
        Size: 8.39MB
  Disk 4:
    NUMA node: 0
    ID: sdc
    Device: 8:32
    Model: WDC WD60EFRX-68M
    Type: scsi
    Size: 6.00TB
    Read-Only: false
    Removable: false
    Partitions:
      - Partition 1
        ID: sdc1
        Device: 8:33
        Read-Only: false
        Size: 6.00TB
      - Partition 9
        ID: sdc9
        Device: 8:41
        Read-Only: false
        Size: 8.39MB
  Disk 5:
    NUMA node: 0
    ID: sdd
    Device: 8:48
    Model: WDC WD60EFRX-68L
    Type: scsi
    Size: 6.00TB
    Read-Only: false
    Removable: false
    Partitions:
      - Partition 1
        ID: sdd1
        Device: 8:49
        Read-Only: false
        Size: 6.00TB
      - Partition 9
        ID: sdd9
        Device: 8:57
        Read-Only: false
        Size: 8.39MB
  Disk 6:
    NUMA node: 0
    ID: sde
    Device: 8:64
    Model: CT1000MX500SSD1
    Type: scsi
    Size: 1.00TB
    Read-Only: false
    Removable: false
    Partitions:
      - Partition 1
        ID: sde1
        Device: 8:65
        Read-Only: false
        Size: 52.43MB
      - Partition 2
        ID: sde2
        Device: 8:66
        Read-Only: false
        Size: 1.07GB
      - Partition 3
        ID: sde3
        Device: 8:67
        Read-Only: false
        Size: 17.18GB
      - Partition 4
        ID: sde4
        Device: 8:68
        Read-Only: false
        Size: 4.29GB
      - Partition 5
        ID: sde5
        Device: 8:69
        Read-Only: false
        Size: 977.60GB
  Disk 7:
    NUMA node: 0
    ID: sdf
    Device: 8:80
    Model: WDC WD60EFRX-68M
    Type: scsi
    Size: 6.00TB
    Read-Only: false
    Removable: false
    Partitions:
      - Partition 1
        ID: sdf1
        Device: 8:81
        Read-Only: false
        Size: 6.00TB
      - Partition 9
        ID: sdf9
        Device: 8:89
        Read-Only: false
        Size: 8.39MB
  Disk 8:
    NUMA node: 0
    ID: sdg
    Device: 8:96
    Model: WDC WD60EFRX-68M
    Type: scsi
    Size: 6.00TB
    Read-Only: false
    Removable: false
    Partitions:
      - Partition 1
        ID: sdg1
        Device: 8:97
        Read-Only: false
        Size: 6.00TB
      - Partition 9
        ID: sdg9
        Device: 8:105
        Read-Only: false
        Size: 8.39MB
  Disk 9:
    NUMA node: 0
    ID: sdh
    Device: 8:112
    Model: WDC WD60EFRX-68M
    Type: scsi
    Size: 6.00TB
    Read-Only: false
    Removable: false
    Partitions:
      - Partition 1
        ID: sdh1
        Device: 8:113
        Read-Only: false
        Size: 6.00TB
      - Partition 9
        ID: sdh9
        Device: 8:121
        Read-Only: false
        Size: 8.39MB
  Disk 10:
    NUMA node: 0
    ID: sdi
    Device: 8:128
    Model: WDC WD60EFRX-68M
    Type: scsi
    Size: 6.00TB
    Read-Only: false
    Removable: false
    Partitions:
      - Partition 1
        ID: sdi1
        Device: 8:129
        Read-Only: false
        Size: 6.00TB
      - Partition 9
        ID: sdi9
        Device: 8:137
        Read-Only: false
        Size: 8.39MB

Control over uid, gid and cwd during command execution

It is now possible to specify the user id (uid), group id (gid) and current working directory (cwd) to use for a particular command. Note that user names and group names aren’t supported.

stgraber@castiana:~$ lxc exec c1 --user 1000 --group 1000 --cwd /tmp -- bash
ubuntu@c1:/tmp$ id
uid=1000(ubuntu) gid=1000(ubuntu) groups=1000(ubuntu)
ubuntu@c1:/tmp$ 

Quota support for custom storage volumes on dir backend

When using a storage pool backed by the dir driver with a source path that supports filesystem project quotas, it is now possible to set disk usage limits on custom volumes.

stgraber@castiana:~$ sudo truncate -s 100G test.img
stgraber@castiana:~$ sudo mkfs.ext4 test.img
mke2fs 1.45.2 (27-May-2019)
Discarding device blocks: done                            
Creating filesystem with 26214400 4k blocks and 6553600 inodes
Filesystem UUID: 50ee78cb-e4e3-4e09-b38b-3fb06c6740a4
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
	4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (131072 blocks): done
Writing superblocks and filesystem accounting information: done   
stgraber@castiana:~$ sudo tune2fs -O project -Q prjquota test.img
tune2fs 1.45.2 (27-May-2019)
stgraber@castiana:~$ sudo mkdir /mnt/test
stgraber@castiana:~$ sudo mount -o prjquota test.img /mnt/test
stgraber@castiana:~$ sudo rmdir /mnt/test/lost+found

stgraber@castiana:~$ lxc storage create dir dir source=/mnt/test
Storage pool dir created
stgraber@castiana:~$ lxc storage volume create dir blah
Storage volume blah created
stgraber@castiana:~$ lxc storage volume attach dir blah c1 blah /blah

stgraber@castiana:~$ lxc exec c1 -- df -h /blah
Filesystem      Size  Used Avail Use% Mounted on
/dev/loop32      98G   61M   93G   1% /blah
stgraber@castiana:~$ lxc storage volume set dir blah size 10GB
stgraber@castiana:~$ lxc exec c1 -- df -h /blah
Filesystem      Size  Used Avail Use% Mounted on
/dev/loop32     9.4G  4.0K  9.4G   1% /blah

Bugs fixed

  • client: Move to units package
  • doc: Fix underscore escaping
  • doc/devlxd: Fix path to host’s communication socket
  • doc/README: Add basic install instructions
  • doc/README: Update linker flags
  • i18n: Update translations from weblate
  • i18n: Update translation templates
  • lxc: Fix renaming storage volume snapshots
  • lxc: Move to units package
  • lxc/copy: Always strip volatile.last_state.power
  • lxc/export: Expire the backup after 24 hours
  • lxd: Better handle bad commands
  • lxd: Fix renaming volume snapshots
  • lxd: Move to units package
  • lxd: Use RunCommandSplit when needed
  • lxd/api: Update handler funcs to take nodeRefreshFunc
  • lxd/cluster: Always return node list on rebalance
  • lxd/cluster: Better handle DB node removal
  • lxd/cluster: Export some heartbeat code
  • lxd/cluster: Perform heartbeats only on the leader
  • lxd/cluster: Update HandlerFuncs calls in tests
  • lxd/cluster: Update heartbeat test to pass last leader heartbeat time
  • lxd/cluster: Update tests not to use KeepUpdated in tests
  • lxd/cluster: Use correct node id on promote
  • lxd/cluster/gateway: Update to receive new heartbeat format
  • lxd/cluster/heartbeat: Add new heartbeat request format
  • lxd/cluster/heartbeat: Compare both ID and Address
  • lxd/cluster/heartbeat: Fix bug when nodes join during heartbeat
  • lxd/cluster/heartbeat: Remove unneeded go routine (as context does cancel)
  • lxd/cluster/heartbeat: Use current timestamp for DB record
  • lxd/cluster/membership: Update Join to send new heartbeat format
  • lxd/cluster/upgrade: Remove KeepUpdated and use MayUpdate directly
  • lxd/cluster/upgrade: Remove unused context
  • lxd/cluster/upgrade: Remove unused context from test
  • lxd/containers: Add allocateNetworkFilterIPs
  • lxd/containers: Add error checking for calls to networkClearLease
  • lxd/containers: Add SR-IOV parent restoration
  • lxd/containers: Better detect and alert on missing br_netfilter module
  • lxd/containers: Combine state updates
  • lxd/containers: Consistent comment endings
  • lxd/containers: Disable auto mac generation for sriov devices
  • lxd/containers: Ensure dnsmasq config refresh if bridge nic added/removed
  • lxd/containers: Ensure that sriov devices use volatile host_name for removal
  • lxd/containers: Fix return value of detachInterfaceRename
  • lxd/containers: Fix showing host_name of veth pair in lxc info
  • lxd/containers: Fix snapshot restore on ephemeral
  • lxd/containers: Fix template handling
  • lxd/containers: generateNetworkFilterEbtablesRules to accept IP info as args
  • lxd/containers: generateNetworkFilterIptablesRules to accept IP info as args
  • lxd/containers: Improve comment on DHCP host config removal
  • lxd/containers: Made detection of veth nic explicit
  • lxd/containers: Move all nic hot plug functionality into separate functions
  • lxd/containers: Move container taring logic into standalone class
  • lxd/containers: Move network filter setup into setupHostVethDevice
  • lxd/containers: Move stop time nic device detach into cleanupNetworkDevices
  • lxd/containers: Remove containerNetworkKeys as unused
  • lxd/containers: Remove ineffective references to containerNetworkKeys
  • lxd/containers: Remove the need for fixed veth peer when doing mac_filtering
  • lxd/containers: Remove unused arg from setNetworkRoutes
  • lxd/containers: Separate cleanupHostVethDevices into cleanupHostVethDevice
  • lxd/containers: Speed up startCommon a bit
  • lxd/containers: Update removeNetworkFilters to use dnsmasq config
  • lxd/containers: Update setNetworkFilters to allocate IPs if needed
  • lxd/containers: Update setupHostVethDevice to wipe old DHCPv6 leases
  • lxd/containers: Use current binary for early hooks
  • lxd/daemon: Update daemon to support node refresh tasks from heartbeat
  • lxd/db: Add Gateway.isLeader() function
  • lxd/db: Better formatting
  • lxd/db: Bootstrap dqlite for new servers
  • lxd/db: Check dqlite version of connecting nodes
  • lxd/db: Check TLS cert in raft connection handler
  • lxd/db: Conditionally check leadership in dqlite dial function
  • lxd/db: Convert tests to the new go-dqlite API
  • lxd/db: Copy network data between TLS Go conn and Unix socket
  • lxd/db: Custom dqlite dial function
  • lxd/db: Don’t use the db in legacy patch 12
  • lxd/db: Drop dependencies on hashicorp/raft
  • lxd/db: Drop hashicorp/raft setup code
  • lxd/db: Drop the legacy /internal/raft endpoint
  • lxd/db: Drop unused hashicorp/raft network transport wrapper
  • lxd/db: Fix comment
  • lxd/db: Fix import
  • lxd/db: Fix lint
  • lxd/db: Get information about current servers from dqlite
  • lxd/db: Ignore missing WAL files when reproducing snapshots
  • lxd/db: Improve gateway standalone test
  • lxd/db: Instantiate dqlite
  • lxd/db: Move container list from containersShutdown into containersOnDisk
  • lxd/db: No need to shutdown hashicorp/raft instance
  • lxd/db: Only use the schema db transaction in legacy patches
  • lxd/db: Perform data migration to dqlite 1.0 format
  • lxd/db: Retry copy-related errors
  • lxd/db: Return HTTP code 426 (Upgrade Required) if peer has old version
  • lxd/db: Set max open conns before running schema upgrades
  • lxd/db: Translate address of first node
  • lxd/db: Turn patchShrinkLogsDBFile into a no-op
  • lxd/db: Update comment
  • lxd/db: Update docstring
  • lxd/db: Update unit tests
  • lxd/db: Use dqlite leave primitive
  • lxd/db: Use dqlite’s join primitive
  • lxd/db: Use ID instead of address to detect initial node
  • lxd/db: Wire isLeader()
  • lxd/instance_types: Improve errors
  • lxd/main: Fix debug mode flag to actually enable debug mode
  • lxd/main: Fix test runner by allowing empty command arg
  • lxd/main_callhook: Don’t call /1.0
  • lxd/main_checkfeature: Remove unused variable
  • lxd/main_forkmknod: Check for MS_NODEV
  • lxd/main_forkmknod: Correctly handle shiftfs
  • lxd/main_forkmknod: Ensure correct device ownership
  • lxd/main_forkmknod: Remove unused variables
  • lxd/main_forkmknod: Simplify
  • lxd/main_forknet: Clean up forknet detach error logging and output
  • lxd/networks: Add DHCP range functions
  • lxd/networks: Add --dhcp-rapid-commit when dnsmasq version > 2.79
  • lxd/networks: Add IP allocation functions
  • lxd/networks: Add networkDeviceBindWait function
  • lxd/networks: Add networkDHCPv4Release function
  • lxd/networks: Add networkDHCPv6Release function and associated packet helper
  • lxd/networks: Add networkGetVirtFuncInfo function
  • lxd/networks: Add networkUpdateStaticContainer
  • lxd/networks: Add SR-IOV related PCI bind/unbind helper functions
  • lxd/networks: Allow querying state on non-managed
  • lxd/networks: Call networkUpdateForkdnsServersTask from node refresh
  • lxd/networks: Cleaned up the device bind/unbind functions for SR-IOV
  • lxd/networks: Fix bug preventing 3rd party routes restoration on startup
  • lxd/networks: Remove unused context
  • lxd/networks: Remove unused state.State from networkClearLease()
  • lxd/networks: Start dnsmasq with --no-ping option to avoid delayed writes
  • lxd/networks: Update networkClearLease to support a mode flag
  • lxd/networks: Update networkClearLease to use DHCP release helpers
  • lxd/networks: Update networkUpdateStatic to use existing config for filters
  • lxd/networks: Update networkUpdateStatic to use networkUpdateStaticContainer
  • lxd/networks: Update refreshForkdnsServerAddresses to run from node refresh
  • lxd/patches: Handle btrfs snapshots properly
  • lxd/proxy: Fix error handling for unix
  • lxd/rsync: Allow disabling xattrs during copy
  • lxd/rsync: Don’t double-specify --xattrs
  • lxd/seccomp: Add insertMount() helpers
  • lxd/seccomp: Cause a default message to be sent
  • lxd/seccomp: Check permissions before handling mknod via device injection
  • lxd/seccomp: Cleanup + simplify
  • lxd/seccomp: Define __NR_mknod if missing
  • lxd/seccomp: Ensure correct owner on __NR_mknod{at}
  • lxd/seccomp: Fix error reporting
  • lxd/seccomp: Handle compat arch syscalls
  • lxd/seccomp: Handle new liblxc seccomp notify updates
  • lxd/seccomp: Retry with mount hotplug
  • lxd/seccomp: Rework missing syscall number definitions
  • lxd/seccomp: Simplify and make more secure
  • lxd/storage: Fix copies of volumes with snapshots
  • lxd/storage/ceph: Fix snapshot deletion cleanup
  • lxd/storage/dir: Allow size limits on dir volumes
  • lxd/storage/dir: Fix quotas on dir
  • lxd/storage/dir: Fix some deletion cases
  • lxd/storage/lvm: Adds space used reporting for LVM thinpools
  • lxd/task/group: Improve locking of Start/Add/Stop functions to avoid races
  • Makefile: Update make deps to build also libco and raft
  • shared: Add volatile key suffixes for SR-IOV
  • shared: Better handle stdout/stderr in RunCommand
  • shared: Move to units package
  • shared/netutils: Add lxc_abstract_unix_recv_fds_iov()
  • shared/netutils: Fix bug with getting container PID
  • shared/termios: Fix port to sys/unix
  • shared/units: Move unit functions
  • tests: Add check for dnsmasq host config file removal on container delete
  • tests: Add DHCP lease release tests
  • tests: Add p2p test for adding new nic rather than updating existing
  • tests: Add SR-IOV tests
  • tests: Add test for dnsmasq host config update when nic added/removed
  • tests: Add tests for security.mac_filtering functionality
  • tests: Always pass --force to stop/restart
  • tests: Don’t leak remotes in tests
  • tests: Fix bad call to spawn_lxd
  • tests: Fix typo in test/suites/clustering.sh
  • tests: Increase nic bridge ping sleep time to 2s
  • tests: Make new shellcheck happy
  • tests: Make shellcheck happy
  • tests: Optimize ceph storage test
  • tests: Properly scope LXD_NETNS
  • tests: Remove un-needed LXD_DIR
  • tests: Re-order tests a bit
  • tests: Scope cluster LXD variables
  • tests: Test renaming storage volume snapshots
  • tests: Update godeps
  • tests: Update nic bridge tests to check for route restoration
  • various: Removes use of golang.org/x/net/context in place of stdlib context
  • vendor: Drop vendor directory

Try it for yourself

This new LXD release is already available for you to try on our demo service.

Downloads

The release tarballs can be found on our download page.

Binary builds are also available for:

  • Linux: snap install lxd
  • MacOS: brew install lxc
  • Windows: choco install lxc

LXD 3.15 has been available in the candidate channel for our snap users ever since its release late on Thursday. Our current plan is to release this to stable late on Monday or early on Tuesday this week.

If something goes wrong with the update for some users, reverting to the previous snap version will not work. So it will be necessary to grab this automated backup and restore it, and presumably this will not be automatic. Are there any instructions for that?

There’s a short mention of how it works here (see the very last paragraph, which is truncated here):

Essentially you’ll have to restore the database/global.bak, then start a 3.14 lxd binary that understands the old format.
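
A rough sketch of what that can look like on a snap install; the paths assume the snap layout and this is a starting point rather than exact instructions:

systemctl stop snap.lxd.daemon
cd /var/snap/lxd/common/lxd/database
mv global global.broken
# assumes global.bak is a directory backup of the global database
cp -r global.bak global
# then run a LXD 3.14 binary against /var/snap/lxd/common/lxd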

Thanks, I can see the db.bin under /var/snap/lxd/common/lxd/database/global indeed.

After a lot of additional testing and a number of small fixes and improvements to the database code, we have now started rolling out LXD 3.15 to users of the stable channel.

Note that it is generally not possible to roll back to LXD 3.14, so avoid using snap rollback. Instead, should something go wrong, first attempt to restart LXD with systemctl reload snap.lxd.daemon. If that doesn’t help, please file a bug at https://github.com/lxc/lxd/issues so we can quickly diagnose the issue and provide a hotfix or targeted instructions if needed.

So far the only issues we’ve noticed have been related to specific kernel versions combined with less usual filesystems and direct I/O support.
Specifically, the two issues that have been identified and resolved were LXD on ZFS 0.8 with Linux 5.2 (partial direct I/O support) and LXD on shiftfs (direct I/O detection issue).

General question regarding possible future updates to the RBAC implementation in LXD. Will RBAC be enforceable through services other than the external Canonical RBAC service? We currently use pbis for our centralized authentication and are wondering if there is a way to break down permissions similar to how it works with Canonical’s RBAC service.

This is pretty unlikely. It’s annoying enough to deal with one RBAC implementation, and as that’s the one which comes for free for users who get Ubuntu/LXD support from Canonical, it’s the one we’ll be focusing on for now.