LXD 3.0.0 has been released

Introduction

The LXD team is pleased to announce the release of LXD 3.0.0!
This is the second LTS release for the LXD project and will be supported until June 2023.

New features (since 2.21)

LXD 3.0 is going to be our main LTS release for the next two years, receiving frequent bugfix updates backported from the current feature release.

We spent over three months since the LXD 2.21 release landing all the features we wanted to see in LXD 3.0 and cleaning up a lot of existing code to keep it maintainable for the duration of the LTS. The main highlights are below.

Clustering

The biggest new feature for LXD 3.0 is the introduction of clustering support.
This allows for identically configured LXD servers to be joined together as part of a cluster, appearing to the outside world as one big LXD server.

The LXD database is replicated using dqlite (a combination of sqlite3 and raft), ensuring that 3 of the cluster members hold a copy of the entire database at any given time.

No special system configuration or services are required to set up LXD clustering. All you need is a few available machines or VMs with similar network and storage properties; lxd init will then walk you through creating the cluster and joining additional servers to it.
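
A minimal sketch of that workflow, assuming three fresh servers with LXD 3.0 installed (the interactive answers are omitted here):

# On the first server, enable clustering when lxd init asks about it:
lxd init

# On each additional server, run lxd init and choose to join the existing cluster.

# From any member, verify the cluster:
lxc cluster list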

A short recording of setting up a LXD cluster on 3 nodes, using MAAS to allocate machines and networks, accompanies this post.

The main contributor for this feature, Free Ekanayaka, also gave a longer presentation on LXD clustering at FOSDEM 2018.

You can also check the clustering documentation on the Linux Containers website.

Physical to container migration with lxd-p2c

A new tool called lxd-p2c makes it possible to import a system’s filesystem into a LXD container using the LXD API.

After building a copy of the tool, the resulting binary can be transferred to any system that you want to turn into a container. Point it to a remote LXD server and the entire system’s filesystem will be transferred over the LXD migration API, with a new container being created from it.
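
A minimal sketch of the process, assuming a Go toolchain on the build machine; the server URL and container name below are illustrative, and lxd-p2c --help shows the exact arguments:

# Build the tool:
go get -v github.com/lxc/lxd/lxd-p2c

# Copy the binary to the system you want to convert, then point it at a LXD server:
./lxd-p2c https://lxd-server:8443 my-container /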

The main contributor for this feature, Stéphane Graber, also gave a presentation about it at FOSDEM 2018.

Support for NVIDIA runtime passthrough

A common issue for those using NVIDIA GPUs inside containers is the requirement to keep the userspace libraries in sync with the kernel driver.

This is made particularly difficult when the container’s owner isn’t also the host’s owner, as the two are then likely to get out of sync at any time and without warning.

A newly introduced nvidia.runtime container configuration key, combined with a copy of the nvidia-container-cli tool and liblxc 3.0, now makes it possible to automatically detect all the right bits on the host system and pass them into the container at boot time.

This lets you save a lot of space and greatly simplifies maintenance.
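
A minimal sketch of enabling this on an existing container, assuming nvidia-container-cli is present on the host (container and device names are illustrative):

# Pass the GPU into the container and enable the NVIDIA runtime:
lxc config device add c1 gpu gpu
lxc config set c1 nvidia.runtime true
lxc restart c1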


Hotplug support for unix-char and unix-block devices

A new “required” property has been added to all unix type devices. When set to false, LXD will wait until the requested path is available on the host before automatically passing it into the container.

This allows for something like this:

lxc config device add c1 ttyUSB0 unix-char path=/dev/ttyUSB0 required=false

The c1 container will now get access to that USB serial device as soon as it’s plugged into the system and it will automatically be removed from the container when unplugged.
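
The same applies to block devices; for example (device name and path are illustrative):

lxc config device add c1 sdb unix-block path=/dev/sdb required=false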

Local copy/move of storage volumes

It’s now possible to copy and move custom storage volumes between storage pools.

stgraber@castiana:~$ lxc storage volume copy ssd/example default/example
Storage volume copied successfully!

stgraber@castiana:~$ lxc storage volume move ssd/example default/example
Storage volume moved successfully!

Remote transfer of custom storage volumes

A new storage migration API was introduced allowing for the exact same operations as shown above to work between LXD servers as well, using the same syntax as would be used for container migration.
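
For example, assuming a second server was added as a remote named “other” with lxc remote add:

lxc storage volume copy ssd/example other:default/example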

proxy device type to forward network connections

The new proxy device type allows for forwarding TCP connections between host and containers.

For example, to forward any connection to port 80 on the host to container c1 on its localhost IP on port 80:

lxc config device add c1 http proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80

Events through /dev/lxd

The REST API endpoint exposed inside the container can now be used to receive events whenever a configuration key or device is added, removed or modified.

root@c1:~# curl -i -N -H "Connection: Upgrade" -H "Upgrade: websocket" --header "Sec-WebSocket-Version: 13" --header "Sec-WebSocket-Key: SGVsbG8sIHdvcmxkIQ==" --unix-socket /dev/lxd/sock lxd/1.0/events
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: qGEgH3En71di5rrssAZTmtRTyFk=

{"metadata":{"key":"user.foo","old_value":"","value":"bar"},"timestamp":"2018-04-02T23:58:54.433992023-04:00","type":"config"}
{"metadata":{"action":"added","config":{"path":"/home","source":"/home","type":"disk"},"name":"home"},"timestamp":"2018-04-02T23:59:25.65007597-04:00","type":"device"}

Switched command line parser

Our previous command line parser, gnuflag, didn’t match our command line structure particularly well, causing confusing help and error messages. We have now transitioned to using the cobra command line parser, joining a number of other major Go projects.

Process count column in lxc list

An optional “processes” column was added to lxc list showing the number of processes running inside the container.

stgraber@castiana:~$ lxc list -c nsN c1
+------+---------+-----------+
| NAME |  STATE  | PROCESSES |
+------+---------+-----------+
| c1   | RUNNING | 33        |
+------+---------+-----------+

lxc storage info sub-command

A new info subcommand was added as a way to get easy, human-readable information about a storage pool:

stgraber@castiana:~$ lxc storage info ssd
info:
  description: ""
  driver: dir
  name: ssd
  space used: 9.29GB
  total space: 173.12GB
used by: {}

Option for alternate IPv4 gateway

A new ipv4.dhcp.gateway option is now available for LXD managed bridges. This lets you set a gateway other than LXD itself and can be useful when mixing LXD bridges with physical networks.
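
For example, to point containers on lxdbr0 at an existing physical router (the address is illustrative):

lxc network set lxdbr0 ipv4.dhcp.gateway 10.166.11.1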

Support for symlinks in file transfer

When a recursive file transfer includes symlinks, those are now properly created as symlinks on the target, rather than having the content of the files they point to pushed or pulled.
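
For example, a recursive push now preserves any symlinks found in the tree (paths are illustrative):

lxc file push -r /opt/myapp c1/opt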

Pretty rendering of log entries in lxc monitor

The LXD log messages have always been available over the event interface, accessible through the lxc monitor tool. However, those raw events were sometimes pretty hard to read.

The command line client now knows how to filter and re-format those log events to look exactly as if you were looking at the server’s log output.

stgraber@castiana:~$ lxc monitor --pretty --loglevel=info --type=logging
INFO[04-02|22:57:39] Stopping container                       action=stop created="2018-02-27 18:02:02 -0500 EST" ephemeral=false name=snapcraft stateful=false used="2018-03-29 15:33:05 -0400 EDT"
INFO[04-02|22:57:40] Stopped container                        action=stop created="2018-02-27 18:02:02 -0500 EST" ephemeral=false name=snapcraft stateful=false used="2018-03-29 15:33:05 -0400 EDT"
INFO[04-02|22:57:40] Starting container                       action=start created="2018-02-27 18:02:02 -0500 EST" ephemeral=false name=snapcraft stateful=false used="2018-03-29 15:33:05 -0400 EDT"
INFO[04-02|22:57:41] Started container                        action=start created="2018-02-27 18:02:02 -0500 EST" ephemeral=false name=snapcraft stateful=false used="2018-03-29 15:33:05 -0400 EDT"

lxc network list-leases sub-command

DHCP leases on LXD managed bridges can now be queried directly through the API and the command line tool.

stgraber@castiana:~$ lxc network list-leases lxdbr0
+-----------+-------------------+---------------+---------+
| HOSTNAME  |    MAC ADDRESS    |  IP ADDRESS   |  TYPE   |
+-----------+-------------------+---------------+---------+
| bar       | 00:16:3e:e0:36:3a | 10.166.11.185 | DYNAMIC |
+-----------+-------------------+---------------+---------+
| snapcraft | 00:16:3e:be:f1:87 | 10.166.11.120 | DYNAMIC |
+-----------+-------------------+---------------+---------+

lxc alias command

It’s now possible to list, create and delete command line aliases directly from the command line tool, rather than having to manually edit the configuration file.

stgraber@castiana:~$ lxc alias list
+--------+-------------------------------------------+
| ALIAS  |                  TARGET                   |
+--------+-------------------------------------------+
| delete | delete -f                                 |
+--------+-------------------------------------------+
| ls     | list -c ns46S                             |
+--------+-------------------------------------------+
| ubuntu | exec @ARGS@ -- sudo --login --user ubuntu |
+--------+-------------------------------------------+
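
Creating and deleting aliases uses the same subcommand; for instance (the alias name and target are just examples):

lxc alias add ll "list -c ns46S"
lxc alias remove ll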

lxc config device override sub-command

To override a particular option of a device that’s inherited from a profile, such as the default network interface, you need to create a device that’s local to the container and uses the same name as the one from the profile. This device will then take priority over the one coming from the profile and let you set any configuration you want.

To simplify this process, all of this can now be done using lxc config device override, passing it the container, the device and the configuration keys that should be changed.

stgraber@castiana:~$ lxc launch ubuntu:16.04 c1
Creating c1
Starting c1
stgraber@castiana:~$ lxc config device override c1 eth0 ipv4.address=10.166.11.42
Device eth0 overridden for c1
stgraber@castiana:~$ lxc restart c1
stgraber@castiana:~$ lxc list c1
+------+---------+---------------------+----------------------------------------------+------------+-----------+
| NAME |  STATE  |        IPV4         |                     IPV6                     |    TYPE    | SNAPSHOTS |
+------+---------+---------------------+----------------------------------------------+------------+-----------+
| c1   | RUNNING | 10.166.11.42 (eth0) | 2001:470:b368:4242:216:3eff:fed1:aff3 (eth0) | PERSISTENT | 0         |
+------+---------+---------------------+----------------------------------------------+------------+-----------+

Operations now have a description

A new description field is now present in the API for all background operations and is exposed in the command line tool.

stgraber@castiana:~$ lxc operation list
+--------------------------------------+-----------+---------------------+---------+------------+----------------------+
|                  ID                  |   TYPE    |     DESCRIPTION     | STATUS  | CANCELABLE |       CREATED        |
+--------------------------------------+-----------+---------------------+---------+------------+----------------------+
| 343b1700-c0bd-44fa-8b1f-e6a8fdb91b42 | WEBSOCKET | Migrating container | RUNNING | NO         | 2018/04/03 02:51 UTC |
+--------------------------------------+-----------+---------------------+---------+------------+----------------------+
| 65494c6e-7643-4ed5-8abf-497e57cfdd5c | WEBSOCKET | Executing command   | RUNNING | NO         | 2018/04/03 02:51 UTC |
+--------------------------------------+-----------+---------------------+---------+------------+----------------------+

lifecycle type events

A new event class called lifecycle has been introduced, to provide much easier tracking of what LXD is doing from scripts or other API clients, without having to interpret LXD’s log messages.

stgraber@castiana:~$ lxc monitor --type=lifecycle
metadata:
  action: container-updated
  source: /1.0/containers/bar
timestamp: "2018-04-02T22:53:06.742745596-04:00"
type: lifecycle


metadata:
  action: container-started
  source: /1.0/containers/bar
timestamp: "2018-04-02T22:53:07.234066242-04:00"
type: lifecycle


metadata:
  action: container-shutdown
  source: /1.0/containers/bar
timestamp: "2018-04-02T22:53:19.885795751-04:00"
type: lifecycle


metadata:
  action: container-deleted
  source: /1.0/containers/bar
timestamp: "2018-04-02T22:53:23.813480386-04:00"
type: lifecycle

Requirements

LXD 3.0 now requires Go 1.9 or higher. While it may be possible to build it with an older version at this point, there is no guarantee that we won’t start making use of newer Go functions in later bugfix releases.

Support and upgrade

LXD 3.0.0 will be supported until June 2023. Our previous LTS release, LXD 2.0, will now switch to a slower maintenance pace, only getting critical bugfixes and security updates.

Users of the LXD feature branch (currently at 2.21) should update to 3.0 to keep being supported and get all the bugfixes and new features that LXD 3.0 provides.

Users of the LXD LTS branch (2.0.11) can choose to stay on LXD 2.0 and keep getting critical security fixes, or upgrade to LXD 3.0. Those using LXD LTS in critical production environments will likely want to start by upgrading a few test systems to LXD 3.0 to check for any potential issues, then upgrade the rest of their machines after LXD 3.0.1 is released.

Availability as a snap package from upstream

The recommended way to install and keep LXD up to date is by using the upstream provided snap package. This ensures that all systems are running the exact same copy of LXD and simplifies the bug reporting and debugging process.

For the LXD snap, 3 tracks are provided:

  • latest (latest LXD feature release, currently 3.0)
  • 2.0 (previous LTS release)
  • 3.0 (current LTS release)

For each of those tracks, 3 channels are maintained:

  • edge (automatic, untested builds from the upstream repository)
  • candidate (the future stable build, available for testing about 48h prior to promotion)
  • stable (the current stable, supported build)

Users who wish to install LXD 3.0 and then get upgraded to 3.1 in a month or so should use:

snap install lxd

Users who wish to install LXD 3.0 and then only get bugfixes and security updates should use:

snap install lxd --channel=3.0

If running staging systems, you may want to run those on the candidate channels, using --channel=candidate and --channel=3.0/candidate respectively.
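
That is:

snap install lxd --channel=candidate
snap install lxd --channel=3.0/candidate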

Switching between tracks and channels is possible by using snap refresh but note that LXD doesn’t support downgrading and will fail to start if you attempt it.

Downloads

Contributors

The LXD 3.0.0 release was brought to you by a total of 18 contributors.


Not directly mentioned above: to upgrade your present version of LXD, run sudo snap refresh (Ubuntu) or, as root via su, snap refresh (Debian).

If you log in to snap, you can avoid the sudo.

That is,

  1. First, run snap login to log in. If you do not have an account, create one for free at https://login.ubuntu.com/.
  2. Then you can run the snap commands without sudo.

I updated the instructions in the initial post to use a two-stage install, first installing the core snap and then installing the LXD snap. This is a workaround for a snapd regression that’s being discussed here:
https://forum.snapcraft.io/t/auto-connected-interfaces-missing-on-initial-snap-install/4850

I have since reverted the change, as the core snap has been reverted and that issue should now be gone.

The release announcement mentions lxd-p2c and there is the video, but neither the announcement nor the video says how to build or get lxd-p2c. All I could spot was “after building a copy of the tool”:

Physical to container migration with lxd-p2c
A new tool called lxd-p2c makes it possible to import a system’s filesystem into a LXD container using the LXD API.

After building a copy of the tool, the resulting binary can be transferred to any system that you want to turn into a container. Point it to a remote LXD server and the entire system’s filesystem will be transferred over the LXD migration API, with a new container being created from it.

Even the GitHub repository for lxd-p2c has no README file describing what a potential user needs to do.

Is there more documentation somewhere?

thanks

go get -v -x github.com/lxc/lxd/lxd-p2c
lxd-p2c --help

I would like to complain because I find the instructions on section “Availability as a snap package from upstream” to be not particularly clear.

  • Suppose 3.1 is out. What should I type to install v3.1 on snappy and have it stay there (maybe except for bugfixes and security updates)? I.e. not go to 3.2. How do I do that? I can’t figure that out from the documentation.

  • Suppose I want to stay on v3.0 except for bugfixes and security updates, and not go to 3.1 ever. How do I do that? Is the right command “snap install lxd --channel=3.0”? Or would v3.1 (when it comes out) be considered “bugfixes and security updates”, and the system would still go to 3.1? The documentation is not clear about that.

  • Suppose I want to get v3.0, and migrate to v3.5 whenever I want to (and not let snappy do that automatically for me after just 2 days of testing, which I might not find enough). How do I do that?

  • Suppose I want to get v3.0.1, and automatically migrate to v3.1, then v3.2, etc, up to v3.whatever, but NOT to v4.0. How do I do that? I don’t think that’s at all clear in this message.

Thanks,

Feature releases like 2.21, 3.1, 3.2, … are only supported until the next one comes out, so typically about a month. Feature releases are what you get in the “latest” track (what you get when you don’t give a track name).

There is no supported way for you to stick to 3.1 once 3.2 is out and in the stable channel. Well, you can turn off all snap updates with some firewalling or systemd trickery.

To stick to 3.0, then get the 3.0.1, 3.0.2, … bugfix releases, use the 3.0/stable channel. If you then want to upgrade to whatever is the latest feature release, you can use snap refresh with --channel=stable to switch to the latest track. Note that downgrades aren’t supported so you won’t be able to go back to the 3.0 track after that.
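
As a sketch of those two cases (channel names as described above):

# Stay on the 3.0 LTS track, getting only bugfix releases:
snap refresh lxd --channel=3.0/stable

# Switch to the latest feature track (no way back to 3.0 afterwards):
snap refresh lxd --channel=stable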

Once 4.0 is out in a couple of years, it will once again land in both the latest and 4.0 tracks, and 3.21 or whatever the last 3.x feature release turns out to be will go unsupported.

The tl;dr here is that the stable channels of all the snap tracks we maintain are supported. A LXD release that you can’t find in any of those isn’t supported anymore.


Would you say that the “latest” track is safe for use in a production server, where it’s important that the database in the container doesn’t get corrupted?

If not, wouldn’t it be worth increasing the time before release to stable from just two days to one week, so that people can use it on their production systems as well?

We have a lot of production users running latest/stable. We certainly tend to recommend those users that also have a staging environment to use latest/candidate on those systems so that they can report anything that looks wrong back to us immediately.

The time between something hitting candidate and something hitting stable is also somewhat flexible. When cherry-picking critical fixes, I tend to push to stable as soon as our automated testing confirms all our tests pass on all distributions. It’s really when pushing a new LXD release that we wait a few days to get some early adopter feedback, cherry-pick fixes if needed and then push the new release out to stable users.

LXD upgrades don’t normally restart any containers, and if they do restart containers for some reason (the most likely being a crash during DB migration), a clean shutdown of the containers is performed. In fact, unless a crash happens during migration, LXD upgrades, even between major versions, happen with no container downtime at all. Only the LXD API becomes unavailable for a few seconds as it restarts.

At the time we released LXD 3.0 to the stable channel, we had over 7000 deployments using 2.21 from latest/stable. All of those have now been auto-updated to 3.0 and we’ve only heard back from two users who hit an upgrade crash which caused containers to be shut down and restarted on their systems. So far we’ve not heard of anyone running into data corruption issues on such an update. The worst we’ve had in this regard are corrupted sqlite3 LXD databases when a system kernel panics or loses power, but we have mechanisms we can use to rebuild the database in that case too.

How can one migrate from LXD 2.0.11 to LXD 3.0.0? I’ve installed (but haven’t run lxd init yet) LXD 3.0.0 via snap on a system that has containers that were created with LXD 2.0.11. Both versions now exist on the same host OS.

  1. Is there a way to get LXD 3.0 to “see” the 2.0 containers or a way to move them?
  2. Will /snap/bin/lxd init overwrite any settings from LXD 2.0.11 or is everything contained within the /snap/lxd directories?
  3. I want to be running only LXD 3.0 with the existing LXD 2.0 containers. What is the best way to do this?

Thank you

By installing the snap, you get an additional program called “lxd.migrate”. Run it and the migration will happen automatically.
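
For example, assuming the snap is installed alongside the deb packages:

sudo lxd.migrate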


When will the documentation on https://lxd.readthedocs.io/en/latest/ be completed?

And are there regular deb packages somewhere? Or are packages from backports now “deprecated/not available”?
I see lxcfs and lxc 3.x packages, but lxd only at 2.21.
Anyway, good job!

There are LXD 3.0 debs in Ubuntu 18.04. We will backport them (and lxc/lxcfs 3.0) to 16.04 and 17.10 but are waiting for things to stabilize a bit first (dealing with initial set of bug reports).


Good day @stgraber,

I just have a simple question: can the lxd-p2c tool convert a VirtualBox VM into an LXC/LXD container, or does it only work on physical machines?

We have a couple of VMs in VirtualBox format that we would really like to move to LXC/LXD…

Thank you,

Yep, will work fine from any Linux system that’s got rsync installed. We had some users use it to convert OpenVZ to LXD containers.


Wow! that is really cool!

I will definitely let you know how it goes for us. We will be converting a number of virtual machines from Oracle VirtualBox as well as a KVM VM. Thank you again for your great work! Looking forward to testing it.

Sincerely,