LXD 3.0.0 has been released


(Stéphane Graber) #5

I updated the instructions in the initial post to use a two-stage install, first installing the core snap and then installing the LXD snap. This is a workaround for a snapd regression that’s being discussed here:

(Stéphane Graber) #6

I’ve now reverted the change, as the core snap has been reverted and that issue should be gone.

How to create a template to run in LXD i.e. VyOS, or FreePBX?
(Brian Mullan) #7

The release announcement mentions lxd-p2c and there is the video, but neither the announcement nor the video says how to build or get lxd-p2c. All I could spot was “after building a copy of the tool”:

Physical to container migration with lxd-p2c
A new tool called lxd-p2c makes it possible to import a system’s filesystem into a LXD container using the LXD API.

After building a copy of the tool, the resulting binary can be transferred to any system that you want to turn into a container. Point it to a remote LXD server and the entire system’s filesystem will be transferred over the LXD migration API and a new container will be created.

Even the GitHub repository for lxd-p2c has no README file describing what a potential user needs to do.

Is there more documentation somewhere?


(Stéphane Graber) #8
go get -v -x github.com/lxc/lxd/lxd-p2c
lxd-p2c --help
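To spell out the two commands above, a typical build-and-transfer sequence might look like the following; the host name, server URL and container name are placeholders, not values from this thread:

```shell
# Fetch and build lxd-p2c into $GOPATH/bin (needs go, git and gcc).
go get -v -x github.com/lxc/lxd/lxd-p2c

# Copy the resulting binary to the machine you want to convert
# ("src-host" is a placeholder for your source system).
scp "$(go env GOPATH)/bin/lxd-p2c" user@src-host:

# On the source system, point the tool at a remote LXD server
# (placeholder URL/name) and import "/" as a new container.
sudo ./lxd-p2c https://lxd-host:8443 mycontainer /
```

The argument order matches the tool's help output: target server first, then the container name, then the filesystem root to import.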

(Alexander Karelas) #9

I would like to complain because I find the instructions on section “Availability as a snap package from upstream” to be not particularly clear.

  • Suppose 3.1 is out. What should I type to install v3.1 on snappy and have it stay there (maybe except for bugfixes and security updates)? I.e. not go to 3.2. How do I do that? I can’t figure that out from the documentation.

  • Suppose I want to stay on v3.0 except for bugfixes and security updates, and not go to 3.1 ever. How do I do that? Is the right command “snap install lxd --channel=3.0”? Or would v3.1 (when it comes out) be considered “bugfixes and security updates”, and the system would still go to 3.1? The documentation is not clear about that.

  • Suppose I want to get v3.0, and migrate to v3.5 whenever I want to (and not let snappy do that automatically for me after just two days of testing, which I might not find enough). How do I do that?

  • Suppose I want to get v3.0, and automatically migrate to v3.1, then v3.2, etc., up to v3.whatever, but NOT to v4.0. How do I do that? I don’t think that’s at all clear in this message.


(Stéphane Graber) #10

Feature releases like 2.21, 3.1, 3.2, … are only supported until the next one comes out, so typically about a month. Feature releases are what you get in the “latest” track (what you get when you don’t give a track name).

There is no supported way for you to stick to 3.1 once 3.2 is out and in the stable channel. Well, you can turn off all snap updates with some firewalling or systemd trickery.

To stick to 3.0, then get the 3.0.1, 3.0.2, … bugfix releases, use the 3.0/stable channel. If you then want to upgrade to whatever is the latest feature release, you can use snap refresh with --channel=stable to switch to the latest track. Note that downgrades aren’t supported so you won’t be able to go back to the 3.0 track after that.
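Concretely, the channel choices described above map to commands like these (assuming the snap is named lxd):

```shell
# Track the 3.0 LTS series: you get 3.0.1, 3.0.2, ... but never 3.1.
sudo snap install lxd --channel=3.0/stable

# Follow the monthly feature releases (3.1, 3.2, ...) instead:
sudo snap install lxd --channel=latest/stable

# One-way switch from the 3.0 track to the latest track
# (downgrades back to 3.0 are not supported):
sudo snap refresh lxd --channel=latest/stable
```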

Once 4.0 is out in a couple of years, it will once again land in both the latest and 4.0 tracks, and 3.21 (or whatever the last 3.x feature release turns out to be) will go unsupported.

The tl;dr here is that the stable channels of all the snap tracks we maintain are supported. A LXD release that you can’t find in any of those isn’t supported anymore.

(Alexander Karelas) #11

Would you say that the “latest” track is safe for use in a production server, where it’s important that the database in the container doesn’t get corrupted?

If not, don’t you think it would be worth increasing the time to release to stable from just two days to one week, so that people can use it on their production systems as well?

(Stéphane Graber) #12

We have a lot of production users running latest/stable. We certainly tend to recommend those users that also have a staging environment to use latest/candidate on those systems so that they can report anything that looks wrong back to us immediately.

The time between something hitting candidate and something hitting stable is also somewhat flexible. When cherry-picking critical fixes, I tend to push to stable as soon as our automated testing confirms all our tests pass on all distributions. It’s really when pushing a new LXD release that we wait a few days to get a few days’ worth of early adopter feedback, cherry-pick fixes if needed and then push the new release out to stable users.

LXD upgrades don’t normally restart any container, and if they do restart containers for some reason (most likely a crash during DB migration), a clean shutdown of the containers is performed. In fact, unless a crash happens during migration, LXD upgrades even between major versions happen with no container downtime at all. Only the LXD API becomes unavailable for a few seconds as it restarts.

At the time we released LXD 3.0 to the stable channel, we had over 7000 deployments using 2.21 from latest/stable. All of those have now been auto-updated to 3.0 and we’ve only heard back from two users who hit an upgrade crash that caused containers to be shut down and restarted on their systems. So far we’ve not heard of anyone running into data corruption issues on such an update. The worst we’ve had in this regard is a corrupted sqlite3 LXD database when a system kernel panics or loses power, but we have mechanisms we can use to rebuild the database in that case too.

(George) #13

How can one migrate from LXD 2.0.11 to LXD 3.0.0? I’ve installed (but haven’t run lxd init yet) LXD 3.0.0 via snap on a system that has containers created with LXD 2.0.11. Both versions now exist on the same host OS.

  1. Is there a way to get LXD 3.0 to “see” the 2.0 containers or a way to move them?
  2. Will /snap/bin/lxd init overwrite any settings from LXD 2.0.11 or is everything contained within the /snap/lxd directories?
  3. I want to be running only LXD 3.0 with the existing LXD 2.0 containers. What is the best way to do this?

Thank you


By installing the snap, you get an additional program called “lxd.migrate”. Run it and the migration will happen automatically.
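For reference, a minimal sketch of that step (the tool is interactive and will walk you through moving the data across):

```shell
# The snap ships a migration helper exposed as "lxd.migrate";
# run it as root to move containers, images and configuration
# from the deb-based LXD into the snap.
sudo lxd.migrate
```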

(Alexander Karelas) #17

When will the documentation on https://lxd.readthedocs.io/en/latest/ be completed?

(Jan Vokas) #18

And are regular deb packages available somewhere? Or are the backports packages now deprecated/not available?
I see lxcfs and lxc 3.x packages, but lxd only at 2.21.
Anyway, good job!

(Stéphane Graber) #19

There are LXD 3.0 debs in Ubuntu 18.04. We will backport them (and lxc/lxcfs 3.0) to 16.04 and 17.10 but are waiting for things to stabilize a bit first (dealing with initial set of bug reports).
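On Ubuntu 18.04 itself, the deb packages install in the usual way (package names as in the 18.04 archive):

```shell
sudo apt update
sudo apt install lxd lxd-client
```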

(Jair Bolivar) #20

Good day @stgraber,

I just have a simple question: can the tool lxd-p2c convert a VirtualBox VM into an LXC/LXD container, or does it only work on physical machines?

We have a couple of VMs in VirtualBox format that we would really like to move to LXC/LXD…

Thank you,

Problems trying to use nvidia.runtime with snap LXD 3.0.0
(Stéphane Graber) #21

Yep, will work fine from any Linux system that’s got rsync installed. We had some users use it to convert OpenVZ to LXD containers.

(Jair Bolivar) #22

Wow! that is really cool!

I will definitely let you know how it goes for us. We will be converting a number of virtual machines from Oracle VirtualBox as well as a KVM VM. Thank you again for your great work! Looking forward to testing it.


(Tom) #23

OVZ to LXD? Hmm…

Is it possible to build a ‘safe’ cluster with public dedicated servers, for example OVH? (without LAN) That would be cool and a nice replacement for OVZ Webpanel.

(Jair Bolivar) #24

Hello @stgraber

Once again, thank you very much for the great work. I went through the process of converting a VirtualBox Linux VM into an lxd container:

1. I installed “go”, “git” and “gcc” on the system I will be converting:

jair@budgie3:~/go/bin$ ./lxd-p2c
Physical to container migration tool

This tool lets you turn any Linux filesystem (including your current one)
into a LXD container on a remote LXD host.

It will setup a clean mount tree made of the root filesystem and any
additional mount you list, then transfer this through LXD’s migration
API to create a new container from it.

The same set of options as lxc launch are also supported.

lxd-p2c […] [flags]

-c, --config Configuration key and value to set on the container
-h, --help Print help
-n, --network Network to use for the container
--no-profiles Create the container with no profiles applied
-p, --profile Profile to apply to the container
-s, --storage Storage pool to use for the container
-t, --type Instance type to use for the container
--version Print version number

2. I ran the command to convert/transfer the image:

jair@budgie3:~/go/bin$ sudo ./lxd-p2c -p bridgeprofile budgie3 /
[sudo] password for jair:
Generating a temporary client certificate. This may take a minute…
Certificate fingerprint: 92276fa33568d0cd6624977ba03ce621a48211565feae9c15389655d7f54d58b
ok (y/n)? y
Admin password for
Transferring container: budgie3: 50.50MB (7.16MB/s)

After the image was converted and transferred to the LXD server I tried to connect with TeamViewer and also nxserver (NoMachine), but both failed; I cannot get the same window manager login GUI that welcomes me when I connect to the original VirtualBox VM. What I get instead is a message saying the session is not started, asking if nxserver can create a display session for me, and so on.

Any advice on how to preserve the functionality of these two applications (TeamViewer and nxserver) in the containers after conversion would be appreciated.

Thank you for all you do!

(Jair Bolivar) #25

Then when accepting:

I will really appreciate any tip or suggestions to get this issue fixed.

Thank you,

(Jair Bolivar) #26


I believe this is related to that other question about container conversion with lxd-p2c, so we can mark this one as solved. I will continue my investigation into the TeamViewer and NoMachine (headless GUI) setup for the containers.

Thank you for all the hard work.