Is Incus ready for prime time, and what's the best way to move an LXD 5.19 setup to it?

So I am interested in moving over. Where do I start?


Hey there,

Yeah, despite the 0.x version, Incus is perfectly ready for anyone who's using the monthly LXD releases. The only people who should really keep waiting are those using the LTS versions of LXD, as it will be another 4-5 months before Incus releases an LTS too.

Incus 0.3 can migrate data from LXD 5.19 just fine and I’ve now gone through that process for all the systems I manage (including clusters, but those are a bit trickier).

You can find the installation instructions here: First steps with Incus - Incus documentation, which will then send you to Migrating from LXD - Incus documentation for the data migration.

Basically, install the incus package but don't initialize Incus; instead, run lxd-to-incus to have your LXD data moved over. Once that's done, it will offer to remove the LXD package for you and you'll be all done.
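For a standalone server, the rough sequence looks like this (a sketch only; it assumes a Debian/Ubuntu system with the Incus package already available in your configured repositories, and root privileges):

```
# Install Incus, but do NOT initialize it (no "incus admin init");
# lxd-to-incus handles moving the existing LXD database and storage.
apt install incus

# Run the migration tool: it checks both daemons, asks for confirmation,
# migrates the data, then offers to remove the LXD package.
lxd-to-incus
```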


Stéphane Graber,

Nice to see you still in the game, and better on your own horse… My concern is that I have a cluster of 4 and would probably want to move one server at a time. I am thinking of starting a new Incus server and then moving them over one by one, or am I overthinking this and can I just install Incus and have it run in parallel with LXD? What are the chances that installing Incus will break LXC/LXD?

Anything exciting like OVN or Ceph going on with your cluster?

If not, then lxd-to-incus should handle it without problems. Even the OVN and Ceph case is handled these days, but you'll probably want to be a bit more careful with those around :slight_smile:

The way a cluster migration works is that you install Incus on all systems, then on the first one, you run lxd-to-incus which will make sure your cluster, instances, … everything looks good.
It then asks you whether you want to continue, at which point it will shut down all instances across the entire cluster, move that first server over to Incus and provide you with a command to run on all the others. Once it has been run on all of them, the cluster will be fully migrated and LXD will get removed.
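Sketched out, the cluster flow looks roughly like this; the exact confirmation prompts and the follow-up command come from the tool itself, so treat it as illustrative only:

```
# On every cluster member: install the incus package, but don't initialize it.
apt install incus

# On the first member only: start the migration. It validates the cluster,
# asks for confirmation, shuts down all instances, converts this member,
# then prints the command to run on each of the remaining members.
lxd-to-incus

# On each remaining member: run the command printed by the first node
# (not reproduced here since it comes from the tool's own output).
```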

Now, if you don't want to go down that path, you could probably evacuate one of the servers, remove LXD from it, install Incus, create a new cluster (of just one server at that stage), move some instances over, then evacuate another server, remove LXD, install Incus, join it to the new cluster, … and keep going until all servers have been done.
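A very rough outline of that rolling approach, with placeholder names (server1, myinstance); whether an LXD-generated backup imports cleanly on the Incus side is something to verify on a throwaway instance first:

```
# From any remaining LXD member: move instances off server1 and drop it
# from the old cluster.
lxc cluster evacuate server1
lxc cluster remove server1

# On server1: remove LXD (snap remove lxd, or your package manager),
# then install Incus and initialize it, enabling clustering when asked.
apt install incus
incus admin init

# One simple (if slow) way to move an instance across: export a backup
# on the LXD side and import it on the Incus side.
lxc export myinstance myinstance.tar.gz    # on the old LXD cluster
incus import myinstance.tar.gz             # on the new Incus cluster
```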

Normally you can install and run both LXD and Incus on the same system, but because they use the same network port, it's not possible to have both of them clustered at the same time, so you can't create a full Incus cluster on the exact same machines and run both clusters side by side to migrate instances.

Well, that's not completely true: you could get a different IP on each system or force Incus to use a non-standard port, but I wouldn't really recommend going down that path.
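For completeness, the non-standard-port variant would be something like the following, with 8444 as an arbitrary example port (again, not recommended):

```
# Bind the Incus API to a different port than LXD's default 8443 so both
# daemons can be reachable on the same machine.
incus config set core.https_address :8444
```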

To give some context, I moved a production cluster of 3 servers using both OVN and Ceph as well as hosting a dozen different projects and 100+ instances just a couple of days ago.

That took around 20 minutes of total downtime and came back up fine. I just had to manually fix a small issue with Open vSwitch (for which I've sent a fix to lxd-to-incus), though that was because I was in a rush and decided not to reboot the servers after the migration; if I had rebooted them, this particular issue would not have occurred.

And that was my 3rd cluster with a similar setup that I migrated in the past few weeks.

What is the lowest version of Ubuntu that Incus will work on? I forgot my servers were on 18.04. I have been updating them but don't want to upgrade them past the minimum for now.

I'm sure Incus could be built and made to work on 18.04, but for the packages that I produce for Ubuntu and Debian, the minimum version I support is 20.04 LTS.