Request for advice: best way to upgrade server and migrate from LXD to Incus?

Hello

I’m looking for your thoughts on the best way to migrate a system from:

  • Ubuntu 20.04 LTS + LXD 5.20 (pinned)

to:

  • Debian 12 + Incus 6.0

It’s a little more complicated because the system has two ZFS pools (a 2TB and a 4TB), both of which are full, and one of the mirrors’ disks is starting to throw errors, so I’m thinking of replacing them with a 2-disk 8TB mirror (or RAIDZ-1, if I went to 3 disks?).

So the final system will (hopefully) be, unless you persuade me otherwise:

  • Debian 12
  • Incus 6.0
  • 2 x 8TB SATA disks - ZFS - storage/NAS, etc.
  • 2 x 980GB NVMe disks - ZFS - containers
  • 2 x 500GB mirrored boot/system disks (ZFS or ext4?)

I think the containers/Incus pool should be separate, and NVMe-based for performance. I’ve noticed performance issues with the containers on the 4TB ZFS spinning-disk pool (5400 rpm disks), hence the wish to switch.
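Concretely, the container pool I have in mind would be created roughly like this (pool name and device paths are placeholders, and ashift=12 is just a common default for modern drives):

$ sudo zpool create -o ashift=12 nvmepool mirror \
    /dev/disk/by-id/nvme-disk1 /dev/disk/by-id/nvme-disk2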

Some more background:

  • Most of the containers (syncthing, taskd, paperless-ngx, plexd, postgresql, time machine backup) were created using Ansible playbooks, so they could be re-created.
  • I used LXD storage volumes for the persistent volumes attached to the containers, e.g. syncthing shares a volume with paperless.
  • I’d like to move from Ubuntu to Debian for mostly ideological reasons. I’m not a huge fan of snaps, and that seems to be an inexorable trajectory for Canonical/Ubuntu. Also, Debian just feels more community-oriented, as does Incus, hence my wish to switch.

So how do you think I should approach it? I need to preserve the data. It is backed up, but would it be easiest to just keep it on the 4TB disks and then zfs send it to a new pool?
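(For reference, the zfs send I’m imagining is something along these lines, with pool and snapshot names as placeholders — a recursive snapshot of the old pool, replicated into a dataset on the new one:)

$ sudo zfs snapshot -r pool4t@migrate
$ sudo zfs send -R pool4t@migrate | sudo zfs receive -uF pool8t/pool4t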

The proposed setup sounds good. For the migration, it may be best to first move to Incus without changing anything else. That’s because you can get the exact same Incus version on both Ubuntu and Debian when using the Zabbly Incus package repository (github.com/zabbly/incus).
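Roughly, using that repository comes down to fetching the repository key, adding a deb822 source and installing the incus package. The README in zabbly/incus has the authoritative, current instructions; the URLs and file names below are only a from-memory sketch:

$ sudo curl -fsSL https://pkgs.zabbly.com/key.asc -o /etc/apt/keyrings/zabbly.asc
$ cat /etc/apt/sources.list.d/zabbly-incus-stable.sources
Types: deb
URIs: https://pkgs.zabbly.com/incus/stable
Suites: bookworm
Components: main
Signed-By: /etc/apt/keyrings/zabbly.asc
$ sudo apt update && sudo apt install incus

(Suites would be jammy on the Ubuntu 22.04 side and bookworm on Debian 12.)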

Once that’s done, you can then make a backup of /var/lib/incus (after stopping all instances and Incus itself), reinstall the system and restore /var/lib/incus.
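As a rough sketch of that backup step (the systemd unit names here assume the Zabbly packaging; adjust to whatever systemctl lists on your system):

$ incus stop --all
$ sudo systemctl stop incus.service incus.socket
$ sudo tar -cpzf /root/incus-var.tar.gz /var/lib/incus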

That should then get you onto the exact same Incus setup but now on Debian.

Lastly, plug in the two new drives and add them as a new pool to Incus, then use incus move and incus storage volume move to relocate everything from the old pool(s) to the new pool. When done, delete the old pool(s) and remove the disks.
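As a hedged sketch, assuming a zpool named pool8t has already been created on the new disks and re-using names from your setup (instances generally need to be stopped, and custom volumes detached, before a local storage move):

$ incus storage create pool8t zfs source=pool8t
$ incus move backups --storage pool8t
$ incus storage volume move default/syncthing-folders pool8t/syncthing-folders
$ incus storage delete default    # once the old pool is empty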

You could also do the storage re-shuffle before backing up /var/lib/incus and reinstalling the system if you prefer.

Hi Stéphane

Thanks for the hints. Doing the upgrade to Incus first seems like a good idea, and I’d never have thought of making a full backup of /var/lib/incus and moving that over to the new system. I’m planning on new SSDs for the boot/system disk (the old one is wearing out now), so this should be fairly easy to achieve.

Thanks again; I now feel reasonably confident that this is going to work!

Cheers
Alex.

You may also want to set storage.backups_volume and storage.images_volume so any instance backups or locally stored images get relocated to the storage pool of your choice; this will significantly reduce the size of /var/lib/incus/.
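For example (the pool and volume names here are just placeholders):

$ incus storage volume create pool8t backups-vol
$ incus storage volume create pool8t images-vol
$ incus config set storage.backups_volume pool8t/backups-vol
$ incus config set storage.images_volume pool8t/images-vol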

Hi again; one further question regarding the migration from LXD to Incus:

I’ve searched the docs and it does say that storage is migrated. What I can’t work out is whether it is moved or copied.

I have quite a bit stored in some shared volumes:

$ lxc storage list
+---------+--------+------------+-------------+---------+---------+
|  NAME   | DRIVER |   SOURCE   | DESCRIPTION | USED BY |  STATE  |
+---------+--------+------------+-------------+---------+---------+
| default | zfs    | pool4t/lxd |             | 22      | CREATED |
+---------+--------+------------+-------------+---------+---------+

$ lxc storage info default
info:
  description: ""
  driver: zfs
  name: default
  space used: 282.50GiB
  total space: 658.59GiB
used by:
  images:
  - 8de71f421b30434c7675ea27766d41b2b83527bb5c75c107dee9844e6ef12489
  instances:
  - backups
...
  profiles:
  - backups
...
  volumes:
  - paperless-ng-data
  - paperless-ng-syncthing
  - postgresql-data
  - syncthing-folders
  - taskd

If it’s copied, then I’ll probably run out of disk space; if it’s moved, then that would be okay. Sorry, I couldn’t find the information in the docs!

Thanks.

PS. I’ve been spending the time since the last post playing with setting up Debian in a VM, etc. before I tackle the real thing!

It will be moved. We move all data specifically to avoid the issue of running out of space :slight_smile:

Thanks very much for the quick reply; I’m feeling more confident that it will go well. I’ll still take a backup, though! :slight_smile: Thanks again.

I got bitten by this one. If you use those .lxd domains (managed network) in your containers, like in a MySQL database for your website (e.g. web.lxd), then after the migration the domains will automatically switch to .incus, which means that the old domains will no longer be valid and things will not work without changes. The workaround is to switch Incus to use .lxd domains until you fix your configuration.
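For example, assuming the managed bridge kept its old name after the migration (substitute whatever incus network list shows):

$ incus network set lxdbr0 dns.domain lxd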


Hi again all; one further question, if that’s okay?

It’s been a little while, but I’ve now got my 10TB pool all upgraded/resilvered and happy. I’ve got Incus stable (6.3) installed from @stgraber’s Zabbly sources, and I was all ready to run lxd-to-incus, but I thought I’d run incus info and lxc info first and see what’s different.
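Just for context, when I do get to lxd-to-incus my plan is roughly the following (assuming the tool still offers a --dry-run validation pass, which I haven’t double-checked):

$ sudo lxd-to-incus --dry-run
$ sudo lxd-to-incus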

The server is 22.04 + HWE kernel (which gets the ZFS 2.2 kernel module).

In incus info I see:

  - name: zfs
    version: 2.1.5-1ubuntu6~22.04.4
    remote: false

and with lxc info I get:

  - name: zfs
    version: 2.2.0-0ubuntu1~23.10.3
    remote: false

and zfs version:

# zfs version
zfs-2.1.5-1ubuntu6~22.04.4
zfs-kmod-2.2.0-0ubuntu1~23.10.3
# uname -a
Linux bigstore 6.5.0-45-generic #45~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Mon Jul 15 16:40:02 UTC 2 x86_64 x86_64 x86_64 GNU/Linux

So, being a little paranoid, and having previously run into a problem with shiftfs/idmaps that was solved by using ZFS 2.2, I’m a bit concerned about what I may break in the process of moving to Incus.

I have two volumes that are shared between two containers, which is why I ran into the problem, and it looks like Incus will be using ZFS 2.1.5, which I don’t think contains the idmap-aware stuff that solved my previous “shiftfs-disappeared” problem.

The final resting place for this server is Debian 12 (bookworm), which does have ZFS 2.2 in Debian backports. The previous advice (which may still hold) was to upgrade to Incus on 22.04, do the backups and then switch to Debian (which will be on different physical root/boot disk(s)).

Should I (on Ubuntu):

  • Use the Zabbly sources for ZFS + Incus to get ZFS 2.2 + Incus 6.3?
  • OR: just use the Zabbly sources for Incus 6.3, leaving ZFS at 2.1.5?

And then on Debian:

  • Use Zabbly for Incus and Debian backports for ZFS?
  • OR: use Zabbly for Incus, ZFS and the kernel?

I’m most concerned about drifting too far from bookworm in terms of kernel + ZFS over time. Should I use the 6.0 LTS instead? I’m not sure of the trade-offs. This server effectively needs to be bullet-proof, stable, and not need too much attention to maintain!

Thanks again for any advice that you can offer.

It basically shows that your ZFS tools (zfsutils-linux) are version 2.1.5 but the kernel module is version 2.2.0. For the sake of idmapped mounts, only the kernel side matters, so that will be fine.
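If you want to double-check the module side specifically, you can read the version reported by the loaded module:

$ cat /sys/module/zfs/version
2.2.0-0ubuntu1~23.10.3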

For something bullet-proof that doesn’t change too much, you’d probably be better off using Incus 6.0 LTS so you don’t have to deal with the monthly version jumps. That’s unless you need something that came out since the LTS, most notably the application container (OCI) support.

Otherwise, your best bet is to just use the Zabbly repo for Incus 6.0 LTS and take the rest from Debian directly, making sure to get ZFS from backports so you end up with 2.2.4.
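As a rough sketch of the Debian side (the backports line and package names are the standard Debian ones; the incus package then comes from the Zabbly LTS repository set up as per its README):

$ echo "deb http://deb.debian.org/debian bookworm-backports main contrib" | \
    sudo tee /etc/apt/sources.list.d/bookworm-backports.list
$ sudo apt update
$ sudo apt install -t bookworm-backports linux-headers-amd64 zfs-dkms zfsutils-linux
$ sudo apt install incus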

Thanks very much for the advice @stgraber - much appreciated. I will proceed on that basis. I (will) have backups of all the containers/volumes. I’ll post back here when I’ve got it all up and running again!