LXD to Incus host-to-host migration caveats and working practices

The purpose of this topic is to document known issues, workarounds and working practices for LXD-to-Incus migration in a multi-host environment using an lxc move strategy. The lxd-to-incus in-place migration strategy is already well documented elsewhere, although I include some observations on it here as well. I have updated this first comment along the way; the migration is now complete and no further updates are expected. Feel free to discuss, comment or ask questions below.

Environment

  • Multiple servers globally, all dedicated hosts for containers and VMs in LXD / Incus
  • Old hosts running LXD 5.0.3 snap on Ubuntu 22.04 LTS ‘jammy’
  • New hosts running Incus 6.0.0 deb from ubuntu repository on Ubuntu 24.04 LTS ‘noble’
  • The environment stays in production during the migration; downtime is limited to individual containers’ move operations
  • The whole migration is managed from an orchestration node: Bash scripts in a container running on one of the hosts
  • Orchestration originally uses the lxc CLI; the incus CLI is introduced in parallel

Known incompatibilities and workarounds

  • CLI config is not shared
    • There are effectively three approaches possible, each with their own split-brain issues:
      • to migrate the config from LXD to Incus and keep LXD as master until it is time to retire LXD support, rerunning the migration for every config change (rm -rf ~/.config/incus/; cp -a ~/snap/lxd/common/config/ ~/.config/incus/)
      • as above, but only once, and rely only on incus commands for new hosts
      • no migration, running any required actions such as adding remotes on incus separately
    • There seem to be incompatibilities between the configs; in particular, with the first approach, incus hosts that were added as remotes to lxc and then migrated to the incus config are not recognized as private remotes, and incus fails with “Error: The remote isn't a private server”
      • in ~/.config/incus/config.yml, changing the protocol of the incus hosts from lxd to incus fixes it. Simply running sed -i 's/lxd/incus/' ~/.config/incus/config.yml as part of the config migration works well if the config is used only for incus anyway.
  • Adding LXD remotes to Incus and vice versa does not work using the CLI alone
    • core.trust_password does not exist in Incus
    • tokens are not compatible
    • config trust syntax has changed
    • the pragmatic approach is to add remotes using the certificate copy method
  • the file pull protocol has changed and does not work across CLIs
    • commands like this must be run with the same CLI type as the target host
    • the pragmatic approach is to set a per-host variable containing the executable name (lxc or incus) that can be used in scripts (source hostparams; $lxdtype file pull ...)
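The CLI-config workarounds above can be sketched as follows. This is a hedged sketch, not a polished tool: the paths assume the LXD snap layout and GNU sed from this thread's environment, and the hostparams file with its lxdtype variable is this thread's own convention, not anything standard.

```shell
#!/bin/sh
# Sketch of the CLI-config workarounds described above.

migrate_cli_config() {
    # Re-copy the LXD CLI config to Incus; rerun after every config change
    # for as long as LXD remains the master copy.
    rm -rf "$HOME/.config/incus/"
    cp -a "$HOME/snap/lxd/common/config/" "$HOME/.config/incus/"

    # Rewrite the remote protocol so incus recognizes incus hosts as private
    # servers (avoids "Error: The remote isn't a private server").
    # Without the /g flag, sed replaces only the first "lxd" on each line,
    # which covers the "protocol: lxd" entries.
    sed -i 's/lxd/incus/' "$HOME/.config/incus/config.yml"
}

# Per-host CLI selection for commands such as file pull: a hostparams file on
# each host sets lxdtype to "lxc" or "incus", and scripts invoke it, e.g.:
#   . ./hostparams          # contains a line like: lxdtype=incus
#   "$lxdtype" file pull mycontainer/etc/hostname ./hostname
```

Note that the plain s/lxd/incus/ also rewrites any remote name that happens to contain the string lxd, which is harmless only if the resulting config is used exclusively for incus, as noted above.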

lxd-to-incus migration notes

While this does not affect the current strategy, it is a valuable fallback or alternative for hosts or containers that cannot yet be migrated otherwise. These notes may become a topic of their own if the list grows.

  • lxd-to-incus versions 6.0.2 and 6.6 are identical. The lxd-to-incus documentation says to use the “latest stable” version, but the LTS build from the Zabbly repository should thus work equally well. This matters if move actions are then expected from 6.0.2 to 6.0.0 (Ubuntu 22.04 LTS + Zabbly LTS, and the Ubuntu 24.04 repository version, respectively), to avoid backwards migrations.
  • When you migrate with lxd-to-incus, the existing private bridge (by default, lxdbr0) remains in the migrated setup. This makes sense as a way to cover instances that specifically have lxdbr0 in their configuration. When cleaning up, you may try renaming the interface to incusbr0 as a way to wean out any network configuration that uses the hard-coded interface name.
  • When you migrate with lxd-to-incus, the dnsmasq process that takes care of the managed network now configures the instance names to have the suffix .incus (instead of the old .lxd). That is, before: mycontainer.lxd; now: mycontainer.incus. This could cause trouble if your instances are configured to use those hostnames with suffixes. There’s a workaround to keep the old suffix until you fix the configuration files.
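One possible form of that suffix workaround, as a hedged sketch: dns.domain is the managed bridge network’s search-domain key, and incusbr0 here assumes the bridge was renamed after migration; use lxdbr0 if it was not.

```shell
# Keep the old ".lxd" DNS suffix on the migrated managed network until the
# instance configuration files have been fixed.
incus network set incusbr0 dns.domain lxd
```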

You may add the following to this:

  1. when you migrate with lxd-to-incus, the existing private bridge (by default, lxdbr0) remains in the migrated setup. This makes sense as a way to cover instances that specifically have lxdbr0 in their configuration. When cleaning up, you may try renaming the interface to incusbr0 as a way to wean out any network configuration that uses the hard-coded interface name.
  2. when you migrate with lxd-to-incus, the dnsmasq process that takes care of the managed network now configures the instance names to have the suffix .incus (instead of the old .lxd). That is, before: mycontainer.lxd; now: mycontainer.incus. This could cause trouble if your instances are configured to use those hostnames with suffixes. There’s a workaround to keep the old suffix until you fix the configuration files.

Thanks, added. The heading may grow into a topic of its own if we want to collect lxd-to-incus migration notes more comprehensively.

This thread gives a multi-step process for renaming the bridge successfully: Renaming `lxdbr0` to `incusbr0` after migration


Migration with this approach is now complete and worked well (with the mentioned caveats, of course). I used the approach of copying the LXD config to Incus on every change, then switched to pure incus commands once no LXD hosts remained.