Questions about recovering an old Incus ZFS pool to a new Incus installation

There is an old Incus installation (Zabbly stable) with many instances and I want to move it to a new Linux installation. I attached the old disk to the new system. The old Incus ZFS pool works fine with Incus if I boot the old disk.

My first option was to take a ZFS snapshot, then zfs send | zfs receive so that the old Incus ZFS pool gets moved to the new and bigger disk. And then I would perform incus admin recover. This did not work due to an error on a specific container, which I could not resolve.

cannot receive: local origin for clone default/incus/containers/openai@migration does not exist
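
For reference, this kind of transfer looks roughly like the following (a sketch; the pool/dataset and snapshot names are placeholders). The clone-origin error above suggests that a full replication stream (zfs send -R) from the top-level dataset may be needed, since -R also carries the origin snapshots that clones depend on:

# placeholder names; -R builds a replication stream that includes child
# datasets, their snapshots, and the origins of any clones
$ zfs snapshot -r oldpool/incus@migration
$ zfs send -R oldpool/incus@migration | zfs receive newpool/incus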

The next option was to incus admin recover the old ZFS pool into the new Incus installation, then add the new ZFS pool, and finally move the instances from old to new. This did not work either; it failed with the following error about a missing backup.yaml for an instance.

$ incus admin recover 
This server currently has the following storage pools:
Would you like to recover another storage pool? (yes/no) [default=no]: yes
Name of the storage pool: default
Name of the storage backend (truenas, zfs, dir): zfs
Source of the storage pool (block device, volume group, dataset, path, ... as applicable): /dev/sda4
Additional storage pool configuration property (KEY=VALUE, empty when done): 
Would you like to recover another storage pool? (yes/no) [default=no]: no
The recovery process will be scanning the following storage pools:
 - NEW: "default" (backend="zfs", source="/dev/sda4")
Would you like to continue with scanning for lost volumes? (yes/no) [default=yes]: 
Scanning for unknown volumes...
Error: Failed validation request: Failed checking volumes on pool "default": Failed parsing backup file "/var/lib/incus/storage-pools/default/containers/mynewinstance/backup.yaml": open /var/lib/incus/storage-pools/default/containers/mynewinstance/backup.yaml: no such file or directory
$

I can boot the old disk with the old Incus installation, if needed.

  1. Is zfs send | zfs receive an adequate way to move an Incus ZFS pool to a new location?
  2. Since I have access to the old Incus installation, what files should I take with me so that incus admin recover on the new system would work properly?

Hello Simos,

you mentioned that you can boot from the old disk. Have you tried to start the two instances in question? Just to confirm they are not reporting any errors on the old installation.

Moving the pool/dataset with zfs send | zfs receive is certainly a possible solution, as long as you have the same Incus version installed on both systems and you place/mount it in the same folder. You also need to have the same network devices etc. configured. I have done it a few times during my testing to make sure I have a way back. However, I’m not sure whether you might face the same issues when trying to start the instances.

Another option is to try to recreate the missing backup.yaml file. Just mount the instance dataset, copy a known working backup.yaml from a different instance, and adjust the settings for the current instance. Not pretty, but it might do the job.
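
A rough sketch of that idea, using the pool and instance names from the errors above (adjust to your layout; Incus container datasets typically use legacy mountpoints, so they have to be mounted by hand):

$ mkdir -p /mnt/fix
$ mount -t zfs default/incus/containers/mynewinstance /mnt/fix
# copy a known working backup.yaml (placeholder source path) and adjust
# the instance name, volume name, profiles and devices in it
$ cp /tmp/known-good-backup.yaml /mnt/fix/backup.yaml
$ umount /mnt/fix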

Or you initialize the instance on the new Incus system, mount both the new and the old instance, and copy the backup.yaml over from new to old. Delete the new instance and try incus admin recover again.
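
A sketch of that variant (the instance and image names are made up; backup.yaml sits next to the instance's rootfs on the storage volume, so it is only visible while the volume is mounted, e.g. while the instance is running):

$ incus launch images:debian/12 scratch
$ cp /var/lib/incus/storage-pools/default/containers/scratch/backup.yaml /tmp/backup.yaml
$ incus delete -f scratch
# then place an adjusted copy of /tmp/backup.yaml on the old instance's volume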

Hope this gives you some ideas.


You’d want to make sure to start, then stop that instance to try to force the backup file to get re-generated.

I booted the old disk and started/stopped each instance. While an instance was running, I could see that there was a backup.yaml file under /var/lib/incus/…. But when I stopped the instance, the directory would become empty (probably because the volume gets unmounted).
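
For completeness, this can be scripted on the old installation (a sketch; incus list -c n -f csv just prints the instance names):

$ for name in $(incus list -c n -f csv); do incus start "$name"; incus stop "$name"; done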

Now I can run incus admin recover on the new system, and it finds all the instances.

$ incus admin recover
This server currently has the following storage pools:
Would you like to recover another storage pool? (yes/no) [default=no]: yes
Name of the storage pool: default
Name of the storage backend (zfs, truenas, dir): zfs
Source of the storage pool (block device, volume group, dataset, path, ... as applicable): /dev/sda4
Additional storage pool configuration property (KEY=VALUE, empty when done): 
Would you like to recover another storage pool? (yes/no) [default=no]: 
The recovery process will be scanning the following storage pools:
 - NEW: "default" (backend="zfs", source="/dev/sda4")
Would you like to continue with scanning for lost volumes? (yes/no) [default=yes]: 
Scanning for unknown volumes...
The following unknown storage pools have been found:
 - Storage pool "default" of type "zfs"
The following unknown volumes have been found:
...
>>> all instances are found <<< 
...
You are currently missing the following:
 - Network "incusbr0" in project "default"
 - Profile "gui" in project "default"
 - Profile "x11" in project "default"
 - Network "incusbr1" in project "default"
 - Profile "windows" in project "default"
 - Network "lxdbr0" in project "default"
Please create those missing entries and then hit ENTER: 

I then created the missing entries listed above before hitting ENTER.

$ incus network create incusbr0 ipv4.address=10.10.10.1/24
Network incusbr0 created
$ incus network create incusbr1 ipv4.address=10.20.30.1/24
Network incusbr1 created
$ incus network create lxdbr0
Network lxdbr0 created
$

Then I created the three profiles. I used generic profiles that I found online, and some of their devices referenced files that were missing on this system. The recovery would fail, so I would edit the profile (remove the offending device) and then try incus admin recover again. It’s good that the recovery refuses to continue if something critical is not fixed, but you have the option to try again and again until you get it right.
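
For reference, the bare profiles can be created first and filled in afterwards (a minimal sketch using the profile names reported by the recovery; the actual device/config entries came from generic examples online):

$ incus profile create gui
$ incus profile create x11
$ incus profile create windows
$ incus profile edit gui    # repeat for x11 and windows, pasting the generic contents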

Then, to start the containers and have them get networking from the managed network (incusbr0), I had to add that network to the default profile. I omitted explicitly adding the default storage pool to the default profile, but it worked anyway and I could start instances.
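
Something along these lines (a sketch; the device name eth0 is an assumption):

$ incus profile device add default eth0 nic network=incusbr0 name=eth0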

The next step is to move the containers to the new storage pool.
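
A sketch of that step, assuming the new pool is called newpool and the instance is stopped before the move:

$ incus stop myinstance
$ incus move myinstance --storage newpool
$ incus start myinstance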
