"cluster restore" command does not place instances to the node with name of previously evacuated node

I need to rebuild nodes in my Incus cluster after a single-node disaster. All instances, volumes, and images are located on shared Ceph and CephFS storage pools.

I use the following workflow (the full command sequence is sketched after the list):

  • Evacuate the node with the cluster evacuate <node> command (all instance profiles are configured with cluster.evacuate=move)
  • Remove the node from the cluster with the cluster remove <node> command
  • Rebuild the host OS
  • Add the node back to the cluster under the same name using the cluster add <node> and admin init commands
  • Try to move the evacuated instances back to the node with the cluster restore <node> command
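For reference, here is roughly the command sequence I run (node3 is just an example node name):

```sh
# 1. Move all instances off the node (profiles set cluster.evacuate=move)
incus cluster evacuate node3

# 2. Drop the node from the cluster before rebuilding it
incus cluster remove node3

# 3. Rebuild the host OS, then rejoin under the same name:
#    on an existing member, generate a join token
incus cluster add node3
#    on the rebuilt node, run the init wizard and paste the token
incus admin init

# 4. Try to move the evacuated instances back
incus cluster restore node3
```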

Everything goes smoothly except placing the instances back on the node. The restore process did not find any orphaned instances (as I had assumed it would) and did not move the instances back to the re-added node.
All evacuated instances still have the volatile.evacuate.origin=<node> configuration option set.
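For example (instance and node names here are examples):

```sh
$ incus config get web01 volatile.evacuate.origin
node3
```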
Of course, I can script restoring the instance placement myself using volatile.evacuate.origin; a sketch is below.
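A minimal sketch of such a script, assuming the standard Incus CLI; the node name is a placeholder, and instances may need to be stopped first if live migration is not available:

```sh
#!/bin/sh
# Move every instance whose volatile.evacuate.origin points at the
# rebuilt node back onto that node. "node3" is a placeholder name.
NODE=node3

for inst in $(incus list --format csv --columns n); do
    origin=$(incus config get "$inst" volatile.evacuate.origin)
    if [ "$origin" = "$NODE" ]; then
        echo "Moving $inst back to $NODE"
        incus move "$inst" --target "$NODE"
    fi
done
```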

The Incus documentation does not cover this case, so it's not clear whether it is supported or I have missed something :slight_smile: