If you have your instances on a ZFS dataset, then it should be possible to fully recover them.
I have used the following approach several times, so I know it CAN work. I don't know the official way to do it, so hopefully an expert can advise, but here's what I do. (Note: you must have an intact zpool of your Incus instances - if not, please don't even consider this.)
DO NOT MAKE CHANGES TO YOUR ZPOOL DIRECTLY. Assuming the pool itself is intact, the following has been solid for me:
1. Export your zpool (`zpool export {pool}`) - probably not essential, but I like to be sure it's not going to get affected.
2. `apt remove --purge incus` - just get rid of it.
3. `apt autoremove` - to clear it all out; I found this is ESSENTIAL.
4. `rm -r /var/lib/{incus-directories}` - anything to do with Incus, just delete it all.
5. Consider a reboot, especially if the setup was somehow corrupted, just to make sure you start with a fresh OS that has nothing left of the old Incus stuck in a dead process or whatever (probably overkill).
6. `apt install incus`
7. `zpool import {pool}`
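The teardown-and-reinstall steps above, consolidated as a sketch (`tank` is a placeholder pool name - substitute your own; run as root):

```shell
$ zpool export tank            # detach the pool so nothing can touch it
$ apt remove --purge incus     # remove the package and its config
$ apt autoremove               # clear out leftover dependencies (essential)
$ rm -r /var/lib/incus         # delete remaining incus state
$ apt install incus            # fresh install
$ zpool import tank            # bring the pool back
```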
Now run `incus admin init` - we do a PARTIAL init here:
- do NOT init storage (answer 'no' for storage),
- but DO set up incusbr0 to your liking until the init finishes.

Do not launch an instance yet - you have no storage, so it won't work. Your default profile will look something like this:
```
$ incus profile show default
config: {}
description: Default Incus profile
devices:
  eth0:
    name: eth0
    network: incusbr0
    type: nic
name: default
used_by: []
project: default
```
If you had more than the default profile in your old setup, you can either manually re-create them (if you remember them), or what I do is just `incus profile copy default {new-profile}` so they exist - they don't have to be correct yet. You have to recreate these for the recovery to work. The good news is that it will error if a profile is missing - just create one (`incus profile copy default {new-profile}`) and then retry the recover command below if it fails the first time:
incus admin recover
Point it at the zpool/dataset and instruct it to recover (follow the prompts). Make sure you name the storage pool the same as before (probably 'default'?) - that's also important.
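For reference, the recovery dialogue looks roughly like this (prompt wording varies by version; the pool name `default`, backend `zfs`, and dataset `tank/incus` are example answers, not literal output):

```shell
$ incus admin recover
# Answer the prompts, roughly:
#   Name of the storage pool:    default
#   Name of the storage backend: zfs
#   Source of the storage pool:  tank/incus
# ...then confirm, and incus scans the dataset and offers to
# re-import the instances and volumes it finds.
```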
…And for me, it imports everything. But you're not done yet.
The profiles are all broken at this point, so you need to add the default storage (root disk) back. Add something like this to the devices section (`incus profile edit default`):

```
  root:
    path: /
    pool: default
    type: disk
```
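Equivalently, instead of editing the YAML by hand, the root disk can be attached with a single command (assuming the recovered pool is named `default`):

```shell
$ incus profile device add default root disk path=/ pool=default
```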
So your default will look something like:
```
$ incus profile show default
config: {}
description: Default Incus profile
devices:
  eth0:
    name: eth0
    network: incusbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
- /1.0/instances/{instance names}
project: default
```
Also, if you did have other profiles, they now need to be fixed too, or the instances will not work properly - the root storage device at the very least, plus whatever else they had before. But otherwise, it should all work: list, start, stop instances.
There is NO WARRANTY on this method. It's just what I now do after e.g. a full OS upgrade - I export my zpool before I flash a new OS, then after I reinstall I do a basic init, then the recovery, then the profile fix. It's much faster than copying giant instances back from my backup servers. It's never failed me so far - touch wood. Everything else I have tried always seems to error out and leave me in no man's land. So I always do this now. I am not saying you should! LOL
Let me know, however, if you do try this. So far, it's been solid for me. But it'll be interesting to hear what the pros say about this admittedly blunt-force approach.
Good Luck!
Andrew