Linstor backend and linstor driver on LTS release

I’m running a 3-node cluster on the LTS release, currently backed by local ZFS on each node, with a total workload of around 200 containers, and I’m rather desperate to get some kind of distributed storage. Ceph was out of the question for my existing hardware. Linstor comes to mind and sounds promising.

Is there any plan to backport the Linstor driver to the 6.0 LTS release?

And if I choose to use the stable release on a production cluster, can someone please tell me what the cons are compared to using the LTS release?

And what’s the plan and timing for the Incus 7.0 LTS release?

There’s no plan to backport such a large new feature to the current LTS.
Incus 7.0 LTS is due out in late March or early April, so it’s not a super long wait before that comes out.

Is it OK to run Incus on top of Linstor, on top of ZFS pools/datasets? What are the pros/cons?

If so, would volume transfer make use of optimized ZFS transfers?

I don’t have practical experience with Linstor and Incus yet, but I do have some experience with LXD (which Incus forked from) on ZFS on DRBD (the base for Linstor). In that setup, incremental copy definitely uses ZFS snapshots to speed up the copy process; for the current data, a final rsync is run. If you don’t constantly take snapshots, that could in principle be sped up further using the ZFS bookmark feature, but I think neither Incus nor LXD has implemented this.
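To make the snapshot-vs-bookmark point concrete, here is a minimal sketch of the underlying ZFS commands. The pool/dataset names (`tank/ct100`) and snapshot names are placeholders, and this is hand-driven plumbing for illustration, not what Incus or LXD actually runs internally:

```shell
# Incremental replication using snapshots: both @base and @now
# must be retained on the sender for the next incremental send.
zfs snapshot tank/ct100@base
zfs snapshot tank/ct100@now
zfs send -i tank/ct100@base tank/ct100@now | ssh otherhost zfs receive tank/ct100

# With bookmarks, the sender can drop the old snapshot and keep
# only a lightweight bookmark as the incremental source:
zfs bookmark tank/ct100@base tank/ct100#base
zfs destroy tank/ct100@base
zfs send -i tank/ct100#base tank/ct100@now | ssh otherhost zfs receive tank/ct100
```

The bookmark holds just enough metadata to serve as the "from" side of an incremental send without pinning the old snapshot’s data on the sender, which is why it would help when you don’t keep snapshots around between copies.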
The main reason we put ZFS on top is the snapshots and the data integrity. It’s nice to copy/sync using LXD/ZFS, but that is partly redundant with DRBD. Because ZFS sits on top, we don’t/can’t use it for local redundancy though (e.g. if we had a ZFS mirror on top of DRBD devices, we would double the bandwidth needed).