No pool available after reboot

My setup:
OpenStack instance - 8 cores, 30 GB RAM, / - 50 GB SSD - sda1
LXD 2.17 - main ZFS pool named lxd-pool - 300 GB from an attached volume (Ceph, replicated 3 times) - sdb1
Everything is running perfectly.

Then I added a second pool named lxd-backup - 100 GB from another attached volume (Ceph, replicated 3 times) - sdc1
Everything is running perfectly.

But when I reboot the instance, the containers fail to start with the error "No pool available".

Running zpool import, I found that lxd-backup is now linked to sdb1 and the main lxd-pool to sdc1 - the device names are swapped.
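To confirm which device node each pool is sitting on after a reboot, something like this can help (a sketch; the pool names lxd-pool and lxd-backup are from the setup above, and the commands need root):

```shell
# Show the full device paths each imported pool is using
# (-P prints absolute paths instead of short names like sdb1).
zpool status -P lxd-pool lxd-backup

# List the stable identifiers udev created for each disk; each symlink
# points at whichever /dev/sdX node the kernel assigned this boot.
ls -l /dev/disk/by-id/
```

If the symlink targets change between boots while the by-id names stay the same, kernel detection order is the culprit.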

I then detached sdc1, rebooted the instance, and recovered all my containers.

I suspect this may be caused by an OpenStack bug, but I'm not sure.

What should I do to avoid this ZFS problem? Is there a way to prevent it at the instance level, or are there one or more commands I could run to change the pool configuration?


In general you shouldn’t expect the /dev/sdX ordering to be stable. It depends on kernel detection order, which is likely what’s causing you trouble here.

There may be a /dev/disk/by-XXX/ type entry which is guaranteed to be stable, based on some identifier from the virtual disk.
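As a concrete sketch of using those stable names: a ZFS pool can be re-imported so that its vdev paths reference /dev/disk/by-id entries instead of /dev/sdX. The pool name lxd-pool is taken from the post above; containers on the pool should be stopped first, and the commands need root.

```shell
# Export the pool so it can be re-imported under different device paths.
zpool export lxd-pool

# Re-import, telling ZFS to search /dev/disk/by-id for the pool's disks;
# the pool then records by-id paths, which survive /dev/sdX reordering.
zpool import -d /dev/disk/by-id lxd-pool

# Verify the vdevs now show by-id names rather than sdb1/sdc1.
zpool status lxd-pool
```

The same should be done for lxd-backup, so that neither pool depends on the boot-time /dev/sdX ordering.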

Thank you Stéphane, I understand.