LXD lost ZFS pool after host migration (and won't configure)


I'm having trouble with the LXD daemon: it refuses to start (or be configured) after a host system migration.
Some history:

  • The host was installed as Root-on-ZFS, with a separate hostname-zroot/lxd dataset dedicated to LXD.
  • Initially, LXD was configured to use this dataset inside the host zpool.
  • The host was then migrated from two zpools (hostname-zroot and hostname-srv) to a single one, named hostname.
  • After that, the LXD daemon would neither start nor accept configuration.

This recipe no longer works, because the LXD team has changed the database location and format.

The workaround of creating a temporary pool, destroying all snapshots and containers, destroying the pool, moving the dataset, creating a brand-new pool for LXD, doing send/receive, and trying to recover the lost containers is unacceptable. That is far too much dancing for a simple sed task.

My data is safe and sound. All I need is to erase 6 characters in the config.

I responded in the GitHub issue with both how this should normally be handled (set up both pools in LXD, move the containers through LXD, remove the old pool) and how to patch the database with current LXD.
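For reference, here is a sketch of both approaches. The pool and container names (`newpool`, `c1`, `hostname/lxd`) are placeholders from this thread's setup, the database schema and paths can differ between LXD versions and packaging (deb vs snap), so treat this as an illustration rather than an exact recipe, and back everything up first:

```shell
# Supported path (needs a running daemon): add the new pool as a second
# storage pool, move each container onto it, then remove the old pool.
lxc storage create newpool zfs source=hostname/lxd
lxc move c1 --storage newpool    # repeat for each (stopped) container
lxc storage delete default

# Direct patch when the daemon refuses to start: LXD applies a
# patch.global.sql file from its database directory on the next startup.
# (Path shown is for a deb install; snap uses /var/snap/lxd/common/lxd/.)
# Back up the database directory before doing this.
cat > /var/lib/lxd/database/patch.global.sql <<'EOF'
UPDATE storage_pools_config SET value='hostname/lxd'
 WHERE key='zfs.pool_name' AND value='hostname-zroot/lxd';
EOF
systemctl restart lxd
```

The second approach is the "erase 6 chars" fix: a single UPDATE against the pool's `zfs.pool_name` key, applied through LXD's own startup patch mechanism instead of editing database files by hand.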


Thanks a lot!

I created this post looking for a solution, and yours helped!

But the GitHub issue is broader: it covers situations where LXD leaves the user alone with some kind of uneditable config. That is a real problem, and I explained why in a new comment.

Thankfully, this happened while I had a wide internet channel and could use search engines or just ask for help.