Questions on how to prepare an LXD cluster migration from deb to snap (3.0.3 -> 4.0) with mounted storage pools

Hello, I have read some failure stories about lxd.migrate, so I want to be prepared; thanks in advance. I tried lxd.migrate on a non-clustered machine and it worked fine.

I have 4 machines running Ubuntu 18.04 in a cluster with ceph, hdd and ssd storage pools. The storage pools are mounted via fstab, something like this:

UUID=982f9701-ec36-4e34-9849-e30e029a6630 /var/lib/lxd/storage-pools/ssd btrfs noatime,subvol=@ssd-lxd

While the root partition is on separate disks, like this:

UUID=0c4c524f-c0bc-43ee-98ae-fb79b05d317c / btrfs noatime,subvol=@

The question is how to minimize downtime and prepare the mounts for the snap. Should I create the directories and add an extra mount point under the snap path, like this?

UUID=982f9701-ec36-4e34-9849-e30e029a6630 /var/snap/lxd/common/lxd/storage-pools/ssd btrfs noatime,subvol=@ssd-lxd

Will lxd.migrate be okay with this, since basically no container movement will be needed? Or should I do something entirely different?

And in the worst-case scenario, is having a backup of the /var/lib/lxd files enough to put the cluster back into a running state if something goes wrong? I mean, won't the other cluster members freak out if LXD versions start changing? Will I be able to bring it back online?

Other tips?

I'm not entirely sure about switching mount points for the pools; @stgraber will probably have a better grasp on that.

Regarding the backup of /var/lib/lxd, yes I think that will work as long as you do these steps in sequence:

  1. Shut down the containers and the LXD daemons on all 4 nodes
  2. Perform a backup of all 4 /var/lib/lxd directories (see the sketch below)
  3. Run the migration individually, one node at a time, possibly with extra care about the mount points
  4. Restart the LXD daemon on all 4 nodes

If anything goes wrong, restoring the original /var/lib/lxd on all your nodes should bring your cluster back.
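
For steps 1 and 2, something along these lines on each node should do; the backup filename is just an example, and --one-file-system keeps the fstab-mounted storage pools out of the tarball:

# stop all containers, then the deb package's daemon
lxc stop --all
sudo systemctl stop lxd.service lxd.socket

# back up /var/lib/lxd without descending into the mounted storage pools
sudo tar --one-file-system -czf /root/lxd-backup-$(hostname).tar.gz -C /var/lib lxd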

lxd.migrate will move the mount for you; this, however, will not survive a reboot, so you'll need to update your fstab so that after a reboot things are mounted where they should be.

When upgrading a cluster, you need to run lxd.migrate on every one of the nodes before any of them can complete. That’s because LXD will not fully start and complete the upgrade until they are all on the same version.

For backups, LXD’s own upgrade mechanism will make a backup of the database so it’s always possible to revert back to the deb, though certainly a bit frustrating to manually move everything back.

Sounds like you’re using btrfs rather than zfs, so reverting is actually slightly easier for you as you wouldn’t need to also go and modify all the dataset mountpoints back to their old path.

One step you could take ahead of time to make it a more standard/supportable setup is to get rid of that /etc/fstab trick (or at least the part that directly affects LXD).

What you can do is create a mountpoint at, say, /srv/lxd/ssd, have your /etc/fstab entry set up for that path rather than /var/lib/lxd/…, mount it, confirm that /srv/lxd/ssd shows what you'd expect, then tell LXD to use that as the source for the pool.

This will then avoid any external mounts on /var/snap/lxd/… and LXD itself will take care of mounting /srv/lxd/ssd over /var/snap/lxd/common/lxd/…
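
Concretely, reusing the UUID and subvolume from your current fstab entry, that could look like this on each machine (the exact path is just an example):

# create the new mountpoint
sudo mkdir -p /srv/lxd/ssd

# /etc/fstab entry for the new path (eventually replacing the /var/lib/lxd/storage-pools/ssd one):
# UUID=982f9701-ec36-4e34-9849-e30e029a6630 /srv/lxd/ssd btrfs noatime,subvol=@ssd-lxd 0 0

# mount it and confirm the pool content is visible
sudo mount /srv/lxd/ssd
ls /srv/lxd/ssd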

If you’d like to do that, you should create those mountpoints on all machines, mount them, confirm the data is properly visible in there, then use:

  • lxd sql global "UPDATE storage_pools_config SET value='/srv/lxd/ssd' WHERE value='/var/lib/lxd/storage-pools/ssd';"

And confirm that the source paths all look good with lxd sql global "SELECT * FROM storage_pools_config WHERE key='source';"

Then restart lxd on all systems to confirm that everything behaves on startup.
If that’s the case, then you should be in good shape for lxd.migrate and won’t need to do anything extra for the mounts.
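
That is, on each node something like:

sudo systemctl restart lxd
lxc storage show ssd    # source should now be /srv/lxd/ssd (add --target <node> if it's node-specific)
lxc list                # containers should still be running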

At that point, I’d recommend you do:

  • Backup of /var/lib/lxd (minus the storage pool data)
  • lxd.migrate on the first system; look for any sign of trouble. It should successfully detect source and destination, stop the source, move the data to the destination, then hang when starting the destination (roughly as sketched after this list)
  • journalctl -u snap.lxd.daemon to confirm that everything looks good and that it’s just waiting for the other systems to be upgraded too
  • lxd.migrate on the other systems
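
In shell terms, roughly:

# on the first node; it should detect source and destination, stop the source,
# move the data, then appear to hang while starting the destination
sudo lxd.migrate

# in another terminal on that node: it should just be waiting for the
# remaining cluster members to be upgraded
journalctl -u snap.lxd.daemon -f

# then run sudo lxd.migrate on the other nodes, one at a time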

Thanks for the extensive guide. I will try it sometime soon.

So I tried to upgrade and it went wrong. In the end I got everything running, but I had to migrate some machines and the cluster still needs repairs and reassembly.

For archival purposes I'm writing down what went wrong and how I fixed it, even if it is not a current problem.

When lxd.migrate is executed, don't press yes before you have executed it on every machine, or it will fail on the last few because the cluster is down. I guess, duh? But I didn't think about it.

Two cluster members migrated successfully; their containers were up and running.

  1. One cluster member failed to migrate. It still showed as up in lxc cluster list, but lxc commands gave errors, and since it only had 1 container I tried to just reinstall it. That failed too with something like "dqlite no leader", and what I noticed is that it seemed to be trying to contact itself. I specified the correct IP addresses and lxd init went as usual, but at the end I saw leader-related errors. Could it be that after a forced cluster member removal some entries were left in the DB, and now I can't re-add the same member (with the same cluster name and IP address)?

  2. The second cluster member had a bizarre problem even before the migration, which I hadn't noticed.
    This is part of a container's backup.yaml from an old backup image in LXD 3.0.3:

pool:
  config:
    size: 54GB
    source: /var/lib/lxd/disks/ssd.img
  description: ""
  name: ssd
  driver: btrfs
  used_by:
  status: Created

All containers were properly stored where they should be on the normal filesystem, but the config had these values, and after the import all containers of this member broke and didn't start. I tried to mount /var/lib/lxd/disks/ssd.img and it was almost empty; it had just the container directories with a backup.yaml in them. No idea how this happened or why everything even worked before.
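
For reference, checking what is actually inside such an image is just a loop mount (the mount point here is only an example):

sudo mkdir -p /mnt/ssdimg
sudo mount -o loop /var/lib/lxd/disks/ssd.img /mnt/ssdimg
ls /mnt/ssdimg/containers    # in my case: just empty container directories with a backup.yaml
sudo umount /mnt/ssdimg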

And then the suffering started :grin: I tried various ways to get rid of this ssd.img (and hdd.img).

  1. Tried to rename this pool's source to the proper /srv/lxd/ssd with an lxd sql update; it didn't work.
  2. Tried deleting the container and then reimporting it, but lxd.import didn't detect /srv/lxd/ssd as a pool.
  3. Then created a borkssd pool and tried to import from there, but got errors that the container is on multiple pools.

After all that I gave up (it was 22h+ at that point), so I removed that completely borked instance, reinstalled it as a non-clustered one, and migrated all containers via btrfs send and lxd.import; at least that worked :slight_smile: So after about 3 hours (thank god for fast SSDs, only a few containers on HDD, and recent resource upgrades so everything fit) I was finished.
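
For the record, the general shape of such a btrfs send + lxd import move was roughly this (the container name c1, the host name target and the pool paths are just placeholders):

# on the source host: read-only snapshot of the container's subvolume
sudo btrfs subvolume snapshot -r /srv/lxd/ssd/containers/c1 /srv/lxd/ssd/containers/c1-ro

# stream it into the target host's pool
sudo btrfs send /srv/lxd/ssd/containers/c1-ro | \
  ssh root@target "btrfs receive /var/snap/lxd/common/lxd/storage-pools/ssd/containers/"

# on the target: writable subvolume under the original name, then drop the -ro copy
sudo btrfs subvolume snapshot /var/snap/lxd/common/lxd/storage-pools/ssd/containers/c1-ro \
  /var/snap/lxd/common/lxd/storage-pools/ssd/containers/c1
sudo btrfs subvolume delete /var/snap/lxd/common/lxd/storage-pools/ssd/containers/c1-ro

# let LXD recreate the database records from the container's backup.yaml
sudo lxd import c1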

In the end in the morning nobody noticed my suffering. And life goes on :upside_down_face:

Now I will try

  1. reinstall the member with the borked ssd.img (I hope I will get around the "no dqlite leader" problem)
  2. migrate the VMs from the non-clustered member
  3. re-add that non-clustered member back to the cluster
  4. balance resources
  5. finally enjoy LXD QEMU support and start migrating Windows VMs :slight_smile:

I hope the read was not too lengthy.

Forgot to mention: at first I failed to create the new pool bork_ssd due to mounts or something inside it, I can't remember, and now it shows up like this:

lxc storage show bork_ssd
config: {}
description: ""
name: bork_ssd
driver: btrfs
used_by: []
status: Errored

and when trying to delete

lxc storage delete -v bork_ssd
Error: Error loading pool "bork_ssd": No such object

I guess I need to use lxd sql for that too?

Edit: I just deleted it with lxd sql and re-added the cluster member which had the btrfs file images.

Yeah, you'll need some manual recovery to get rid of a pool in the Errored state. Deleting it using lxd sql would be the first step; the second one is to make sure that it's actually physically deleted from the various nodes (since some might have succeeded in creating it and some others not).

We’re aware that this is sub-optimal and we’ll need to come up with a better story/experience for this case.
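
For the database side, something along these lines should work (I haven't double-checked the exact column names, so adjust as needed; <id> is the pool id from the first query):

# find the pool's id, then remove its rows (config first, then the pool itself)
lxd sql global "SELECT id FROM storage_pools WHERE name='bork_ssd';"
lxd sql global "DELETE FROM storage_pools_config WHERE storage_pool_id=<id>;"
lxd sql global "DELETE FROM storage_pools WHERE name='bork_ssd';"

# then on every node, check that nothing was left behind on disk
ls /var/snap/lxd/common/lxd/storage-pools/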

I still can't rejoin my last machine. With lxd sql I saw that it still exists in the nodes table; I deleted it from there with an SQL command and tried to rejoin, but it still fails:
| 9 | cluster-linux | | 10.0.199.230:8443 | 30 | 186 | 2020-07-21T08:39:42+03:00 | 1 | 2 |

t=2020-07-21T08:39:40+0300 lvl=info msg="Update network address" 
t=2020-07-21T08:39:40+0300 lvl=info msg=" - binding TCP socket" socket=10.0.199.230:8443
t=2020-07-21T08:39:42+0300 lvl=eror msg="Failed to get leader node address: Node is not clustered" 
t=2020-07-21T08:39:42+0300 lvl=info msg="Stop database gateway" 
t=2020-07-21T08:39:43+0300 lvl=info msg="Joining dqlite raft cluster" address=10.0.199.230:8443 id=9 role=stand-by

And it gets stuck like this.

Edit: I found an entry in the certificates table with the field "10.0.199.230"; I deleted it too, but it didn't help either.

You might need to run lxd cluster remove-raft-node as well (note that it’s lxd not lxc).
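
Run on one of the working cluster members, something like this (using the address from your log):

# drop the stale raft member entry for that address
lxd cluster remove-raft-node 10.0.199.230:8443

# check the current database members, then retry the join
lxd cluster list-database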

OK, I promise I looked at the wiki; somehow I managed to miss that… Instead I resorted to lxd sql…

Thanks for help.