What is the best way (best practice) to migrate LXD containers to a new hard drive

Greetings.

I am migrating to a new, bigger SSD and am curious as to the best way to migrate my LXD setup to the new hard drive. I know the easiest choice is to clone the old HD, but I do not want to migrate all the data on the old HD to the new HD, so I would like to do it piecemeal.

My LXD instance has 10 containers running, 9 of which belong to the default profile, which uses lxdbr0 and a local ZFS pool (local meaning it's on the old HD). One container uses a much larger ZFS pool that resides on an old, slow external drive.

So I would like to install a new OS (still Ubuntu-based) on the new SSD and then migrate LXD over. What is the best way to do this?

Additionally, is there a way to keep LXD separate in case I need to do this again? For instance, on a separate partition, as is often suggested for /home folders?

Thank you!

EDIT: For anyone who has gotten here searching for an answer to a question similar to my initial question, please read this entire thread. My initial understanding led to some disaster that took me many hours to solve. Read the entire thread and then decide if it's the path you wish to take. Thank you!

Your best bet is to back up the LXD data at /var/snap/lxd/common (or /var/lib/lxd if not using the snap) and make sure you restore it on the new system.
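Roughly, that backup step looks like this (a minimal sketch assuming the snap install and default paths; the destination path is just a placeholder):

    # stop the daemon so the database isn't being written to while you copy
    sudo snap stop lxd
    # archive the LXD state, preserving ownership and permissions
    sudo tar -cpzf /path/to/backup/lxd-common.tar.gz -C /var/snap/lxd common
    sudo snap start lxd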

Note that you don’t need any of the container data so long as the old zpool drives are still available. LXD will then start back up with your containers backed by your old disks.
You can then create a new storage pool on the SSD and move your containers between storage pools, eventually deleting those you don’t need anymore.
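A rough sketch of that part (the pool name, device path and container name are placeholders; moving between pools on the same host needs the container stopped and, on older releases, a temporary rename as shown):

    # create a new ZFS-backed pool on the SSD (device path is an assumption)
    lxc storage create ssdpool zfs source=/dev/sdb3
    # move a container onto the new pool via a temporary name, then rename it back
    lxc stop c1
    lxc move c1 c1-temp -s ssdpool
    lxc move c1-temp c1
    lxc start c1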

Another, more generic and possibly easier alternative would be to export your containers using lxc export NAME --optimized, which will get you a tarball for each container containing the raw ZFS data needed to re-import it on another ZFS-backed LXD.

You can then do a clean install of Ubuntu and LXD and use lxc import to re-import the containers from the backup tarballs.

Note that this approach will not preserve any configuration you have for networks, profiles or images; it only exports the containers themselves.
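A minimal sketch of that round trip (container name and tarball path are placeholders; add the optimized flag mentioned above if both ends are ZFS-backed):

    # on the old install: one tarball per container
    lxc export c1 /path/to/backup/c1.tar.gz
    # on the fresh install, after lxd init: re-create the container from the tarball
    lxc import /path/to/backup/c1.tar.gz
    lxc start c1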

Stéphane:

Thank you for your thoughtful responses! So, if I understand you correctly: say I do a clean install of Ubuntu 19.04 on my new SSD. If I copy the /var/snap/lxd/common/ folder over to the new SSD (to /var/snap/lxd/common/), then when I spin up LXD I should have the containers from my prior install (the old SSD)?

Thank you!

So long as the old drive is connected and the zpools visible, yeah, that should work.
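If you want to double-check that before starting LXD, something along these lines should do it (the pool name is a placeholder):

    # list pools the new system can see but hasn't imported yet
    sudo zpool import
    # import the old pool if it isn't already active, then check its health
    sudo zpool import oldpool
    sudo zpool status oldpool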

Stéphane:

First and foremost, thank you. I hope you are having a great holiday season.

Second, I’m afraid I have run into a problem. I did as discussed and executed the following:

  1. did a new install of LXD on the new SSD with a new OS
  2. stopped LXD (snap stop lxd)
  3. deleted /var/snap/lxd/common/lxd
  4. copied the old /var/snap/lxd/common/lxd over to /var/snap/lxd/common/lxd (see the sketch below)
  5. issued ‘snap start lxd’
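In shell terms, the sequence was roughly the following (the old-drive mount point is a placeholder; I used a copy that preserves ownership and permissions):

    sudo snap stop lxd
    sudo rm -rf /var/snap/lxd/common/lxd
    # -a keeps ownership/permissions, which the LXD database and socket directory need
    sudo cp -a /mnt/old-ssd/var/snap/lxd/common/lxd /var/snap/lxd/common/
    sudo snap start lxd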

Unfortunately, this is where it went wrong. When I attempted to start LXD, I initially got the following:

error: cannot perform the following tasks:

  • start of [lxd.activate lxd.daemon] (# systemctl start snap.lxd.activate.service snap.lxd.daemon.service
    Job for snap.lxd.activate.service failed because the control process exited with error code.
    See “systemctl status snap.lxd.activate.service” and “journalctl -xe” for details.
    )
  • start of [lxd.activate lxd.daemon] (exit status 1)

journalctl -xe produced:

The unit snap.lxd.activate.service has entered the ‘failed’ state with result ‘exit-code’.
Dec 25 22:20:56 KubuntuOptiPlex3040 systemd[1]: Failed to start Service for snap application lxd.activate.
Subject: A start job for unit snap.lxd.activate.service has failed
Defined-By: systemd
Support: http://www.ubuntu.com/support

A start job for unit snap.lxd.activate.service has finished with a failure.

The job identifier is 4850 and the job result is failed.

Interestingly enough, about an hour later I was able to ‘snap start lxd’ successfully, but then when I attempted to execute any lxc command I was presented with:

Error: Get http://unix.socket/1.0: dial unix /var/snap/lxd/common/lxd/unix.socket: connect: permission denied

With sudo:

Error: Get http://unix.socket/1.0: dial unix /var/snap/lxd/common/lxd/unix.socket: connect: connection refused

I am not sure what I am doing wrong.

All I want to do is restore my prior LXD containers. 9 of the 10 are tied to the default storage pool, which is in the aforementioned directory. Maybe the problem is in how I am copying the old directory over to the new one?

Of note, my /var/snap directory is located on a separate partition via a bind mount in the fstab file (if that makes a difference).
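For reference, that bind mount is a single fstab entry along these lines (the source path is just an example):

    # /etc/fstab
    /mnt/data/var-snap  /var/snap  none  bind  0  0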

Thank you!

EDIT: In addition to the complete directory of the old LXD instance, I have several backups made via the lxc export command.

So I went back to basics, and it seems to have solved the problem. Since I had a complete backup of the snap/lxd directory (as described here: https://lxd.readthedocs.io/en/stable-3.0/backup/), I decided to restore the old LXD instance via this method. As my prior post indicated, this did not meet with success and threw up socket errors. After hours of trying to find a way around it, I figured, “What happens if I copy the existing directory over to the new location and then install LXD via snap?” So this is what I did, and voilà! Everything works as expected.

I don’t know if my use case is different from what is contemplated in the above-referenced article, but for me the key was to install on top of the old directory.
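For anyone who wants it, the rough order that worked for me was along these lines (the backup location is a placeholder, and this is only a sketch of what I described above):

    # copy the old LXD state into place first...
    sudo mkdir -p /var/snap/lxd/common
    sudo cp -a /mnt/backup/lxd /var/snap/lxd/common/
    # ...then install the snap on top of it
    # (assuming no lxd snap is installed yet; remove it first if one is)
    sudo snap install lxd
    lxc list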

Thank you all for your help!