Dual-booting Host, same LXD. Possible?

Dual-booting a host “temporarily” (it will take months until I can get rid of the old system for the new one). Both systems have access to the ZFS dataset full of LXD containers. Is it possible to configure LXD on both systems in such a way that no matter whether I boot into the old system or the new system, the containers and the virtual network are all the same?

The bare metal is a new machine with M.2 SSD. The SATA SSD was pulled from the old metal that died on me too early. I can dual-boot into M.2 or SATA. SATA is Xubuntu 18.04.3 (started life as 14.04). M.2 is Ubuntu Server 18.04.3 (fresh). I need access to the LXD containers from both systems. So far I have found a lot of documentation on how to migrate LXD containers from one host to another, not on how to access the same containers from two hosts dual booting on the same bare metal, which is probably a weird / transient scenario. If I did not accumulate so much cruft on my SATA SSD over the years, I could probably do the migration during one weekend…

Hi!

LXD keeps the configuration in /var/lib/lxd (for the deb package) and /var/snap/lxd/ (for the snap package).
I suppose you could configure both hosts to use the same path for the LXD configuration, from some common disk. Then, the ZFS dataset remains the same.
You could create a separate partition on one of the disks for /var/lib/lxd or /var/snap/lxd, and have it mounted on both hosts.
If you manage to keep both LXD installations at the same version, then it should be OK.
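For instance (an untested sketch; the UUID and filesystem type are placeholders), an identical fstab entry on each OS could mount a dedicated shared partition directly at the snap LXD path:

```
# On both OSes: mount the same dedicated partition at the snap LXD path.
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /var/snap/lxd ext4 defaults 0 2
```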

Thanks for trying to help. If only it were that easy. LXD is indeed in /var/snap/lxd on the old SATA SSD. On the new system, I have an fstab entry to mount that SSD on /old. I did

mv /var/snap/lxd /var/snap/lxd_disabled
ln -s /old/var/snap/lxd /var/snap/
ls -l /var/snap

lrwxrwxrwx 1 root root 17 Aug 12 13:54 lxd -> /old/var/snap/lxd

and yet after rebooting to the new system, the containers are not there. In the old system, they are still sailing nicely along…

/var/snap/ is managed by snapd. You need some extra care with mount points.
But most importantly, you need some reproducible setup that can be used to confirm that the whole process is doable.

The following is a proof-of-concept.

Let’s go.

On the host, do the following:

mkdir ~/VARSNAPLXDCOMMON
chmod 777 ~/VARSNAPLXDCOMMON
lxc launch ubuntu:18.04 mylxd1 -c security.nesting=true
lxc launch ubuntu:18.04 mylxd2 -c security.nesting=true
lxc config device add mylxd1 mylxddirectory disk source=/home/user/VARSNAPLXDCOMMON/ path=/var/snap/lxd/common/

On mylxd1 perform the following. (`lxc ubuntu <name>` is a shell alias for `lxc exec <name> -- sudo --login --user ubuntu`.)

lxc ubuntu mylxd1
mylxd1> sudo apt remove lxd
mylxd1> sudo apt autoremove
mylxd1> sudo snap install lxd
mylxd1> sudo lxd init
...
mylxd1> lxc launch images:alpine/3.6 mycontainer
mylxd1> logout

On mylxd2 perform the following.

lxc ubuntu mylxd2
mylxd2> sudo apt remove lxd
mylxd2> sudo apt autoremove
mylxd2> sudo snap install lxd
mylxd2> sudo lxd init
...
mylxd2> logout

Now, stop mylxd1 so that we can attach the shared directory to mylxd2. Only one LXD installation should use the shared directory at a time.

lxc stop mylxd1
lxc config device add mylxd2 mylxddirectory disk source=/home/user/VARSNAPLXDCOMMON/ path=/var/snap/lxd/common/
lxc start mylxd2

Then, get into mylxd2 to reap the fruits of the shared LXD installation.

$ lxc ubuntu mylxd2
+ exec /bin/login -p -f ubuntu
Last login: Mon Aug 12 21:46:12 UTC 2019 on UNKNOWN
ubuntu@mylxd2:~$ lxc list
To start your first container, try: lxc launch ubuntu:18.04

+-------------+---------+------+------+------------+-----------+
|    NAME     |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
+-------------+---------+------+------+------------+-----------+
| mycontainer | STOPPED |      |      | PERSISTENT | 0         |
+-------------+---------+------+------+------------+-----------+
ubuntu@mylxd2:~$ lxc list
+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+
|    NAME     |  STATE  |         IPV4          |                     IPV6                      |    TYPE    | SNAPSHOTS |
+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| mycontainer | RUNNING | 10.219.149.162 (eth0) | fd42:8200:a6e8:c3e4:216:3eff:fe0e:562b (eth0) | PERSISTENT | 0         |
+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+
ubuntu@mylxd2:~$ logout
$ 

Looks cool. Now let’s stop mylxd2, start mylxd1 and do a lxc list in there.

lxc stop mylxd2
lxc start mylxd1
lxc ubuntu mylxd1
ubuntu@mylxd1:~$ lxc list
+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+
|    NAME     |  STATE  |         IPV4          |                     IPV6                      |    TYPE    | SNAPSHOTS |
+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| mycontainer | RUNNING | 10.219.149.162 (eth0) | fd42:8200:a6e8:c3e4:216:3eff:fe0e:562b (eth0) | PERSISTENT | 0         |
+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+
ubuntu@mylxd1:~$ 

The container shows up in mylxd1 again.

Note that this is a proof of concept; LXD was most likely never tested to work like this.
The permissions of the shared directory should also be tightened.
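For example, one possible tightening (an untested sketch; uid 1000000 is the default subordinate-uid base used by the snap LXD — check /etc/subuid on your host):

```shell
# Give the shared directory to the container's mapped root uid instead
# of leaving it world-writable (uid 1000000 is an assumption; verify it
# against /etc/subuid before using):
sudo chown 1000000:1000000 ~/VARSNAPLXDCOMMON
sudo chmod 700 ~/VARSNAPLXDCOMMON
```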
Good luck!

Like the OP, I multiboot into different Linux OSes. This is useful for many reasons including redundancy, ease of repair and the ability to try out developmental versions on bare metal. In my case, I purposefully want the same LXD config in all of them, but it’s hard to find implementation guides online. This thread is the closest one I’ve found.

I finally solved the puzzle and am reviving this thread for posterity if nothing else.

The OP almost had the solution, but was tripped up by the use of symlinks. The key is in the second post in this thread by simos: if each installed OS can be fooled into thinking that the /var/snap/lxd directory is native to itself, then LXD will launch without a hitch, sharing the same ZFS pool.

The solution is to use bind mounts rather than symlinks. Let’s say we have the original OS on partition 1 called “Obiwan”. The newer OS is on partition 2 called “Luke”:

  1. In preparation, let’s rename Luke’s LXD configuration. This also moves the old LXD out of the way. From Obiwan, mount Luke. Then:

sudo mv path/to/luke/var/snap/lxd path/to/luke/var/snap/lxd_bak

  2. In Luke we need to mount Obiwan at launch. So, in fstab, add the following:

/dev/disk/by-uuid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /mnt/obiwan ext4 errors=remount-ro 0 2

  3. Then add the bind mount:

/mnt/obiwan/var/snap/lxd /var/snap/lxd none bind 0 0

That’s it. It’s that simple. Reboot and the bind mount will bind the LXD directory in Obiwan to Luke. The LXD containers will automagically load and all containers that were available in Obiwan are now available in Luke. If containers are created or modified in either OS, they will automagically show up in the other.
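As a quick sanity check after rebooting into Luke (assuming the example paths above), something like:

```shell
# Confirm the bind mount is in place and LXD sees the shared containers:
findmnt /var/snap/lxd   # should show /mnt/obiwan/var/snap/lxd bound here
lxc list                # should list the containers created under Obiwan
```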

I don’t know why symlinks don’t work. My guess is that a symlink is not an actual directory, whereas a bind mount presents the real directory at a second location, which is enough to fool snap and LXD into thinking that they are dealing with a natively resident directory.
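One way to see the distinction (a small demo with plain directories, no LXD involved):

```shell
# A symlink is just a file containing another path; a bind mount makes
# the kernel present the same directory at a second location.
mkdir -p /tmp/demo/real /tmp/demo/mnt
ln -sfn /tmp/demo/real /tmp/demo/link
stat -c %F /tmp/demo/link          # reports "symbolic link"
sudo mount --bind /tmp/demo/real /tmp/demo/mnt
stat -c %F /tmp/demo/mnt           # reports "directory"
sudo umount /tmp/demo/mnt
```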

The only fly in the ointment is ZFS. If you created your zpool first, outside of LXD like I do, then you will always need to go back to Obiwan to maintain/adjust the zpool, since the pool is associated with Obiwan’s ID. Of course, you can export it from Obiwan, then import it with Luke, but I haven’t experimented with that because I don’t want to mess up the good thing that I have going right now.
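For reference, the export/import dance would look roughly like this (an untested sketch; `tank` is a placeholder pool name):

```shell
# In Obiwan, before rebooting into Luke:
sudo zpool export tank
# After booting into Luke (scans attached devices and imports the pool):
sudo zpool import tank
```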

A word of caution though: this setup does make Luke more fragile. If the mount points change or break, Luke’s bootup process will grind to a halt, because fstab is processed very early in boot and the system won’t know what to do with the broken mounts.
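One possible mitigation (untested; paths and UUID are the placeholders from the example above) is to mark both entries `nofail`, so a missing Obiwan partition degrades gracefully instead of stalling boot:

```
/dev/disk/by-uuid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /mnt/obiwan ext4 errors=remount-ro,nofail 0 2
/mnt/obiwan/var/snap/lxd /var/snap/lxd none bind,nofail,x-systemd.requires-mounts-for=/mnt/obiwan 0 0
```

With `nofail`, LXD simply won’t see the shared data on that boot, rather than the whole system refusing to come up.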