Use a single LXD container on multiple hosts (not concurrently)

(Evgeny Pokhilko) #1

There are several use cases where you want to share one container between
multiple hosts. I am not talking about simultaneous access; I stop the container
before opening it on another host. I have a dual-boot laptop with two Linux
systems, and I would also like to carry a container on a USB stick. In other words,
the hosts don’t have network access to each other, but they can share disk space.

I suppose the obvious way of doing it is to export an image and import it on the
other system, but that takes too much time, and it is easy to forget a step and
overwrite your changes.
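For reference, that image-based round trip looks roughly like this (the container
and alias names are placeholders, and the shared path is just an example):

```shell
# On host A: publish the stopped container as an image, then export it
lxc publish mycontainer --alias mycontainer-snap
lxc image export mycontainer-snap /path/to/shared/mycontainer

# On host B: import the image and recreate the container from it
lxc image import /path/to/shared/mycontainer.tar.gz --alias mycontainer-snap
lxc launch mycontainer-snap mycontainer
```

Every transfer copies the full image, which is why it gets slow for large containers.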

On my multiboot laptop I configured a btrfs subvolume that I mount under
/var/lib/lxd/storage-pools/ on both systems, and then ran "lxd import <container-name>".
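A minimal sketch of that setup, assuming a pool named "default" and a container
named "mycontainer" (the device, subvolume, and names are all placeholders):

```shell
# Mount the shared btrfs subvolume where LXD expects the storage pool
mount -o subvol=lxd-pool /dev/sdX /var/lib/lxd/storage-pools/default

# Ask the LXD daemon to re-create the database entry for the container
# from what it finds on disk in the pool
lxd import mycontainer
```

The same mount is repeated on the other host (or for the USB stick) before
running the import there.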

It seems to work but I have a few concerns:

  1. The documentation describes “lxd import” as:

     This recovery mechanism is mostly meant for emergency recoveries…

  2. The documentation does not say that you need to place your imported container’s
    storage pool under /var/lib/lxd/storage-pools. I found that in a reply Stéphane
    Graber made in a conversation with somebody else, after multiple failed attempts
    to call lxd import in other directories.

  3. Calling the daemon lxd instead of lxc is unusual, because all other storage
    manipulation commands are done with lxc.

Does the container have any host-specific information that could cause a
conflict on a different host? What prevents you from moving this feature beyond
"emergency recoveries"? How can the clustering in 3.0 help with my use case?

(Stéphane Graber) #2

Clustering is unlikely to help you there unless you’re also using CEPH for storage, but then again you’d need a majority of the cluster nodes to be online at any one time, which doesn’t really fit what you describe.

In your case, the upcoming backup/restore API may be a better fit and will require a bit less time and work than publishing and importing an image.
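In later LXD releases that API is exposed as `lxc export` and `lxc import`; a
sketch of what the workflow would look like, assuming the same placeholder names
as above:

```shell
# On host A: write the stopped container (config, disk, snapshots)
# to a single tarball on the shared disk
lxc export mycontainer /path/to/shared/mycontainer-backup.tar.gz

# On host B: recreate the container from that tarball
lxc import /path/to/shared/mycontainer-backup.tar.gz
```

Unlike the image route, this carries the container's configuration and snapshots
along with it in one step.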

You can learn more about that here:

I’ve also sent: