Mounting local directory to remote instance

Hi all,

We’re looking at adding support to craft-providers to interact with remote LXD instances and I’d like to get some feedback.

The craft applications need to mount a local directory inside the instance. Snapcraft supported this from versions 2.3 to 2.42. This was implemented almost 7 years ago and I hope there is now a better way.

My ideas are below.

option 1 - lxc config device add

For local instances, craft-providers currently calls lxc config device add local:<instance-name> disk-/root/project disk source=<local-project-path> path=/root/project.

However, this doesn’t work for remote instances, because the disk device’s source path is resolved on the machine running the LXD server, not on the local machine running lxc.
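
For reference, a minimal sketch of both cases (the instance, remote, and path names here are made up):

```
# Local instance: works, because the source path exists on the same
# machine as the LXD daemon.
lxc config device add local:my-instance disk-/root/project disk \
    source=/home/user/project path=/root/project

# Remote instance: the source path is resolved on the remote LXD server's
# filesystem, so a directory that only exists on my machine can't be
# attached this way.
lxc config device add my-remote:my-instance disk-/root/project disk \
    source=/home/user/project path=/root/project
```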

option 2 - lxc file mount

I don’t think lxc file mount will work, because it mounts in the wrong direction (a directory in the instance is mounted onto the host).
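
For clarity, this is the direction lxc file mount works in (names here are hypothetical):

```
# Exposes a path *inside* the instance on the local host,
# i.e. the opposite of what we need here.
mkdir -p /tmp/instance-project
lxc file mount my-remote:my-instance/root/project /tmp/instance-project
```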

option 3 - lxc file push

This is probably a no-go. We need live access to the project directory to support features like snapcraft try.
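
A sketch of what this would look like (hypothetical names again); it’s a one-shot copy, so local edits after the push aren’t visible in the instance:

```
# Recursively copy the project into the instance once; changes made
# locally afterwards are not reflected until the next push.
lxc file push --recursive /home/user/project my-remote:my-instance/root/
```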

option 4 - reverse sshfs mount

Snapcraft 2.x cleverly uses a reverse sshfs mount (see here).

This would work, but it’s quite hacky. I’m also seeing that lxc exec --cwd=<path-to-mounted-directory> enters at the root directory (/) instead of the mounted directory.
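
Roughly the shape of the trick as I understand it (the fifo plumbing, names, and paths below are illustrative, not the exact Snapcraft implementation): an sftp-server process on the host serves the local project directory, and sshfs runs inside the instance in slave mode (newer sshfs releases call this -o passive), with the SFTP stream carried over lxc exec’s stdin/stdout.

```
# Host side: cross-connect a local sftp-server with sshfs running inside
# the instance, using lxc exec's stdio as the transport.
mkfifo /tmp/sshfs-reverse

/usr/lib/openssh/sftp-server < /tmp/sshfs-reverse | \
    lxc exec my-remote:my-instance -- \
    sshfs -o slave ":/home/user/project" /root/project > /tmp/sshfs-reverse
```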


Would anyone mind providing some feedback or ideas? Option 4 is the only one I imagine will work.

I think option 4 is the only one that is suitable currently.
It’s quite an interesting idea though and maybe something we could build into LXD in the future.

CC @stgraber

I have some interest in this area. Just logically, you either use a network filesystem or a cluster/sync filesystem.

I like the look of seaweedfs as a cluster filesystem, but someone else tried a bunch of options and decided that simple rsync with some tooling was “better” (over high latency).

Distributed file system inside LXC/LXD containers
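
If I understand the rsync approach correctly, the “tooling” is essentially a loop (or inotify trigger) around something like this, pushing one way into the container over ssh (hypothetical host name and paths):

```
# One-way sync of the local project into the container; run periodically
# or from an inotify watcher (e.g. inotifywait / lsyncd).
rsync -az --delete /home/user/project/ container-host:/srv/project/
```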

I can’t find the link, but I found another discussion where someone had tried various options for a low-latency local storage mount and came out in favour of sshfs as the fastest for them.

My only observation on sshfs is that it “gets stuck” in various corner cases. E.g. if you restart ssh (reboot) on the serving-side machine, it may not re-establish correctly, or if you lose the link for a while. There are also concerns about caching if you modify files in multiple locations. Attaching database-type files is suboptimal, etc.

I guess there is also NFS/Samba though? (Or ceph)

I think the discussion I started still covers most of the options (there are a few extra ones now), but it was mostly aimed at a geographically distributed cluster where 300ms+ read latency is a killer. We distinguish four scenarios.

  1. When there’s a secure, low-latency network, we use NFS. It is simple and fast (particularly if you enable jumbo frames). Mount on the host, then bind-mount into lxc/lxd (rough sketch after this list).
  2. When the network isn’t secure and latency isn’t an issue, we use NFS-over-WireGuard. Same config - set it up on the host and bind-mount into lxc/lxd. Depending on traffic, encryption can start to take a reasonable amount of CPU, and sshfs sometimes hits this problem. Some options we didn’t consider in the discussion you cited are iSCSI and NBD. Quite a few people like NBD, which has TLS built-in.
  3. For geographically distributed systems where read latency is important, we still use csync2 + lsyncd in real life. It isn’t perfect, but it does the job and can be configured entirely inside an unprivileged container.
  4. For databases & the like (e.g. DNS, Galera, MongoDB) we handle the replication at the application layer.
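
For scenario 1 (and 2, if the NFS server is reached over its WireGuard address instead), the host-side mount plus bind-mount into the container is roughly this (server name and paths are examples):

```
# On the LXD host: mount the NFS export (add to /etc/fstab to persist).
mount -t nfs nfs-server:/srv/projects /mnt/projects

# Then bind-mount it into the container as a disk device.
lxc config device add my-container projects disk \
    source=/mnt/projects path=/srv/projects
```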