LXD with ZFS - where is my file?

Today I had my first attempt at using LXD with ZFS ‘properly’: I created a dedicated mirrored ZFS pool for LXD. After creating the pool, I ran lxd init and configured LXD to use it. When I created a new LXD container, a new ZFS dataset was created for it, as I expected.

This is where things go off track for me. When I look at the output of zfs list to see where my LXD container is mounted and cd to that mount point, it is empty, even though I have successfully logged into the container and created a test file. I cannot find this test file when I search for it from the host.

How/where do I access the filesystems of LXD containers from the host system when using the ZFS storage backend? Is it not possible to achieve what I want without using ZFS for the host system too? My (test) LXD server is Ubuntu 20.04 using ext4 for the system drive.

Hi!

LXD uses namespaced mount points for the containers, so a simple cd into a directory will not show you the files. You first need to enter LXD's namespace, and then you are set.

Here is the command:

sudo nsenter -t $(cat /var/snap/lxd/common/lxd.pid) -m
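
If you just want to run a one-off command rather than an interactive shell, you can also point nsenter at the mount namespace file that snapd keeps for LXD:

sudo nsenter --mount=/run/snapd/ns/lxd.mnt -- ls /var/snap/lxd/common/lxd/storage-pools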

See more at https://blog.simos.info/how-to-view-the-files-of-your-lxd-container-from-the-host/


Hi Simos

Thanks, I can see my container's contents now. I'm new to namespaced mount points. Why are they used for LXD, and what is the advantage of using them? Also, why can I not access the same range of programs after switching namespace? It seems to affect my PATH.

I could not see any mention of namespaced mount points in the LXD docs, so maybe I should open a bug report? I would expect them to at least be mentioned on this page:

https://lxd.readthedocs.io/en/latest/storage/#sharing-with-the-host

but they don't seem to be mentioned at all.

I have traditionally used zfs-auto-snapshot on my ZFS systems to automate snapshots. Does it still make sense to use zfs-auto-snapshot with LXD ZFS pools, or does LXD have a similar feature I should use instead? Maybe zfs-auto-snapshot doesn't work with namespaced mount points?

This isn't an LXD-specific thing; it is related to the snap packaging system, which uses mount namespaces to present an isolated root filesystem to the snap. This is also why different programs are available inside that mount namespace.


LXD has an auto-snapshot feature you may find useful. In general, you should not alter or interact with LXD-managed storage pools directly, as LXD will not be aware of the changes and this is likely to cause unexpected problems.

See snapshots.schedule on https://linuxcontainers.org/lxd/docs/master/instances
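
For example, snapshots.schedule takes a cron expression (c1 here is just a placeholder container name):

# Snapshot the container every day at 06:00 (standard cron syntax).
lxc config set c1 snapshots.schedule "0 6 * * *"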


Thanks Thomas

It doesn't sound like the LXD scheduled snapshot feature is as fully featured as zfs-auto-snapshot; the main difference is that with zfs-auto-snapshot you can configure how long snapshots are kept.

With LXD, would I be required to manually prune old snapshots, or set up a cron job to remove snapshots older than a certain age?

You can use the snapshots.expiry option for that. See https://linuxcontainers.org/lxd/docs/master/instances
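
For example, to have scheduled snapshots removed after two weeks (again with a placeholder container name):

# Keep each scheduled snapshot for two weeks, then let LXD expire it.
lxc config set c1 snapshots.expiry 2w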


Oops! I missed that option. Thanks!

Don't mount namespaces introduce potential issues with data recovery? Say, for example, my LXD server's OS drive died and I didn't have a backup of it. Might I have issues recovering all of the files in the pool if I imported the LXD ZFS pool on another machine?

I presume this means that LXD ZFS pools are only fully usable on Linux machines with snap installed, i.e. I wouldn't be able to access the container rootfs directories by attaching the disks to a FreeBSD machine, for example?

No, the on-disk format is the same as using ZFS on Linux without mount namespaces; it's just that we package the ZFS tools inside the snap, and the use of mount namespaces means that the ZFS tools on the host cannot see that some volumes are mounted.
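
A quick way to see this from the host (assuming a pool named zfs, as in the example further down):

# The pool and its datasets are fully visible to the host's ZFS tools...
zfs list -r zfs

# ...but `zfs mount` (which lists mounted ZFS filesystems) shows none of
# the container datasets, because those mounts live in the snap's namespace.
zfs mount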

Yesterday I installed LXD under Alpine Linux, using ZFS for both the root disk and the LXD storage pool. Of course, Alpine does not use snaps, so the files were all visible without changing namespaces.

Is the only way to work around this under Ubuntu to build LXD from source rather than using the snap? Maybe there is a PPA with traditional Ubuntu deb packages of LXD for 20.04?

Why does snap use mount namespaces? What is the advantage? Ideally we would be able to disable this to save building LXD from source.

What is it that you’re trying to achieve by seeing the files mounted on the host?

Snap's mount namespaces are used so that different versions of software can be used independently of the host. This is what allows a snap to bundle software inside it and run on multiple different versions of the host OS.
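
You can see these namespaces from the host with standard tools, for example:

# List the mount namespaces present on the host (lsns is part of util-linux).
sudo lsns --type mnt

# snapd keeps a reference to each snap's mount namespace here; lxd.mnt is
# the file used with nsenter earlier in this thread.
ls -la /run/snapd/ns/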

I'm thinking about data portability, accessibility and recovery. Namespaces only seem to complicate things if I wanted to use zfs-auto-snapshot or other such tools. I've been using Proxmox with ZFS for a year or two now and I want a similar config where each container has its own regular dataset. If my Proxmox server died, I could do a full recovery from any machine that could import ZFS pools, which gives me a lot of options. My options with LXD installed via snap seem to be greatly reduced thanks to snap's use of namespaces.

I see. I think you may be conflating a few different aspects, though. To answer your question directly: the snap package is the only official distribution method for LXD 4.0 and above.

However, the mount namespace does not prevent using the ZFS tools for all operations; it's just that the containers are not mounted directly on the host.

Perhaps an example would help. This is a fresh Ubuntu 20.04 VM with LXD installed from the snap:

# Create a ZFS storage pool on a loopback image (although the recommendation for production is to use a dedicated disk or partition for the zpool).
lxc storage create zfs zfs

# Launch container on ZFS pool.
lxc launch images:ubuntu/focal c1 -s zfs

# Try to use `zfs list` on the host:
apt install zfsutils-linux

root@v1:/# zfs list
NAME                                                                          USED  AVAIL     REFER  MOUNTPOINT
zfs                                                                           212M  4.15G       24K  none
zfs/containers                                                               2.98M  4.15G       24K  none
zfs/containers/c1                                                            2.95M  4.15G      208M  /var/snap/lxd/common/lxd/storage-pools/zfs/containers/c1
zfs/custom                                                                     24K  4.15G       24K  none
zfs/deleted                                                                   120K  4.15G       24K  none
zfs/deleted/containers                                                         24K  4.15G       24K  none
zfs/deleted/custom                                                             24K  4.15G       24K  none
zfs/deleted/images                                                             24K  4.15G       24K  none
zfs/deleted/virtual-machines                                                   24K  4.15G       24K  none
zfs/images                                                                    208M  4.15G       24K  none
zfs/images/c3e80efdcd15823ef2f372955915f94f65a24a0444e5c32dada6a72ba6e31cd8   208M  4.15G      208M  /var/snap/lxd/common/lxd/storage-pools/zfs/images/c3e80efdcd15823ef2f372955915f94f65a24a0444e5c32dada6a72ba6e31cd8
zfs/virtual-machines                                                           24K  4.15G       24K  none

So the ZFS tool can see the volumes created by LXD inside the snap.
However, if I go to the mount path /var/snap/lxd/common/lxd/storage-pools/zfs/containers/c1, I can see it is empty from the host but populated inside the snap's mount namespace:

root@v1:/# ls -la /var/snap/lxd/common/lxd/storage-pools/zfs/containers/c1
total 8
d--x------ 2 root root 4096 Feb  1 10:02 .
drwx--x--x 3 root root 4096 Feb  1 10:02 ..

root@v1:/# sudo nsenter --mount=/run/snapd/ns/lxd.mnt -- ls -la /var/snap/lxd/common/lxd/storage-pools/zfs/containers/c1
total 11
d--x------  4 1000000 root       6 Feb  1 10:02 .
drwx--x--x  3 root    root    4096 Feb  1 10:02 ..
-r--------  1 root    root    2999 Feb  1 10:02 backup.yaml
-rw-r--r--  1 root    root     526 Jan 31 07:53 metadata.yaml
drwxr-xr-x 17 1000000 1000000   23 Jan 31 07:53 rootfs
drwxr-xr-x  2 root    root       4 Jan 31 07:53 templates

If you need to temporarily mount a volume in the host namespace, you can do that too (even while the container is running):

zfs mount zfs/containers/c1
ls -la /var/snap/lxd/common/lxd/storage-pools/zfs/containers/c1
total 11
d--x------  4 1000000 root       6 Feb  1 10:02 .
drwx--x--x  3 root    root    4096 Feb  1 10:02 ..
-r--------  1 root    root    3015 Feb  1 10:14 backup.yaml
-rw-r--r--  1 root    root     526 Jan 31 07:53 metadata.yaml
drwxr-xr-x 17 1000000 1000000   23 Jan 31 07:53 rootfs
drwxr-xr-x  2 root    root       4 Jan 31 07:53 templates

However, you should ensure it is unmounted from the host before the container is stopped, as the container can hit issues cleaning up its own mount if the volume is still in use in the other namespace.
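
When you are done, release the mount from the host again:

zfs unmount zfs/containers/c1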

The topic of auto-snapshots using external tools is separate from mount namespaces, though. The reason that using an external tool to create snapshots of an LXD ZFS-backed volume may cause issues is that LXD is not aware of them, so if you come to perform an operation on a container (e.g. copy, move, rename, resize, delete, etc.) it may fail because the external snapshots are not handled, or it may result in data loss if the snapshots are removed unexpectedly. Also, depending on the names of the snapshots, it at least introduces the possibility of naming conflicts.
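
If you end up with a mix of LXD-created and external snapshots, you can at least see what exists (using the pool and container names from the example above):

# List all ZFS snapshots under the container's dataset. Snapshots created
# by LXD itself normally use a "snapshot-" prefix; anything else here was
# created externally.
zfs list -t snapshot -r zfs/containers/c1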

@stgraber is it possible to still use zfs recovery tools when using LXD with the snap?


I have successfully imported into Alpine Linux a ZFS pool that I created under Ubuntu 20.04 with the LXD snap, and I was able to access the files stored in my test container, so I think I've sufficiently convinced myself that the snap's use of mount namespaces won't cause me any DR issues.
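
Roughly, the recovery looks like this (a sketch using the pool and container names from the example above; adjust to your own setup):

# Import the pool; -f may be needed if it was not cleanly exported
# from the old host.
sudo zpool import -f zfs

# Mount a container's dataset to reach its rootfs. If the dataset's
# mountpoint property is "legacy", use mount -t zfs instead.
sudo zfs mount zfs/containers/c1
# or: sudo mkdir -p /mnt/c1 && sudo mount -t zfs zfs/containers/c1 /mnt/c1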



@tomp, an old thread here Thomas, but it is relevant to a problem I hit. Due to a failed remote copy, I have partial container file(s) created but not yet registered in the LXD database.

Do you think I can manually sudo rm the file at:

HP-EliteDesk-20:/var/snap$ sudo zfs list
NAME                                 USED  AVAIL  REFER  MOUNTPOINT
default/containers/wrc_drupal-wrc   1.07G   962M  1.07G  legacy

Would I need to enter the namespace to do this, as you mention above?

Thanks
Joe

You can do sudo zfs destroy default/containers/wrc_drupal-wrc
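
Note that if the dataset has snapshots left over from the failed copy, a plain zfs destroy will refuse with a "has children" error. In that case, check first and then destroy recursively:

# See whether any snapshots exist under the dataset.
zfs list -t snapshot -r default/containers/wrc_drupal-wrc

# Destroy the dataset together with any snapshots under it.
sudo zfs destroy -r default/containers/wrc_drupal-wrc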