Can't seem to export ZFS-backed container (already mounted?)

Having issues backing up a container with the lxc export command. I can back it up manually with zfs send/receive, but not with the lxc CLI.

The export fails saying the filesystem is already mounted, but I can't figure out a way to unmount it, because when I try it says it isn't mounted?

    root@p68 /zfs3/lxd-backups/p68/lxd # lxc export docs --compression none --instance-only --optimized-storage --debug docs-backup.tar.xz
DBUG[04-03|16:02:21] Connecting to a local LXD over a Unix socket
DBUG[04-03|16:02:21] Sending request to LXD                   method=GET url=http://unix.socket/1.0 etag=
DBUG[04-03|16:02:21] Got response struct from LXD
                "config": {
                        "core.https_address": "[::]",
                        "core.trust_password": true
                "api_extensions": [
                "api_status": "stable",
                "api_version": "1.0",
                "auth": "trusted",
                "public": false,
                "auth_methods": [
                "environment": {
                        "addresses": [
                        "architectures": [
                        "certificate": "-----BEGIN CERTIFICATE-----\nMIIB+jCCAYCgAwIBAgIQXkNER1Jpm+7ZLswstYQfIDAKBggqhkjOPQQDAzAxMRww\nGgYDVQQKExNsaW51eGNvbnRhaW5lcnMub3JnMREwDwYDVQQDDAhyb290QHA2ODAe\nFw0xOTExMTYxNDI0MjlaFw0yOTExMTMxNDI0MjlaMDExHDAaBgNVBAoTE2xpbnV4\nY29udGFpbmVycy5vcmcxETAPBgNVBAMMCHJvb3RAcDY4MHYwEAYHKoZIzj0CAQYF\nK4EEACIDYgAEJEpiFF8fSwJtQafwrAEnb3VpnjSGqpl9bjWWwtlV3ZBDsBV670g/\nkiVre1a0dgv1cx1eKIgsK5pZe/colAmOW7Z8jlCilTL8SkdRP4gQzmBQ+zJoWJTM\no/4J4xqHOJKZo10wWzAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAwwCgYIKwYBBQUH\nAwEwDAYDVR0TAQH/BAIwADAmBgNVHREEHzAdggNwNjiHBFhjQyaHECoBBPgBChNl\nAAAAAAAAAAIwCgYIKoZIzj0EAwMDaAAwZQIwW+rVUMiGByI+WlI6y86xcTEG+tzH\nTC0JQ89tdBjeXZSm+l76BMCM5Ldm/5Xl2nTZAjEAs8Wi4O85sy0LlDgvOTsDCosM\n8twfdK58/4RkKSJ1tSHXlmOXLj1tteHBSoaL8ORp\n-----END CERTIFICATE-----\n",
                        "certificate_fingerprint": "f59c9536f7d3f2e6d1457390484cc00ce363889c68230ec7464f26062af9a8ee",
                        "driver": "lxc",
                        "driver_version": "4.0.0",
                        "firewall": "xtables",
                        "kernel": "Linux",
                        "kernel_architecture": "x86_64",
                        "kernel_features": {
                                "netnsid_getifaddrs": "true",
                                "seccomp_listener": "true",
                                "seccomp_listener_continue": "true",
                                "shiftfs": "false",
                                "uevent_injection": "true",
                                "unpriv_fscaps": "true"
                        "kernel_version": "5.3.0-42-generic",
                        "lxc_features": {
                                "cgroup2": "true",
                                "mount_injection_file": "true",
                                "network_gateway_device_route": "true",
                                "network_ipvlan": "true",
                                "network_l2proxy": "true",
                                "network_phys_macvlan_mtu": "true",
                                "network_veth_router": "true",
                                "seccomp_notify": "true"
                        "project": "default",
                        "server": "lxd",
                        "server_clustered": false,
                        "server_name": "",
                        "server_pid": 27181,
                        "server_version": "4.0.0",
                        "storage": "dir | lvm | zfs",
                        "storage_version": "1 | 2.02.133(2) (2015-10-30) / 1.02.110 (2015-10-30) / 4.40.0 | 0.8.1-1ubuntu14.3"
DBUG[04-03|16:02:21] Connected to the websocket: ws://unix.socket/1.0/events
DBUG[04-03|16:02:21] Sending request to LXD                   method=POST url=http://unix.socket/1.0/instances/docs/backups etag=
                "name": "",
                "expires_at": "2020-04-04T16:02:21.646081049+02:00",
                "instance_only": true,
                "container_only": true,
                "optimized_storage": true,
                "compression_algorithm": "none"
DBUG[04-03|16:02:21] Got operation from LXD
                "id": "ac58cac7-fb58-4f83-9fc1-ad81a6883f7f",
                "class": "task",
                "description": "Backing up container",
                "created_at": "2020-04-03T16:02:21.647877081+02:00",
                "updated_at": "2020-04-03T16:02:21.647877081+02:00",
                "status": "Running",
                "status_code": 103,
                "resources": {
                        "backups": [
                        "containers": [
                        "instances": [
                "metadata": null,
                "may_cancel": false,
                "err": "",
                "location": "none"
DBUG[04-03|16:02:21] Sending request to LXD                   method=GET url=http://unix.socket/1.0/operations/ac58cac7-fb58-4f83-9fc1-ad81a6883f7f etag=
DBUG[04-03|16:02:21] Got response struct from LXD
                "id": "ac58cac7-fb58-4f83-9fc1-ad81a6883f7f",
                "class": "task",
                "description": "Backing up container",
                "created_at": "2020-04-03T16:02:21.647877081+02:00",
                "updated_at": "2020-04-03T16:02:21.647877081+02:00",
                "status": "Running",
                "status_code": 103,
                "resources": {
                        "backups": [
                        "containers": [
                        "instances": [
                "metadata": null,
                "may_cancel": false,
                "err": "",
                "location": "none"
Error: Create backup: Backup create: Failed to run: zfs mount zfs1/containers/docs: cannot mount 'zfs1/containers/docs': filesystem already mounted
root@p68 /zfs3/lxd-backups/p68/lxd # umount /var/snap/lxd/common/lxd/storage-pools/zfs1/containers/docs
umount: /var/snap/lxd/common/lxd/storage-pools/zfs1/containers/docs: not mounted.
root@p68 /zfs3/lxd-backups/p68/lxd # zfs unmount -f zfs1/containers/docs
cannot unmount 'zfs1/containers/docs': not currently mounted
root@p68 /zfs3/lxd-backups/p68/lxd # zfs list | grep docs
zfs1/containers/docs                                                                   162G   214G   134G  /var/snap/lxd/common/lxd/storage-pools/zfs1/containers/docs
zfs1/containers/docs-old                                                               119G   214G   119G  /var/snap/lxd/common/lxd/storage-pools/zfs1/containers/docs-old
root@p68 /zfs3/lxd-backups/p68/lxd #
nsenter --mount=/run/snapd/ns/lxd.mnt grep containers/docs /proc/self/mountinfo
nsenter --mount=/run/snapd/ns/lxd.mnt stat -f /var/snap/lxd/common/lxd/storage-pools/zfs1/containers/docs
nsenter --mount=/run/snapd/ns/lxd.mnt stat -f /var/snap/lxd/common/lxd/storage-pools/zfs1/containers/

For us to debug this.

Then to work around it:

nsenter --mount=/run/snapd/ns/lxd.mnt umount nsenter --mount=/run/snapd/ns/lxd.mnt stat -f /var/snap/lxd/common/lxd/storage-pools/zfs1/containers/docs
root@p68 /zfs3/lxd-backups # nsenter --mount=/run/snapd/ns/lxd.mnt grep containers/docs /proc/self/mountinfo
2615 3005 0:89 / /var/snap/lxd/common/shmounts/storage-pools/zfs1/containers/docs rw shared:741 - zfs zfs1/containers/docs rw,xattr,posixacl
534 2220 0:52 / /var/snap/lxd/common/lxd/storage-pools/zfs1/containers/docs-old rw shared:291 - zfs zfs1/containers/docs-old rw,xattr,posixacl

 root@p68 /zfs3/lxd-backups #
    root@p68 /zfs3/lxd-backups # nsenter --mount=/run/snapd/ns/lxd.mnt stat -f /var/snap/lxd/common/lxd/storage-pools/zfs1/containers/docs
      File: "/var/snap/lxd/common/lxd/storage-pools/zfs1/containers/docs"
        ID: 38c6c4c81d15e473 Namelen: 255     Type: ext2/ext3
    Block size: 4096       Fundamental block size: 4096
    Blocks: Total: 51342842   Free: 30844770   Available: 28219286
    Inodes: Total: 13107200   Free: 11420346

 root@p68 /zfs3/lxd-backups # nsenter --mount=/run/snapd/ns/lxd.mnt stat -f /var/snap/lxd/common/lxd/storage-pools/zfs1/containers/
      File: "/var/snap/lxd/common/lxd/storage-pools/zfs1/containers/"
        ID: 38c6c4c81d15e473 Namelen: 255     Type: ext2/ext3
    Block size: 4096       Fundamental block size: 4096
    Blocks: Total: 51342842   Free: 30844770   Available: 28219286
    Inodes: Total: 13107200   Free: 11420346

Ok, so exact same issue then…

Can you show:

nsenter --mount=/run/snapd/ns/lxd.mnt grep shmounts /proc/self/mountinfo
journalctl -u snap.lxd.daemon -n 300

To fix that particular one you’ll actually need:

nsenter --mount=/run/snapd/ns/lxd.mnt umount /var/snap/lxd/common/shmounts/storage-pools/zfs1/containers/docs/

Forgot to say, the last command you sent failed (the unmount one):
root@p68 /zfs3/lxd-backups # nsenter --mount=/run/snapd/ns/lxd.mnt umount nsenter --mount=/run/snapd/ns/lxd.mnt stat -f /var/snap/lxd/common/lxd/storage-pools/zfs1/containers/docs
umount: unrecognized option '--mount=/run/snapd/ns/lxd.mnt'

umount [-hV]
umount -a [options]
umount [options] <source> | <directory>

Unmount filesystems.

-a, --all unmount all filesystems
-A, --all-targets unmount all mountpoints for the given device in the
current namespace
-c, --no-canonicalize don’t canonicalize paths
-d, --detach-loop if mounted loop device, also free this loop device
--fake dry run; skip the umount(2) syscall
-f, --force force unmount (in case of an unreachable NFS system)
-i, --internal-only don't call the umount.<type> helpers
-n, --no-mtab don’t write to /etc/mtab
-l, --lazy detach the filesystem now, clean up things later
-O, --test-opts limit the set of filesystems (use with -a)
-R, --recursive recursively unmount a target with all its children
-r, --read-only in case unmounting fails, try to remount read-only
-t, --types limit the set of filesystem types
-v, --verbose say what is being done

-h, --help display this help and exit
-V, --version output version information and exit

For more details see umount(8).

Oops, that was some bad copy/paste in the middle of that one; anyway, it would have failed for another reason.

The one I gave in my last comment should work though.

Seems to be backing up / exporting now!

Thanks :grin:

Still have issues, albeit different ones.
I thought newer versions of LXD would not fill up the root dir? I can see root filling up, and since this is a 150 GB container the export fails.
I thought newer versions no longer needed temporary storage in root?
I know there is a way to override where the temporary files are stored, but I thought that was a workaround for older versions?


Are you using optimized or non-optimized backup?

The optimized version still writes a temporary file per volume, because the zfs tool doesn't provide a way of knowing how big the dump file will be, and that size is needed to populate the tar header before the dump is added to the tarball. However, it deletes each file after writing it to the tarball, before generating the dump file for the next snapshot, so less temporary storage is needed than before.

Non-optimized backup mode mounts the volume and adds each file individually to the tarball, avoiding the temporary copy of the whole volume that was made previously.

Worth keeping in mind, though, that the tarball itself is fully generated before the client downloads it, to avoid partially downloaded files. This means the full tarball will exist for a short period of time in both the temporary location and the download location.
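To illustrate that flow with a rough sketch (the seq stream and /tmp paths are stand-ins, not LXD's actual implementation): tar records each member's size in its header, so a stream of unknown length has to land on disk first, and the spool file can be deleted as soon as it is archived:

```shell
# Stand-in for "zfs send": a stream whose length isn't known up front.
# tar needs the size for the member header, so it must read a real file.
seq 1 100000 > /tmp/docs.dump                  # spool the stream to disk
tar -C /tmp -cf /tmp/docs-backup.tar docs.dump # size is known, header written
rm /tmp/docs.dump                              # spool file deleted once archived
```

This is why the temporary space needed is roughly one volume dump at a time, rather than the whole export.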

I’m using optimized.

OK, I'll go back to using my manual backup, which uses zfs send and receive, as that doesn't create any temporary files.

sudo /sbin/zfs send $LOCALSTORAGEPOOL/containers/$C@snapshot-snap1 | /usr/bin/mbuffer  | /usr/bin/pigz -0  | /usr/bin/mbuffer > /zfs3/lxd-backups/$LOCALHOST/lxd/$C.tar.xz

storage.backups_volume and storage.images_volume can be used to change the target.
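For example, pointing backups at a dedicated custom volume could look like this (the pool and volume names below are placeholders, not taken from this thread):

```shell
# Create a custom storage volume and use it as LXD's backup staging area.
# "zfs1" and "backups" are example names.
lxc storage volume create zfs1 backups
lxc config set storage.backups_volume zfs1/backups
```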

And indeed, our optimizations have been reducing the number of file copies, in some cases down to just one, some other cases still need an intermediate copy as is the case here. That’s still one less copy than we used to do though :slight_smile:

We’re now limited by constraints of the tar format. To write a file you need to know its size. If the zfs send tool can’t tell us the exact byte size of an export ahead of time, then we have no other options but to write it to a temporary location.


One thing to note about that approach is that it isn't creating a tarball, from what I can tell. Although the file's extension is .tar.xz, the pigz command in that pipeline produces a gzip-compressed file. Gzip does support streaming data in without knowing the source file size up front.

Unfortunately, tarballs do not. And because our export process stores multiple files (the instance volume, config files, and potentially multiple snapshots), the gzip format by itself is not sufficient to package the export as a single file.
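A quick illustration of that difference (the paths are just examples): gzip happily compresses a pipe of unknown length, which is why a zfs send | pigz pipeline needs no temporary file, but the result is one compressed stream rather than a multi-file archive:

```shell
# gzip is a pure stream format: no size is declared up front, so data
# can be piped straight in, unlike a tar member.
seq 1 100000 | gzip > /tmp/stream.gz
gzip -dc /tmp/stream.gz | tail -n 1   # -> 100000, the full stream round-trips
```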

Ah, thanks for clearing that up, Tom! I always wondered whether it was actually creating a tar file. Shows how much I know about different archive types.
Yeah, this quick and dirty export to gzip seems to work, but it's a more manual process to restore. I only ever use it as a last resort, as normally I send incremental backups between servers using syncoid.