Rootfs mount locations on the host with snap lxd

With the apt LXD package, I used Duplicati to back up container filesystems directly via their locations under “/var/lib/lxd/storage-pools/zfspool/containers/”. This all worked fine; they were all mounted there.

I migrated to the snap package last night and my container rootfs mounts are all gone from the host: they are not in “/var/snap/lxd/common/lxd/containers/” or “/var/snap/lxd/common/lxd/storage-pools/zfspool/containers/”. The latter location has a directory named for each of my containers, but these directories are completely empty. I can’t find the rootfs mounts anywhere on the host’s filesystem. Any ideas?

If you run mount, does the output help you?

Nope, that was actually the first thing I checked.

Here’s the entry for one of my containers, if it helps. I had to set raw.lxc and security.privileged to work around the “Failed to reset devices.list” bug on kernel 4.15 that I’m sure you know about.

architecture: x86_64
config:
  boot.autostart: "true"
  image.architecture: amd64
  image.description: ubuntu 18.04 LTS amd64 (release) (20180724)
  image.label: release
  image.os: ubuntu
  image.release: bionic
  image.serial: "20180724"
  image.version: "18.04"
  raw.lxc: lxc.cgroup.devices.allow=a
  security.privileged: "true"
  volatile.base_image: 38219778c2cf02521f34f950580ce3af0e4b61fbaf2b4411a7a6c4f0736071f9
  volatile.eth0.hwaddr: 00:16:3e:42:e5:dd
  volatile.idmap.base: "0"
  volatile.idmap.next: '[]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
devices:
  nas:
    path: /nas
    source: /media/Nastassia
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

And here is my profile:

config:
  environment.TZ: America/New_York
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: macvlan
    parent: eno1
    type: nic
  root:
    path: /
    pool: zfspool
    type: disk
name: default
used_by:
- /1.0/containers/nzbget
- /1.0/containers/pihole
- /1.0/containers/plex
- /1.0/containers/radarr
- /1.0/containers/sonarr
- /1.0/containers/torrent
- /1.0/containers/unifi
- /1.0/containers/vpn

Thanks for any help you can provide!

The LXD snap runs in its own mount namespace, which shields it from the host’s main mount table and effectively hides all your container filesystems. This is beneficial for a number of reasons and makes LXD’s behavior more predictable.

For backup scripts, we provide a symlink as a gateway into that separate mount namespace. By going through the (rather long) path /var/snap/lxd/common/mntns/var/snap/lxd/common/lxd/storage-pools/ you’ll be able to see your storage pools and containers, so long as LXD has them mounted.
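
For example, something like this shows the idea from the host (a rough sketch only; the pool name zfspool and the container name plex are taken from the config earlier in this thread, and the container has to be mounted by LXD at the time):

# List the containers LXD currently has mounted, seen through the mntns symlink
ls /var/snap/lxd/common/mntns/var/snap/lxd/common/lxd/storage-pools/zfspool/containers/

# The root filesystem of a single container (e.g. "plex") is then under:
ls /var/snap/lxd/common/mntns/var/snap/lxd/common/lxd/storage-pools/zfspool/containers/plex/rootfs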

That did it, thank you! I really looked and was unable to find it; may I humbly suggest this be documented somewhere googleable? Other than this thread, I suppose.

My use case is backups: I back up the filesystems directly via Duplicati, which seems to work fine. It saves me writing a cron script to back up to tarballs, delete the old backups, and so on.

Would you also be able to zfs mount the container datasets so they get mounted under /var/snap/lxd/common/lxd/storage-pools/zfspool/containers/? That way stopped containers still get backed up (when backing up /var/snap/lxd/common/lxd/storage-pools/zfspool/containers/); otherwise Duplicati will drop them from the latest view of the backup (sure, you still have the history, but it’s not as convenient).
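
Something along these lines, run right before the backup job, is what I have in mind (just a sketch; it assumes the pool is called zfspool and that LXD keeps each container in a zfspool/containers/<name> dataset, as in this thread):

#!/bin/sh
# Pre-backup step (sketch): mount every container dataset so that stopped
# containers also show up under the storage pool directory.
for ds in $(zfs list -H -o name -t filesystem -r zfspool/containers); do
    zfs mount "$ds" 2>/dev/null || true   # ignore datasets that are already mounted
done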

It doesn’t seem to do any harm (leave them zfs mounted, though, or the hidden snap mountpoint will disappear as well). I can still stop and start containers as usual. Am I missing something?

This didn’t work for me. It worked for the first container I tried, but not for the others, and then it stopped working for all of them. The tricky part is “so long as LXD has them mounted”: it seems LXD prefers not to have them mounted most of the time.

Hmm, indeed you’re right… only a freshly started container gets mounted, but after a while it apparently gets unmounted again. So I’ll just zfs mount them before the backup and leave them mounted, I guess…

You can make a snapshot directly with ZFS, mount the snapshot somewhere (it’s read-only anyway), and use that for the backup or whatever else. Then unmount and destroy the snapshot. I do this to back up custom volumes.
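
Roughly like this (a sketch only; the dataset, snapshot name and mountpoint are just examples, using the zfspool pool and the plex container from earlier in the thread):

# Snapshot the container dataset and mount the snapshot read-only
zfs snapshot zfspool/containers/plex@backup
mkdir -p /mnt/plex-backup
mount -t zfs zfspool/containers/plex@backup /mnt/plex-backup

# ... point Duplicati (or whatever else) at /mnt/plex-backup ...

# Clean up afterwards
umount /mnt/plex-backup
zfs destroy zfspool/containers/plex@backup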

This is my main reason for not using the snap LXD at all. I want fail2ban to monitor logfile changes on the server, and not being able (or only unpredictably being able) to access the container directory breaks the use case for the snap LXD as a whole for me. It’s sad that you get stuck with an old LXD version just because of a new shiny package manager, where the packages don’t even work exactly the same.

To update an ancient thread: I stopped backing up this way. Instead I’m using a shell script wrapped around this lxdbackup script, which backs up my containers to tarballs and syncs them to encrypted offsite storage via rclone.

https://github.com/cloudrkt/lxdbackup
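
For anyone curious, the general shape of my wrapper is roughly the following. This is not the lxdbackup script itself, just a self-contained sketch that uses lxc export for the tarballs; the backup directory and the offsite: rclone remote are placeholders for my real ones:

#!/bin/sh
# Sketch of a tarball-and-rclone backup wrapper (paths and remote are placeholders)
set -eu
BACKUP_DIR=/srv/lxd-backups
mkdir -p "$BACKUP_DIR"

# Export every instance to a dated tarball
for c in $(lxc list -c n --format csv); do
    lxc export "$c" "$BACKUP_DIR/$c-$(date +%F).tar.gz"
done

# Sync the tarballs to encrypted offsite storage
rclone sync "$BACKUP_DIR" offsite:lxd-backups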