Rootfs mount locations on the host with snap lxd


#1

With the apt LXD package, I used Duplicati to back up container filesystems directly via their locations under “/var/lib/lxd/storage-pools/zfspool/containers/”. This all worked fine; the rootfs directories were all mounted there.

I migrated to the snap package last night, and my container rootfs mounts are all gone from the host. They are not located in “/var/snap/lxd/common/lxd/containers/” or “/var/snap/lxd/common/lxd/storage-pools/zfspool/containers/”. The latter location has directories named for each of my containers, but these directories are completely empty. I can’t find the rootfs mounts anywhere on the host’s filesystem. Any ideas?


#2

If you run mount, does the output help you?


#3

Nope, that was actually the first thing I checked.

Here’s the entry for one of my containers, if it helps. I had to set raw.lxc and security.privileged to work around the “Failed to reset devices.list” bug in 4.15 that I’m sure you know about.

architecture: x86_64
config:
  boot.autostart: "true"
  image.architecture: amd64
  image.description: ubuntu 18.04 LTS amd64 (release) (20180724)
  image.label: release
  image.os: ubuntu
  image.release: bionic
  image.serial: "20180724"
  image.version: "18.04"
  raw.lxc: lxc.cgroup.devices.allow=a
  security.privileged: "true"
  volatile.base_image: 38219778c2cf02521f34f950580ce3af0e4b61fbaf2b4411a7a6c4f0736071f9
  volatile.eth0.hwaddr: 00:16:3e:42:e5:dd
  volatile.idmap.base: "0"
  volatile.idmap.next: '[]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
devices:
  nas:
    path: /nas
    source: /media/Nastassia
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

And here is my profile:

config:
  environment.TZ: America/New_York
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: macvlan
    parent: eno1
    type: nic
  root:
    path: /
    pool: zfspool
    type: disk
name: default
used_by:
- /1.0/containers/nzbget
- /1.0/containers/pihole
- /1.0/containers/plex
- /1.0/containers/radarr
- /1.0/containers/sonarr
- /1.0/containers/torrent
- /1.0/containers/unifi
- /1.0/containers/vpn

Thanks for any help you can provide!


(Stéphane Graber) #4

The LXD snap runs in its own mount namespace which shields it from the main mount table and effectively hides all your container filesystems away. This is beneficial for a bunch of different reasons and makes LXD’s behavior more predictable.

For backup scripts, we provide a symlink as a gateway into that separate mount namespace. Going through the (rather long) path /var/snap/lxd/common/mntns/var/snap/lxd/common/lxd/storage-pools/ you’ll be able to see your storage pools and containers, so long as LXD has them mounted.
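For anyone scripting against this, the full path can be assembled like so. The pool and container names below are just examples taken from earlier in this thread; substitute your own:

```shell
# Example pool/container names from this thread; adjust to your setup.
POOL=zfspool
CONTAINER=plex

# The snap exposes its private mount namespace via this host-side symlink.
MNTNS=/var/snap/lxd/common/mntns

# Full path to the container's rootfs as seen through the snap's namespace.
ROOTFS="$MNTNS/var/snap/lxd/common/lxd/storage-pools/$POOL/containers/$CONTAINER/rootfs"
echo "$ROOTFS"
```

Note that this path only resolves while LXD actually has the container’s storage mounted.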


#5

That did it, thank you! I really looked and was unable to find it; may I humbly suggest this be documented somewhere googleable? Other than in this thread, I suppose.

My use case is backups: I back up the filesystems directly via Duplicati, which seems to work fine. That saves me writing a cron script to create tarballs, delete the old backups, and so on.


(Idef1x) #6

Would you also be able to zfs mount the container datasets so they get mounted under /var/snap/lxd/common/lxd/storage-pools/zfspool/containers/ ?
In that case, stopped containers would still get backed up (if backing up /var/snap/lxd/common/lxd/storage-pools/zfspool/containers/); otherwise Duplicati will drop them from the latest view of the backup (sure, you have a history, but that’s not as convenient).

It doesn’t seem to do any harm (leave them zfs mounted, though, or the hidden snap mountpoint will also disappear). I can still stop and start containers as usual. Am I missing something?
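To illustrate what I mean, a rough sketch. It assumes the default <pool>/containers/<name> dataset layout and uses container names from earlier in the thread; the loop only prints the zfs mount commands, so drop the echo to actually run them on a real host:

```shell
# Sketch only: print (rather than run) the zfs mount command for a few
# containers named earlier in the thread. Assumes datasets follow the
# default <pool>/containers/<name> layout of a LXD ZFS storage pool.
POOL=zfspool
for name in nzbget pihole plex; do
    echo zfs mount "$POOL/containers/$name"
done
```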