With the apt LXD package, I used Duplicati to back up container filesystems directly via their locations under “/var/lib/lxd/storage-pools/zfspool/containers/”. This all worked fine; they were all mounted there.
I migrated to the snap package last night, and my container rootfs mounts are all gone from the host: they are not located in “/var/snap/lxd/common/lxd/containers/” or “/var/snap/lxd/common/lxd/storage-pools/zfspool/containers/”. The latter location has directories named for each of my containers, but these directories are completely empty. I can’t find the rootfs mounts anywhere on the host’s filesystem. Any ideas?
Nope, that was actually the first thing I checked.
Here’s the entry for one of my containers, in case it helps. I had to set raw.lxc and privileged to get around the “Failed to reset devices.list” bug in 4.15 that I’m sure you know about.
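For anyone landing here later, a hedged sketch of setting the two keys mentioned above. The container name and the exact raw.lxc value are placeholders, not the poster’s actual config; the right raw.lxc line depends on which workaround you use for that bug:

```shell
# Hypothetical example only: "mycontainer" and the raw.lxc value are
# placeholders, not taken from this thread.
lxc config set mycontainer security.privileged true
lxc config set mycontainer raw.lxc 'lxc.cgroup.devices.allow = a'
```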
The LXD snap runs in its own mount namespace, which shields it from the host’s main mount table and effectively hides all your container filesystems. This is beneficial for a number of reasons and makes LXD’s behavior more predictable.
For backup scripts, we provide a symlink as a gateway into that separate mount namespace: going through the (rather long) /var/snap/lxd/common/lxd/storage-pools/ path under /var/snap/lxd/common/mntns, you’ll be able to see your storage pools and containers, so long as LXD has them mounted.
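In other words, a backup tool can be pointed at the path below (requires root, and only shows containers that LXD currently has mounted). This is just a sketch restating the path from the post above, with the pool name “zfspool” taken from this thread:

```shell
# Gateway into the snap's mount namespace; pool name "zfspool" as in this thread.
POOLS=/var/snap/lxd/common/mntns/var/snap/lxd/common/lxd/storage-pools
# List the container rootfs directories visible through the namespace.
sudo ls "$POOLS"/zfspool/containers/
```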
That did it, thank you! I really looked and was unable to find it. May I humbly suggest this be documented somewhere googleable? Other than this thread, I suppose.
My use case is backups: I back up the filesystems directly with Duplicati, which seems to work fine. It saves me writing a cron script to back up to tarballs, delete the old backups, and so on.
Would you also be able to zfs mount the container datasets so they get mounted under /var/snap/lxd/common/lxd/storage-pools/zfspool/containers/?
That way, stopped containers also get backed up (if you back up /var/snap/lxd/common/lxd/storage-pools/zfspool/containers/); otherwise Duplicati deletes them from the latest view of the backup (sure, you still have the history, but that’s not as convenient).
It doesn’t seem to do any harm (leave them zfs mounted, though, or the hidden snap mountpoint will also disappear). I can still stop and start containers as usual. Am I missing something?
This didn’t work for me. It worked for the first container I tried, but not for the others; then it stopped working for all of them. The tricky part is the “so long as LXD has them mounted” caveat: it seems LXD prefers not to have them mounted most of the time.
Hmm, indeed you’re right… a freshly started container gets mounted, but after a while it apparently gets unmounted. So I’ll experiment further with just zfs mounting them before the backup and leaving them mounted, I guess…
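The “zfs mount them before the backup” idea could be sketched as a small pre-backup loop. This is an assumption-laden sketch, not a tested script: it assumes the pool layout zfspool/containers/&lt;name&gt; from this thread, and it simply ignores datasets that are already mounted:

```shell
#!/bin/sh
# Sketch: mount every container dataset under zfspool/containers before
# a backup run. Already-mounted datasets make "zfs mount" fail; ignore that.
for ds in $(zfs list -H -o name -r zfspool/containers); do
    zfs mount "$ds" 2>/dev/null || true
done
```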
You can take a ZFS snapshot directly with ZFS, mount the snapshot somewhere (it’s read-only anyway), and use that (for backup or whatever else). Then unmount and destroy the snapshot. I do this to back up custom volumes.
This is my main reason for not using the LXD snap at all. I want fail2ban to monitor logfile changes on the server, and not being able (or only unpredictably being able) to access the container directory breaks the use case of the snap LXD as a whole for me. It’s sad to get stuck with an old LXD version just because of a new shiny package manager, where the packages don’t even work exactly the same.
To update an ancient thread: I stopped backing up this way. Instead I’m using a shell script wrapped around this guy’s LXDbackup script, which backs up my containers to tarballs and syncs them to encrypted offsite storage via rclone.