Debian 10 with lxd + snapd, access to file system of stopped container

Hello, am I correct in thinking that there is no access to the file system of a stopped container in this configuration? I see that the (tortuous) path works when a container is running but goes nowhere when it is stopped. I therefore wonder how one might fix a container that won't boot after an upgrade, for example, or one which, after a clone/copy, shares an IP address with its original. At the moment I have to boot the container and use lxc exec <container> /bin/bash to edit it live; I can see no way to preset settings before starting the container.

Am I missing something really basic? This seems to be a serious flaw for production containers…

Am I correct in thinking that there is no access to the file system of a stopped container in this configuration?

lxc file push and lxc file pull will work with stopped containers, e.g.:

lxc file push new_config.yaml MY_CONTAINER/etc/netplan/FILE_TO_REPLACE.yaml
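It also works in the other direction, so you can pull a file out, edit it locally, and push it back; a sketch using the same placeholder names:

lxc file pull MY_CONTAINER/etc/netplan/FILE_TO_REPLACE.yaml ./
# edit the local copy, then:
lxc file push FILE_TO_REPLACE.yaml MY_CONTAINER/etc/netplan/FILE_TO_REPLACE.yaml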

Terrific - that works!

Not sure I'd want to troubleshoot one file at a time, but for config tweaks on clones etc., this is perfect.
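For the clone-sharing-an-IP problem I mentioned, one common cause (an assumption on my part, as it depends on the guest's DHCP client) is that the copy keeps the original's /etc/machine-id, which systemd-based DHCP clients use to derive their lease identity. Since lxc file push works on stopped containers, the clone can be given a fresh identity before first boot; MY_CLONE is a placeholder:

# An empty /etc/machine-id is regenerated on boot, so the clone
# then requests its own DHCP lease.
printf '' > /tmp/machine-id
lxc file push /tmp/machine-id MY_CLONE/etc/machine-id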

Thank you.

You can also access the files directly if you know your storage pool details.

If the storage pool is ZFS, then this works:

$ lxc stop mycontainer
$ zfs list | grep mycontainer
lxd/containers/mycontainer      5,18M  354,3G   541M  /var/snap/lxd/common/lxd/storage-pools/lxd/containers/mycontainer
$ sudo zfs mount lxd/containers/mycontainer
$ sudo ls -l /var/snap/lxd/common/lxd/storage-pools/lxd/containers/mycontainer
total 6
-r--------  1 root root 2733 Feb 23 15:55 backup.yaml
-rw-r--r--  1 root root 1047 Feb 23 01:33 metadata.yaml
drwxr-xr-x 18 root root   24 Feb 23 00:06 rootfs
drwxr-xr-x  2 root root    7 Feb 23 01:33 templates
$ # Access the filesystem in "rootfs".
$ sudo zfs umount lxd/containers/mycontainer

Brilliant - I hadn't got the path right for the zfs mount; your command makes it so simple to extract files! One of those "why didn't I think of this" moments :wink:

OK, so that was my last concern sorted out - thank you very much indeed.

Except that after I unmount the container, it refuses to start via lxc start, stating that the directory (insert long path here) is not empty??? Any ideas? I tried umount again and got this, so ZFS doesn't think it's mounted…

$ zfs umount pool1/containers/container_name
cannot unmount 'pool1/containers/container_name': not currently mounted

So I looked, and there was a backup.yaml file in the directory. I moved it elsewhere and the container now starts fine.
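In case it helps anyone else, the check and workaround amounted to something like this (pool and container names as in the error output above, assuming the snap's usual mountpoint layout shown earlier):

$ sudo ls -la /var/snap/lxd/common/lxd/storage-pools/pool1/containers/container_name
# a stray backup.yaml shows up here
$ sudo mv /var/snap/lxd/common/lxd/storage-pools/pool1/containers/container_name/backup.yaml /root/
$ lxc start container_name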

What created the backup.yaml file? As far as I know, I didn't start the container or run any lxc command that might have created this backup. Any ideas why it appeared? Is this expected behaviour?

You can safely remove that backup.yaml. Then, the container can start.

I had a typo above (that I fixed). It’s zfs umount, not plain umount.

Yup, I sorted all that, but was wondering what created that backup.yaml file.

I reran the process with another container on another host to see what happens. It looks like the backup.yaml was a one-time glitch; the process ran fine with no strays the second time.

Thread closed! Thank you again.

When you run lxc stop mycontainer, it does not unmount the container's filesystem.
By running zfs mount, we get a new (second) mount in our namespace to do our tasks easily.
The container is not running, so LXD should not mind.

Most likely the umount triggers LXD to tear down its own mount as well and to place the backup.yaml file there in case we want to recover. And because it is an outside operation, it confuses LXD.
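If you want to check what ZFS and your mount table think at any point, something like this works (dataset name from the earlier example):

$ zfs get -H -o value mounted lxd/containers/mycontainer
$ grep mycontainer /proc/self/mounts

If zfs reports "no" while the mountpoint directory still contains something (like that stray backup.yaml), the leftover is plain files sitting on the unmounted directory rather than a mount, which is exactly what produces the "directory is not empty" error on the next start.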