Error: Failed preparing container for start: Failed to run: zfs mount filesystem already mounted

I’m not able to start one of my LXC instances and I’m looking for some advice here, so that I don’t mess up the rest of my containers.

Below is the situation:

I’m using lxd from the snap.

root@dwellir2:~# snap list
Name    Version   Rev    Tracking        Publisher   Notes
core18  20210722  2128   latest/stable   canonical✓  base
core20  20210702  1081   latest/stable   canonical✓  base
lxd     4.0.7     21545  4.0/stable/…    canonical✓  -
snapd   2.51.7    13170  latest/stable   canonical✓  snapd

Trying to start the container fails:

root@dwellir2:~# lxc start juju-2f9e72-3
Error: Failed preparing container for start: Failed to run: zfs mount default/containers/juju-2f9e72-3: cannot mount 'default/containers/juju-2f9e72-3': filesystem already mounted
Try `lxc info --show-log juju-2f9e72-3` for more info

Doing as proposed:

root@dwellir2:~# lxc info --show-log juju-2f9e72-3
Name: juju-2f9e72-3
Location: none
Remote: unix://
Architecture: x86_64
Created: 2021/09/14 20:17 UTC
Status: Stopped
Type: container
Profiles: default, juju-dwellir-kusama-rpc-2

Log:

root@dwellir2:~#

Things I’ve checked:

  • No disk errors.
  • No memory or CPU errors.

What should I do to try to bring my container(s) back (i.e. start them) from this state?

I had the same problem a few days ago. I had to reboot the whole system to get past it. :frowning:

Can you show output of:

sudo nsenter --mount=/run/snapd/ns/lxd.mnt -- cat /proc/mounts

Also:

sudo snap info lxd
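
If the full mount table is long, filtering it for the affected container is usually enough (the container name here is the one from your error; the grep runs on the host side of the pipe, which is fine since only the output crosses the namespace boundary):

sudo nsenter --mount=/run/snapd/ns/lxd.mnt -- cat /proc/mounts | grep juju-2f9e72-3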


default/containers/juju-443eaf-0 /var/snap/lxd/common/shmounts/storage-pools/default/containers/juju-443eaf-0 zfs rw,xattr,posixacl 0 0
default/containers/juju-2f9e72-1 /var/snap/lxd/common/shmounts/storage-pools/default/containers/juju-2f9e72-1 zfs rw,xattr,posixacl 0 0
default/containers/juju-443eaf-1 /var/snap/lxd/common/shmounts/storage-pools/default/containers/juju-443eaf-1 zfs rw,xattr,posixacl 0 0
default/containers/juju-58c5f4-0 /var/snap/lxd/common/shmounts/storage-pools/default/containers/juju-58c5f4-0 zfs rw,xattr,posixacl 0 0
default/containers/juju-2f9e72-3 /var/snap/lxd/common/shmounts/storage-pools/default/containers/juju-2f9e72-3 zfs rw,xattr,posixacl 0 0
default/containers/juju-2f9e72-4 /var/snap/lxd/common/shmounts/storage-pools/default/containers/juju-2f9e72-4 zfs rw,xattr,posixacl 0 0
default/containers/juju-45c61b-0 /var/snap/lxd/common/lxd/storage-pools/default/containers/juju-45c61b-0 zfs rw,xattr,posixacl 0 0
default/containers/juju-99112a-0 /var/snap/lxd/common/lxd/storage-pools/default/containers/juju-99112a-0 zfs rw,xattr,posixacl 0 0
default/containers/juju-99112a-1 /var/snap/lxd/common/lxd/storage-pools/default/containers/juju-99112a-1 zfs rw,xattr,posixacl 0 0
default/containers/juju-99112a-2 /var/snap/lxd/common/lxd/storage-pools/default/containers/juju-99112a-2 zfs rw,xattr,posixacl 0 0
/dev/loop0 /var/snap/lxd/common/lxd/storage-pools/juju-btrfs btrfs rw,relatime,ssd,space_cache,user_subvol_rm_allowed,subvolid=5,subvol=/ 0 0

root@dwellir2:~# snap info lxd
name: lxd
summary: LXD - container and VM manager
publisher: Canonical✓
store-url: https://snapcraft.io/lxd
contact: https://github.com/lxc/lxd/issues
license: unset
description: |
LXD is a system container and virtual machine manager.

It offers a simple CLI and REST API to manage local or remote instances,
uses an image based workflow and support for a variety of advanced features.

Images are available for all Ubuntu releases and architectures as well
as for a wide number of other Linux distributions. Existing
integrations with many deployment and operation tools, makes it work
just like a public cloud, except everything is under your control.

LXD containers are lightweight, secure by default and a great
alternative to virtual machines when running Linux on Linux.

LXD virtual machines are modern and secure, using UEFI and secure-boot
by default and a great choice when a different kernel or operating
system is needed.

With clustering, up to 50 LXD servers can be easily joined and managed
together with the same tools and APIs and without needing any external
dependencies.

Supported configuration options for the snap (snap set lxd [<key>=<value>...]):

- ceph.builtin: Use snap-specific Ceph configuration [default=false]
- ceph.external: Use the system's ceph tools (ignores ceph.builtin) [default=false]
- criu.enable: Enable experimental live-migration support [default=false]
- daemon.debug: Increase logging to debug level [default=false]
- daemon.group: Set group of users that can interact with LXD [default=lxd]
- daemon.preseed: Pass a YAML configuration to `lxd init` on initial start
- daemon.syslog: Send LXD log events to syslog [default=false]
- lvm.external: Use the system's LVM tools [default=false]
- lxcfs.pidfd: Start per-container process tracking [default=false]
- lxcfs.loadavg: Start tracking per-container load average [default=false]
- lxcfs.cfs: Consider CPU shares for CPU usage [default=false]
- openvswitch.builtin: Run a snap-specific OVS daemon [default=false]
- shiftfs.enable: Enable shiftfs support [default=auto]

For system-wide configuration of the CLI, place your configuration in
/var/snap/lxd/common/global-conf/ (config.yml and servercerts)
commands:
  - lxd.benchmark
  - lxd.buginfo
  - lxd.check-kernel
  - lxd.lxc
  - lxd.lxc-to-lxd
  - lxd
  - lxd.migrate
services:
  lxd.activate: oneshot, enabled, inactive
  lxd.daemon:   simple, enabled, active
snap-id:      J60k4JY0HppjwOjW8dZdYc8obXKxujRu
tracking:     4.0/stable/ubuntu-20.04
refresh-date: 12 days ago, at 01:53 UTC
channels:
  latest/stable:    4.18        2021-09-13 (21497) 75MB -
  latest/candidate: 4.18        2021-09-14 (21554) 75MB -
  latest/beta:      ↑
  latest/edge:      git-2a036b5 2021-09-24 (21572) 76MB -
  4.18/stable:      4.18        2021-09-13 (21497) 75MB -
  4.18/candidate:   4.18        2021-09-15 (21554) 75MB -
  4.18/beta:        ↑
  4.18/edge:        ↑
  4.17/stable:      4.17        2021-08-26 (21390) 72MB -
  4.17/candidate:   4.17        2021-08-26 (21390) 72MB -
  4.17/beta:        ↑
  4.17/edge:        ↑
  4.16/stable:      4.16        2021-07-19 (21039) 71MB -
  4.16/candidate:   4.16        2021-08-02 (21198) 71MB -
  4.16/beta:        ↑
  4.16/edge:        ↑
  4.0/stable:       4.0.7       2021-09-14 (21545) 70MB -
  4.0/candidate:    4.0.7       2021-09-13 (21545) 70MB -
  4.0/beta:         ↑
  4.0/edge:         git-08134fb 2021-09-25 (21577) 70MB -
  3.0/stable:       3.0.4       2019-10-10 (11348) 55MB -
  3.0/candidate:    3.0.4       2019-10-10 (11348) 55MB -
  3.0/beta:         ↑
  3.0/edge:         git-81b81b9 2019-10-10 (11362) 55MB -
  2.0/stable:       2.0.12      2020-08-18 (16879) 38MB -
  2.0/candidate:    2.0.12      2021-03-22 (19859) 39MB -
  2.0/beta:         ↑
  2.0/edge:         git-82c7d62 2021-03-22 (19857) 39MB -
installed:          4.0.7                  (21545) 70MB -

I think that an automatic upgrade of the host might have caused this, leaving the host out of sync with the snap version of LXD or something like that.

I suspect we need to turn automatic upgrades off, and we probably also need to reboot the server.
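
If it does come down to deferring automatic snap refreshes, one option (the exact behaviour depends on the snapd version, and the timestamp below is only an example) is the system-wide refresh.hold setting, which takes an RFC 3339 timestamp; snapd caps how far ahead the hold can be set:

sudo snap set system refresh.hold="2021-10-31T00:00:00Z"
snap get system refresh.hold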

Can you show output of:

sudo nsenter --mount=/run/snapd/ns/lxd.mnt -- ls /var/snap/lxd/common/shmounts/storage-pools/default/containers/juju-2f9e72-3

Can you try running:

sudo nsenter --mount=/run/snapd/ns/lxd.mnt -- umount /var/snap/lxd/common/shmounts/storage-pools/default/containers/juju-2f9e72-3
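
If that umount goes through, it’s probably worth confirming the mount is gone and then retrying the start (a sketch only; the dataset and container names are taken from the error earlier in the thread):

sudo nsenter --mount=/run/snapd/ns/lxd.mnt -- cat /proc/mounts | grep juju-2f9e72-3
lxc start juju-2f9e72-3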

I have the same issue here. lxd isn’t unmounting the filesystem.

container7:~$ lxc start elasticmaster-logging-1
Error: Failed preparing container for start: Failed to run: zfs mount lxd-zpool/containers/elasticmaster-logging-1: cannot mount 'lxd-zpool/containers/elasticmaster-logging-1': filesystem already mounted
Try `lxc info --show-log elasticmaster-logging-1` for more info


container7:~$ sudo nsenter --mount=/run/snapd/ns/lxd.mnt -- cat /proc/mounts | grep elasticmaster-logging-1
lxd-zpool/containers/elasticmaster-logging-1 /var/snap/lxd/common/shmounts/storage-pools/local/containers/elasticmaster-logging-1 zfs rw,xattr,posixacl 0 0

So it is correct that it is in fact mounted. It’s not a normal mountpoint so I’m not sure how to unmount it.

You can check if it’s mounted in the host’s mount namespace as well as the snap’s mount namespace, and then wherever it’s mounted you can run:

sudo umount /var/snap/lxd/common/shmounts/storage-pools/local/containers/elasticmaster-logging-1

See this post from @stgraber for more info about this issue.
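
A quick way to compare the two namespaces is to grep the mount table in each (using the same container path as above; the nsenter variant looks inside the LXD snap’s mount namespace):

grep elasticmaster-logging-1 /proc/mounts
sudo nsenter --mount=/run/snapd/ns/lxd.mnt -- grep elasticmaster-logging-1 /proc/mounts

Whichever of the two shows a matching line is the namespace the umount needs to run in.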

No luck. I also searched for and tested all occurrences of elasticmaster-logging-1 under /var/snap/lxd/, and none of them could be unmounted.

$ sudo umount /var/snap/lxd/common/shmounts/storage-pools/local/containers/elasticmaster-logging-1
[sudo] password for wayne: 
umount: /var/snap/lxd/common/shmounts/storage-pools/local/containers/elasticmaster-logging-1: no mount point specified.

What about inside the mount namespace, as mentioned here?
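
That is, something along the lines of the earlier suggestion, with your container’s path, run inside the snap’s mount namespace:

sudo nsenter --mount=/run/snapd/ns/lxd.mnt -- umount /var/snap/lxd/common/shmounts/storage-pools/local/containers/elasticmaster-logging-1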

I’m going through this thread now.

I went through that thread. Looks like a reboot is on the table. Any idea why I wouldn’t see anything in /var/snap/lxd/common/shmounts/? It seems to be part of the temporary solution there, but mine is empty. Is it because I’m running ZFS?

I found the ZFS-specific unmount option. No luck there either.

root@container7:~# zfs get -r mounted lxd-zpool |grep elasticmaster-logging-1
lxd-zpool/containers/elasticmaster-logging-1          
root@container7:~# zfs unmount lxd-zpool/containers/elasticmaster-logging-1
cannot unmount 'lxd-zpool/containers/elasticmaster-logging-1': not currently mounted

I just wanted to update that I first had to purge data inside one of the containers to get ZFS usage below 90%, as the near-full pool had caused my iowait to jump high.

Once the host had calmed down, I set all containers to not start automatically (starting them all at once at boot would likely have pushed the host into an iowait problem again).
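
For reference, per-container autostart is controlled by LXD’s boot.autostart config key, so disabling it looks roughly like this for each container (illustrative, using one of the container names from this host):

lxc config set juju-2f9e72-3 boot.autostart false
lxc config get juju-2f9e72-3 boot.autostart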

I finally rebooted.

That was the way I had to get through this. Sorry for not being able to provide more details, but I had to save my workload.