Import lxc container from Crashed OS

I’m trying to import containers that live on a ZFS pool attached to a boot disk containing Ubuntu Server, but I’m probably missing something obvious. I have already looked up similar cases.
I’m running Ubuntu 20.04.

root@freenas:/var/snap/lxd/common/lxd/images# sudo lxd import Plex
Error: The instance's directory "/var/snap/lxd/common/lxd/storage-pools/default/containers/Plex" appears to be empty. Please ensure that the instance's storage volume is mounted

The mount point is not empty:

root@freenas:/var/snap/lxd/common/lxd/images# ls -l /var/snap/lxd/common/lxd/storage-pools/default/containers/Plex/
total 9
-r--------  1 root root 2260 Jun 10 17:26 backup.yaml
-rw-r--r--  1 root root 1049 May 30 01:16 metadata.yaml
drwxr-xr-x 18 root root   24 Jun 10 17:26 rootfs
drwxr-xr-x  2 root root    7 May 30 01:16 templates
root@freenas:/var/snap/lxd/common/lxd/images#

The mount points:

root@freenas:/var/snap/lxd/common/lxd/images# zfs list
NAME                                                                                         USED  AVAIL     REFER  MOUNTPOINT
mydata                                                                                       641G  13.4T      631G  /mydata
mydata/LXD                                                                                  10.2G  13.4T       24K  none
mydata/LXD/containers                                                                       7.65G  13.4T       24K  none
mydata/LXD/containers/OsSec                                                                  518M  13.4T     1.33G  /var/snap/lxd/common/lxd/storage-pools/default/containers/OsSec
mydata/LXD/containers/Plex                                                                  6.97G  13.4T     7.79G  /var/snap/lxd/common/lxd/storage-pools/default/containers/Plex
mydata/LXD/containers/Tor                                                                    170M  13.4T      584M  /var/snap/lxd/common/lxd/storage-pools/default/containers/Tor
mydata/LXD/custom                                                                             24K  13.4T       24K  none
mydata/LXD/deleted                                                                          1.26G  13.4T       24K  none
mydata/LXD/deleted/containers                                                                 24K  13.4T       24K  none
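
To double-check from the ZFS side that the dataset really is mounted at that path on the host, something like the following should work (a minimal sketch; the dataset name is taken from the listing above, output not shown here):

zfs get mounted,mountpoint mydata/LXD/containers/Plex
findmnt /var/snap/lxd/common/lxd/storage-pools/default/containers/Plex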

cat /var/snap/lxd/common/lxd/storage-pools/default/containers/Plex/backup.yaml

container:
  architecture: x86_64
  config:
    boot.autostart: "true"
    image.architecture: amd64
    image.description: ubuntu 20.04 LTS amd64 (release) (20200529.1)
    image.label: release
    image.os: ubuntu
    image.release: focal
    image.serial: "20200529.1"
    image.type: squashfs
    image.version: "20.04"
    raw.apparmor: mount fstype=nfs,
    security.privileged: "true"
    volatile.base_image: 42424696bc6fc56ebff11508b597946b8f451f002a780bf0a943051130712a8f
    volatile.eth0.host_name: veth84846129
    volatile.eth0.hwaddr: 4e:08:39:2f:0f:d0
    volatile.eth0.name: eth0
    volatile.idmap.base: "0"
    volatile.idmap.current: '[]'
    volatile.idmap.next: '[]'
    volatile.last_state.idmap: '[]'
    volatile.last_state.power: STOPPED
  devices: {}
  ephemeral: false
  profiles:
  - lanprofile
  stateful: false
  description: ""
  created_at: 2020-06-04T18:25:56.744383356+02:00
  expanded_config:
    boot.autostart: "true"
    image.architecture: amd64
    image.description: ubuntu 20.04 LTS amd64 (release) (20200529.1)
    image.label: release
    image.os: ubuntu
    image.release: focal
    image.serial: "20200529.1"
    image.type: squashfs
    image.version: "20.04"
    raw.apparmor: mount fstype=nfs,
    security.privileged: "true"
    volatile.base_image: 42424696bc6fc56ebff11508b597946b8f451f002a780bf0a943051130712a8f
    volatile.eth0.host_name: veth84846129
    volatile.eth0.hwaddr: 4e:08:39:2f:0f:d0
    volatile.eth0.name: eth0
    volatile.idmap.base: "0"
    volatile.idmap.current: '[]'
    volatile.idmap.next: '[]'
    volatile.last_state.idmap: '[]'
    volatile.last_state.power: STOPPED
  expanded_devices:
    eth0:
      nictype: bridged
      parent: br0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: Plex
  status: Stopped
  status_code: 102
  last_used_at: 2020-06-10T07:01:04.937544907+02:00
  location: none
  type: container
snapshots: []
pool:
  config:
    source: mydata/LXD
    volatile.initial_source: mydata/LXD
    zfs.pool_name: mydata/LXD
  description: ""
  name: default
  driver: zfs
  used_by: []
  status: Created
  locations:
  - none
volume:
  config: {}
  description: ""
  name: Plex
  type: container
  used_by: []
  location: none

Can someone help me out?
This is a privileged container.

Geert

I am using LXD 4.2, and I have confirmed that a container used on one host can be imported on another host and run normally.

If the ZFS pool is mydata, then mydata presumably showed up in lxc storage list on the crashed host, didn't it?

Looking at what you posted, are you trying to import Plex from the existing mydata storage pool into the default pool?

The procedure below assumes the following conditions and imports into the same storage pool (the corresponding check commands are sketched after this list):

  1. The mydata storage pool was exported with zpool export on the crashed host and imported with zpool import on the newly installed host.
  2. After running zpool import mydata on the new host, lxc list and lxc storage list should produce normal output, except that mydata does not appear as a storage pool yet.
  3. On the new host, ps -ef | grep -i daemon.start should show only one process.
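
A minimal sketch of those checks, assuming the pool is named mydata as above (your output will differ):

;# zpool import mydata              # bring the crashed host's pool online on the new host
;# zpool list mydata                # confirm the pool imported cleanly
;# lxc storage list                 # should work, but will not list mydata yet
;# lxc list                         # should work; the old containers are not listed yet
;# ps -ef | grep -i daemon.start    # exactly one LXD daemon.start process should appear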

Try to proceed as follows.

;# umount /var/snap/lxd/common/lxd/storage-pools/default/containers/Plex    # unmount the container's current ZFS mount on the host

;# nsenter -t $(pgrep daemon.start) -m    # enter the mount namespace of the LXD snap daemon
;# mkdir -p /var/snap/lxd/common/lxd/storage-pools/mydata/LXD/containers/Plex
;# mount -t zfs mydata/LXD/containers/Plex /var/snap/lxd/common/lxd/storage-pools/mydata/LXD/containers/Plex    # mount the dataset where lxd import can see it
;# exit    # leave the namespace
;# lxd import --force Plex
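
If the import succeeds, the container should show up again and can be started as usual (sketch only, not output from this system):

;# lxc list
;# lxc start Plex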

Thank you, Lee, for the answer.

It looked like a promising and useful answer, but it failed on my Ubuntu OS.

After entering the namespace with nsenter, I’m unable to mount the containers.
Let me explain what is happening.

root@freenas:/usr/sbin# zfs
-bash: zfs: command not found

So I found some zfs installs inside the LXD snap and tried those:

root@freenas:/usr/sbin# /snap/lxd/15564/zfs-0.8/bin/zfs
/snap/lxd/15564/zfs-0.8/bin/zfs: error while loading shared libraries: libnvpair.so.1: cannot open shared object file: No such file or directory
root@freenas:/usr/sbin#

Inside the namespace there is no zfs command in /usr/sbin anymore.
The zfs-0.6 through zfs-0.8 binaries all have the same issue.

So it seems virtually impossible to mount the ZFS dataset from within the namespace. I guess this is an Ubuntu issue rather than an LXD one.
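
For what it’s worth, since the failure is a shared-library lookup (libnvpair.so.1 not found), one thing that might work is pointing the loader at the libraries bundled in the snap before running its zfs binary. The paths below are assumptions based on the snap revision in the error above, so I would locate the library with find first:

# locate the bundled library (snap revision 15564 taken from the error message)
find /snap/lxd/15564 -name 'libnvpair.so.1'

# assuming it sits in a lib directory next to the zfs-0.8 binaries (an assumption):
export LD_LIBRARY_PATH=/snap/lxd/15564/zfs-0.8/lib
/snap/lxd/15564/zfs-0.8/bin/zfs list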

I found this reference on GitHub.

As I understand it, one strategy could be to recompile zfs inside the namespace, if that is even possible.
Does anybody have a suggestion?
Dropping Ubuntu is also a possibility, but in that case I would rather recreate the containers from scratch.

Geert