Two containers start but the other two don't after upgrading to Ubuntu 20.04

After upgrading my Ubuntu to 20.04, lxd didn't start, so I removed two files from /…/global and now it's OK.
Now I can see the container list, but two of the containers start and two don't.
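As an aside: before deleting anything from the LXD database directory, it is worth keeping a copy of it first. A minimal sketch, assuming the usual snap LXD path:

# back up the LXD database directory before touching it (path assumed for the snap package)
cp -a /var/snap/lxd/common/lxd/database /root/lxd-database-backup-$(date +%F)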

Here is one that doesn't start:

root@jp-laptop:~# lxc list jp-pss
+----------------+---------+------+------+-----------+-----------+
|      NAME      |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+----------------+---------+------+------+-----------+-----------+
| jp-pss | STOPPED |      |      | CONTAINER | 0         |
+----------------+---------+------+------+-----------+-----------+
root@jp-laptop:~# 
root@jp-laptop:~# lxc start jp-pss
Error: Failed to run: /snap/lxd/current/bin/lxd forkstart jp-pss
 /var/snap/lxd/common/lxd/containers /var/snap/lxd/common/lxd/logs/jp-pss/lxc.conf: 
Try `lxc info --show-log jp-pss` for more info
root@jp-laptop:~# lxc info --show-log jp-pss
Name: jp-pss
Location: none
Remote: unix://
Architecture: x86_64
Created: 2020/01/10 08:52 UTC
Status: Stopped
Type: container
Profiles: default

Log:

lxc jp-pss 20200427101700.195 ERROR    cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1143 - File exists - Failed to create directory "/sys/fs/cgroup/cpuset//lxc.monitor.jp-pss"
lxc jp-pss 20200427101700.197 ERROR    cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1143 - File exists - Failed to create directory "/sys/fs/cgroup/cpuset//lxc.payload.jp-pss"
lxc jp-pss 20200427101700.204 WARN     cgfsng - cgroups/cgfsng.c:fchowmodat:1455 - No such file or directory - Failed to fchownat(17, memory.oom.group, 1000000000, 0, AT_EMPTY_PATH | AT_SYMLINK_NOFOLLOW )
lxc jp-pss 20200427101700.285 ERROR    dir - storage/dir.c:dir_mount:152 - No such file or directory - Failed to mount "/var/snap/lxd/common/lxd/containers/jp-pss/rootfs" on "/var/snap/lxd/common/lxc/"
lxc jp-pss 20200427101700.285 ERROR    conf - conf.c:lxc_mount_rootfs:1256 - Failed to mount rootfs "/var/snap/lxd/common/lxd/containers/jp-pss/rootfs" onto "/var/snap/lxd/common/lxc/" with options "(null)"
lxc jp-pss 20200427101700.285 ERROR    conf - conf.c:lxc_setup_rootfs_prepare_root:3178 - Failed to setup rootfs for
lxc jp-pss 20200427101700.285 ERROR    conf - conf.c:lxc_setup:3277 - Failed to setup rootfs
lxc jp-pss 20200427101700.285 ERROR    start - start.c:do_start:1231 - Failed to setup container "jp-pss"
lxc jp-pss 20200427101700.287 ERROR    sync - sync.c:__sync_wait:41 - An error occurred in another process (expected sequence number 5)
lxc jp-pss 20200427101700.295 WARN     network - network.c:lxc_delete_network_priv:3213 - Failed to rename interface with index 0 from "eth0" to its initial name "veth9a338ad4"
lxc jp-pss 20200427101700.295 ERROR    lxccontainer - lxccontainer.c:wait_on_daemonized_start:852 - Received container state "ABORTING" instead of "RUNNING"
lxc jp-pss 20200427101700.295 ERROR    start - start.c:__lxc_start:1952 - Failed to spawn container "jp-pss"
lxc jp-pss 20200427101700.295 WARN     start - start.c:lxc_abort:1025 - No such process - Failed to send SIGKILL via pidfd 30 for process 29151
lxc 20200427101700.488 WARN     commands - commands.c:lxc_cmd_rsp_recv:122 - Connection reset by peer - Failed to receive response for command "get_state"
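The key failure in that log is the rootfs mount: LXD cannot mount the container's ZFS dataset onto the rootfs path. A quick way to check whether the dataset reports itself as mounted is a zfs get, a sketch assuming the pool and dataset names that show up later in the thread:

# does ZFS think the container's dataset is mounted, and where? (dataset name assumed)
zfs get mounted,mountpoint default/containers/jp-pss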

Can you show the output of:

lxc storage list
lxc storage volume list default

Thank you @stgraber for the answer…
The first two containers start OK.

root@jp-laptop:~# lxc storage list
+---------+-------------+--------+---------+---------+
|  NAME   | DESCRIPTION | DRIVER | SOURCE  | USED BY |
+---------+-------------+--------+---------+---------+
| default |             | zfs    | default | 7       |
+---------+-------------+--------+---------+---------+
root@jp-laptop:~# lxc storage volume list default
+----------------------+------------------------------------------------------------------+-------------+---------+
|         TYPE         |                               NAME                               | DESCRIPTION | USED BY |
+----------------------+------------------------------------------------------------------+-------------+---------+
| container            | gui1804                                                          |             | 1       |
+----------------------+------------------------------------------------------------------+-------------+---------+
| container            | j2004                                                            |             | 1       |
+----------------------+------------------------------------------------------------------+-------------+---------+
| container            | jp-pss                                                           |             | 1       |
+----------------------+------------------------------------------------------------------+-------------+---------+
| container            | jp-u1804                                                         |             | 1       |
+----------------------+------------------------------------------------------------------+-------------+---------+
| container (snapshot) | jp-u1804/snapsh-po_qhm                                           |             | 1       |
+----------------------+------------------------------------------------------------------+-------------+---------+
| image                | f71e76edd33548e6c898fad9778997f84d494ad779761ce34826c41031c151fc |             | 1       |
+----------------------+------------------------------------------------------------------+-------------+---------+

Can you show:

  • zfs list -t all (apt install --no-install-recommends zfsutils-linux if not present on your system)
  • ls -lh /var/snap/lxd/common/lxd/containers/
  • ls -lh /var/snap/lxd/common/lxd/storage-pools/default/containers/

I checked it before and it looked OK, in my opinion :grin:
Here you are:

root@jp-laptop:~# zfs list -t all
NAME                                                                                               USED  AVAIL     REFER  MOUNTPOINT
default                                                                                           15,0G  92,5G       96K  none
default/containers                                                                                8,20G  92,5G       96K  none
default/containers/gui1804                                                                        3,55G  92,5G     3,75G  /var/snap/lxd/common/lxd/storage-pools/default/containers/gui1804
default/containers/j2004                                                                           102M  92,5G      284M  /var/snap/lxd/common/lxd/storage-pools/default/containers/j2004
default/containers/jp-pss                                                                   92K  92,5G       92K  /var/snap/lxd/common/lxd/storage-pools/default/containers/jp-pss
default/containers/jp-u1804                                                      4,55G  92,5G      100K  /var/snap/lxd/common/lxd/storage-pools/default/containers/jp-u1804
default/containers/jp-u1804@snapshot-snapsh-po_qhm                               4,55G      -     4,92G  -
default/custom                                                                                      96K  92,5G       96K  none
default/deleted                                                                                   1,06G  92,5G       96K  none
default/deleted/containers                                                                          96K  92,5G       96K  none
default/deleted/custom                                                                              96K  92,5G       96K  none
default/deleted/images                                                                            1,06G  92,5G       96K  none
default/deleted/images/119fc8bbd1876b4ec6cb42c88ba23be47e4232bea6759a0e6ded1cc335f73b10            242M  92,5G      242M  /var/snap/lxd/common/lxd/storage-pools/default/images/119fc8bbd1876b4ec6cb42c88ba23be47e4232bea6759a0e6ded1cc335f73b10
default/deleted/images/119fc8bbd1876b4ec6cb42c88ba23be47e4232bea6759a0e6ded1cc335f73b10@readonly     8K      -      242M  -
default/deleted/images/368bb7174b679ece9bd0dfe2ab953c02c47ff4451736cb255655ba8348f17bc0            420M  92,5G      420M  none
default/deleted/images/368bb7174b679ece9bd0dfe2ab953c02c47ff4451736cb255655ba8348f17bc0@readonly     0B      -      420M  -
default/deleted/images/5b72cf46f628b3d60f5d99af48633539b2916993c80fc5a2323d7d841f66afbe            427M  92,5G      427M  none
default/deleted/images/5b72cf46f628b3d60f5d99af48633539b2916993c80fc5a2323d7d841f66afbe@readonly     0B      -      427M  -
default/deleted/virtual-machines                                                                    96K  92,5G       96K  none
default/images                                                                                    5,72G  92,5G       96K  none
default/images/751bac27ad889050cfbbde624fb736ddaa571ed2f507cfda9caa8cfd4ee866d3                    242M  92,5G      242M  /var/snap/lxd/common/lxd/storage-pools/default/images/751bac27ad889050cfbbde624fb736ddaa571ed2f507cfda9caa8cfd4ee866d3
default/images/f71e76edd33548e6c898fad9778997f84d494ad779761ce34826c41031c151fc                   5,48G  92,5G     5,48G  /var/snap/lxd/common/lxd/storage-pools/default/images/f71e76edd33548e6c898fad9778997f84d494ad779761ce34826c41031c151fc
default/images/f71e76edd33548e6c898fad9778997f84d494ad779761ce34826c41031c151fc@readonly             0B      -     5,48G  -
default/snapshots                                                                                  192K  92,5G       96K  none
default/snapshots/jp-u1804                                                         96K  92,5G       96K  none
default/virtual-machines                                                                            96K  92,5G       96K  none
root@jp-laptop:~# ls -lh /var/snap/lxd/common/lxd/containers/
razem 16K
lrwxrwxrwx 1 root root 65 lip 23  2019 gui1804 -> /var/snap/lxd/common/lxd/storage-pools/default/containers/gui1804
lrwxrwxrwx 1 root root 63 kwi 21 17:00 j2004 -> /var/snap/lxd/common/lxd/storage-pools/default/containers/j2004
lrwxrwxrwx 1 root root 72 sty 10 09:52 jp-pss -> /var/snap/lxd/common/lxd/storage-pools/default/containers/jp-pss
lrwxrwxrwx 1 root root 83 maj  8  2019 jp-u1804 -> /var/snap/lxd/common/lxd/storage-pools/default/containers/jp-u1804
root@jp-laptop:~# ls -lh /var/snap/lxd/common/lxd/storage-pools/default/containers/
razem 16K
d--x------ 2 root root 4,0K lip 23  2019 gui1804
d--x------ 2 root root 4,0K kwi 21 17:00 j2004
d--x------ 2 root root 4,0K sty 10 09:52 jp-pss
d--x------ 2 root root 4,0K maj  8  2019 jp-u1804
root@jp-laptop:~# 

Yeah, looks good. Can you show nsenter --mount=/run/snapd/ns/lxd.mnt cat /proc/self/mountinfo?

Yeah…
There are only two rows, and they belong to the two containers that do start:

root@jp-laptop:~# nsenter --mount=/run/snapd/ns/lxd.mnt cat /proc/self/mountinfo | grep -e gui -e 2004 -e jp
985 1676 0:128 / /var/snap/lxd/common/lxd/storage-pools/default/containers/gui1804 rw shared:534 - zfs default/containers/gui1804 rw,xattr,posixacl
4843 1676 0:141 / /var/snap/lxd/common/lxd/storage-pools/default/containers/j2004 rw shared:604 - zfs default/containers/j2004 rw,xattr,posixacl

Do you need the whole output?

No, that gives us enough information I think.

Can you do grep jp-u1804 /proc/*/mountinfo?

Sure!
It's empty:

root@jp-laptop:~# grep jp-u1804 /proc/*/mountinfo
root@jp-laptop:~# 

But as I see it, that only reflects the mounts of running processes in the containers; when I stopped the two running containers, the grep came back empty for them too.
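Side note: the LXD snap keeps its mounts in a private mount namespace, which is why the earlier checks went through nsenter; the host's own mount table will not necessarily show them. A sketch comparing the two views, reusing the paths from this thread:

# the host's own view
grep default/containers /proc/self/mountinfo
# the view from inside the LXD snap's mount namespace
nsenter --mount=/run/snapd/ns/lxd.mnt grep default/containers /proc/self/mountinfo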

Ok, can you show modinfo zfs?

With that I should be able to get you some specific commands to see what’s going on for those two containers.

root@jp-laptop:/home/jack# modinfo zfs
filename:       /lib/modules/5.4.0-26-generic/kernel/zfs/zfs.ko
version:        0.8.3-1ubuntu12
license:        CDDL
author:         OpenZFS on Linux
description:    ZFS
alias:          devname:zfs
alias:          char-major-10-249
srcversion:     4846FE465C7D89EAF09E22A
depends:        zlua,spl,znvpair,zcommon,icp,zunicode,zavl
retpoline:      Y
name:           zfs
vermagic:       5.4.0-26-generic SMP mod_unload 
sig_id:         PKCS#7
signer:         Build time autogenerated kernel key
sig_key:        2E:1C:6B:CE:DF:4D:6E:F0:5B:25:79:E8:B6:0E:F2:9A:9A:01:CB:AF
sig_hashalgo:   sha512
signature:      0F:17:68:50:D8:A5:2E:F9:E6:B8:9D:E0:BB:CE:FA:5B:23:D1:AD:23:
		1D:AA:86:89:D5:AF:08:1B:03:30:BA:48:D4:A8:A0:1A:E0:89:6D:54:
		9C:3E:4B:41:C8:07:74:3D:B9:F5:D7:12:F4:F5:18:5C:A9:69:3F:66:
		9A:AE:6A:17:CE:CE:A5:3B:4F:C2:02:BA:EA:2B:14:CB:BC:01:E9:BB:
		7C:29:3F:2A:0B:BC:EC:8C:EE:51:BA:1F:D1:C0:34:67:64:19:0E:E9:
		F4:07:53:E4:84:8B:AF:9A:CA:8D:86:9C:28:A2:A6:40:61:CE:D5:33:
		87:84:53:B8:B7:F7:C6:65:2E:95:BD:59:ED:25:13:85:6B:72:B8:F9:
		75:7A:7E:AE:A8:44:64:3E:CF:76:07:34:A1:93:61:EA:3E:94:97:29:
		C2:46:6C:C4:60:98:66:D8:AC:D1:37:43:C7:84:AA:4E:F5:E1:06:39:
		58:A6:59:57:2D:4F:0F:83:42:45:7B:FB:14:D9:BE:C0:27:D9:E8:61:
		B5:7E:36:E5:E4:72:4A:48:2E:64:F4:42:C9:7F:15:75:C9:B2:DD:34:
		E9:C1:07:83:7B:9A:2E:8B:48:50:B1:32:0D:CF:D6:29:A7:6A:43:E7:
		80:03:DA:62:7B:06:50:57:3B:9B:12:40:0D:67:CB:59:AF:EC:B6:7B:
		FA:D4:62:F7:D3:FF:EF:CA:01:C2:DF:41:26:A6:C4:7B:43:79:B4:09:
		10:FC:33:12:B1:C6:03:A4:27:DB:E3:62:24:B0:05:7C:76:6A:FD:F9:
		53:6D:66:F4:EF:AD:78:A1:E0:2D:2C:AE:3A:85:D5:E4:2E:13:CF:F6:
		AE:0A:B0:40:09:54:B0:E5:21:BD:B1:26:13:39:31:5C:FC:3B:B6:83:
		DF:0C:92:4A:12:89:20:22:B6:86:DB:1D:DC:9A:33:3B:78:B4:23:6A:
		B6:B0:63:34:49:79:6C:0F:B1:59:D4:40:BD:C5:D3:8C:78:31:82:5A:
		DB:84:79:46:75:7E:C5:BD:48:BC:BD:68:E3:B3:6D:02:4A:3D:63:95:
		44:CB:66:EB:B2:80:4E:92:14:20:50:C4:AD:ED:1B:39:68:83:EF:F3:
		C7:FF:B8:5E:30:DF:96:C6:5F:8C:29:48:91:9E:D6:30:55:6D:3B:4E:
		97:88:79:BD:D4:84:DE:19:4F:26:FB:15:D0:76:32:C8:0E:66:0A:2D:
		43:50:D8:EF:EA:FA:87:54:31:72:CA:34:95:46:59:09:56:9F:59:06:
		2E:AC:5C:40:B3:2A:A4:7B:F5:04:DD:84:C0:36:C0:42:2F:E5:90:78:
		32:39:3B:F7:F1:87:37:99:F2:6E:88:D2
parm:           zvol_inhibit_dev:Do not create zvol device nodes (uint)
parm:           zvol_major:Major number for zvol device (uint)
parm:           zvol_threads:Max number of threads to handle I/O requests (uint)
parm:           zvol_request_sync:Synchronously handle bio requests (uint)
parm:           zvol_max_discard_blocks:Max number of blocks to discard (ulong)
parm:           zvol_prefetch_bytes:Prefetch N bytes at zvol start+end (uint)
parm:           zvol_volmode:Default volmode property value (uint)
parm:           zfs_key_max_salt_uses:Max number of times a salt value can be used for generating encryption keys before it is rotated (ulong)
parm:           zio_slow_io_ms:Max I/O completion time (milliseconds) before marking it as slow (int)
parm:           zio_requeue_io_start_cut_in_line:Prioritize requeued I/O (int)
parm:           zfs_sync_pass_deferred_free:Defer frees starting in this pass (int)
parm:           zfs_sync_pass_dont_compress:Don't compress starting in this pass (int)
parm:           zfs_sync_pass_rewrite:Rewrite new bps starting in this pass (int)
parm:           zio_dva_throttle_enabled:Throttle block allocations in the ZIO pipeline (int)
parm:           zio_deadman_log_all:Log all slow ZIOs, not just those with vdevs (int)
parm:           zfs_commit_timeout_pct:ZIL block open timeout percentage (int)
parm:           zil_replay_disable:Disable intent logging replay (int)
parm:           zil_nocacheflush:Disable ZIL cache flushes (int)
parm:           zil_slog_bulk:Limit in bytes slog sync writes per commit (ulong)
parm:           zil_maxblocksize:Limit in bytes of ZIL log block size (int)
parm:           zfs_object_mutex_size:Size of znode hold array (uint)
parm:           zfs_unlink_suspend_progress:Set to prevent async unlinks (debug - leaks space into the unlinked set) (int)
parm:           zfs_delete_blocks:Delete files larger than N blocks async (ulong)
parm:           zfs_read_chunk_size:Bytes to read per chunk (ulong)
parm:           zfs_immediate_write_sz:Largest data block to write to zil (long)
parm:           zfs_dbgmsg_enable:Enable ZFS debug message log (int)
parm:           zfs_dbgmsg_maxsize:Maximum ZFS debug log size (int)
parm:           zfs_admin_snapshot:Enable mkdir/rmdir/mv in .zfs/snapshot (int)
parm:           zfs_expire_snapshot:Seconds to expire .zfs/snapshot (int)
parm:           zfs_lua_max_instrlimit:Max instruction limit that can be specified for a channel program (ulong)
parm:           zfs_lua_max_memlimit:Max memory limit that can be specified for a channel program (ulong)
parm:           zap_iterate_prefetch:When iterating ZAP object, prefetch it (int)
parm:           zfs_trim_extent_bytes_max:Max size of TRIM commands, larger will be split (uint)
parm:           zfs_trim_extent_bytes_min:Min size of TRIM commands, smaller will be skipped (uint)
parm:           zfs_trim_metaslab_skip:Skip metaslabs which have never been initialized (uint)
parm:           zfs_trim_txg_batch:Min number of txgs to aggregate frees before issuing TRIM (uint)
parm:           zfs_trim_queue_limit:Max queued TRIMs outstanding per leaf vdev (uint)
parm:           zfs_removal_ignore_errors:Ignore hard IO errors when removing device (int)
parm:           zfs_remove_max_segment:Largest contiguous segment to allocate when removing device (int)
parm:           vdev_removal_max_span:Largest span of free chunks a remap segment can span (int)
parm:           zfs_removal_suspend_progress:Pause device removal after this many bytes are copied (debug use only - causes removal to hang) (int)
parm:           zfs_vdev_raidz_impl:Select raidz implementation.
parm:           zfs_vdev_aggregation_limit:Max vdev I/O aggregation size (int)
parm:           zfs_vdev_aggregation_limit_non_rotating:Max vdev I/O aggregation size for non-rotating media (int)
parm:           zfs_vdev_aggregate_trim:Allow TRIM I/O to be aggregated (int)
parm:           zfs_vdev_read_gap_limit:Aggregate read I/O over gap (int)
parm:           zfs_vdev_write_gap_limit:Aggregate write I/O over gap (int)
parm:           zfs_vdev_max_active:Maximum number of active I/Os per vdev (int)
parm:           zfs_vdev_async_write_active_max_dirty_percent:Async write concurrency max threshold (int)
parm:           zfs_vdev_async_write_active_min_dirty_percent:Async write concurrency min threshold (int)
parm:           zfs_vdev_async_read_max_active:Max active async read I/Os per vdev (int)
parm:           zfs_vdev_async_read_min_active:Min active async read I/Os per vdev (int)
parm:           zfs_vdev_async_write_max_active:Max active async write I/Os per vdev (int)
parm:           zfs_vdev_async_write_min_active:Min active async write I/Os per vdev (int)
parm:           zfs_vdev_initializing_max_active:Max active initializing I/Os per vdev (int)
parm:           zfs_vdev_initializing_min_active:Min active initializing I/Os per vdev (int)
parm:           zfs_vdev_removal_max_active:Max active removal I/Os per vdev (int)
parm:           zfs_vdev_removal_min_active:Min active removal I/Os per vdev (int)
parm:           zfs_vdev_scrub_max_active:Max active scrub I/Os per vdev (int)
parm:           zfs_vdev_scrub_min_active:Min active scrub I/Os per vdev (int)
parm:           zfs_vdev_sync_read_max_active:Max active sync read I/Os per vdev (int)
parm:           zfs_vdev_sync_read_min_active:Min active sync read I/Os per vdev (int)
parm:           zfs_vdev_sync_write_max_active:Max active sync write I/Os per vdev (int)
parm:           zfs_vdev_sync_write_min_active:Min active sync write I/Os per vdev (int)
parm:           zfs_vdev_trim_max_active:Max active trim/discard I/Os per vdev (int)
parm:           zfs_vdev_trim_min_active:Min active trim/discard I/Os per vdev (int)
parm:           zfs_vdev_queue_depth_pct:Queue depth percentage for each top-level vdev (int)
parm:           zfs_vdev_mirror_rotating_inc:Rotating media load increment for non-seeking I/O's (int)
parm:           zfs_vdev_mirror_rotating_seek_inc:Rotating media load increment for seeking I/O's (int)
parm:           zfs_vdev_mirror_rotating_seek_offset:Offset in bytes from the last I/O which triggers a reduced rotating media seek increment (int)
parm:           zfs_vdev_mirror_non_rotating_inc:Non-rotating media load increment for non-seeking I/O's (int)
parm:           zfs_vdev_mirror_non_rotating_seek_inc:Non-rotating media load increment for seeking I/O's (int)
parm:           zfs_initialize_value:Value written during zpool initialize (ulong)
parm:           zfs_condense_indirect_vdevs_enable:Whether to attempt condensing indirect vdev mappings (int)
parm:           zfs_condense_min_mapping_bytes:Minimum size of vdev mapping to condense (ulong)
parm:           zfs_condense_max_obsolete_bytes:Minimum size obsolete spacemap to attempt condensing (ulong)
parm:           zfs_condense_indirect_commit_entry_delay_ms:Delay while condensing vdev mapping (int)
parm:           zfs_reconstruct_indirect_combinations_max:Maximum number of combinations when reconstructing split segments (int)
parm:           zfs_vdev_scheduler:I/O scheduler
parm:           zfs_vdev_cache_max:Inflate reads small than max (int)
parm:           zfs_vdev_cache_size:Total size of the per-disk cache (int)
parm:           zfs_vdev_cache_bshift:Shift size to inflate reads too (int)
parm:           zfs_vdev_default_ms_count:Target number of metaslabs per top-level vdev (int)
parm:           zfs_vdev_min_ms_count:Minimum number of metaslabs per top-level vdev (int)
parm:           zfs_vdev_ms_count_limit:Practical upper limit of total metaslabs per top-level vdev (int)
parm:           zfs_slow_io_events_per_second:Rate limit slow IO (delay) events to this many per second (uint)
parm:           zfs_checksum_events_per_second:Rate limit checksum events to this many checksum errors per second (do not set below zedthreshold). (uint)
parm:           zfs_scan_ignore_errors:Ignore errors during resilver/scrub (int)
parm:           vdev_validate_skip:Bypass vdev_validate() (int)
parm:           zfs_nocacheflush:Disable cache flushes (int)
parm:           zfs_txg_timeout:Max seconds worth of delta per txg (int)
parm:           zfs_read_history:Historical statistics for the last N reads (int)
parm:           zfs_read_history_hits:Include cache hits in read history (int)
parm:           zfs_txg_history:Historical statistics for the last N txgs (int)
parm:           zfs_multihost_history:Historical statistics for last N multihost writes (int)
parm:           zfs_flags:Set additional debugging flags (uint)
parm:           zfs_recover:Set to attempt to recover from fatal errors (int)
parm:           zfs_free_leak_on_eio:Set to ignore IO errors during free and permanently leak the space (int)
parm:           zfs_deadman_synctime_ms:Pool sync expiration time in milliseconds
parm:           zfs_deadman_ziotime_ms:IO expiration time in milliseconds
parm:           zfs_deadman_checktime_ms:Dead I/O check interval in milliseconds (ulong)
parm:           zfs_deadman_enabled:Enable deadman timer (int)
parm:           zfs_deadman_failmode:Failmode for deadman timer
parm:           spa_asize_inflation:SPA size estimate multiplication factor (int)
parm:           spa_slop_shift:Reserved free space in pool
parm:           zfs_ddt_data_is_special:Place DDT data into the special class (int)
parm:           zfs_user_indirect_is_special:Place user data indirect blocks into the special class (int)
parm:           zfs_special_class_metadata_reserve_pct:Small file blocks in special vdevs depends on this much free space available (int)
parm:           spa_config_path:SPA config file (/etc/zfs/zpool.cache) (charp)
parm:           zfs_autoimport_disable:Disable pool import at module load (int)
parm:           zfs_spa_discard_memory_limit:Maximum memory for prefetching checkpoint space map per top-level vdev while discarding checkpoint (ulong)
parm:           spa_load_verify_shift:log2(fraction of arc that can be used by inflight I/Os when verifying pool during import (int)
parm:           spa_load_verify_metadata:Set to traverse metadata on pool import (int)
parm:           spa_load_verify_data:Set to traverse data on pool import (int)
parm:           spa_load_print_vdev_tree:Print vdev tree to zfs_dbgmsg during pool import (int)
parm:           zio_taskq_batch_pct:Percentage of CPUs to run an IO worker thread (uint)
parm:           zfs_max_missing_tvds:Allow importing pool with up to this number of missing top-level vdevs (in read-only mode) (ulong)
parm:           zfs_multilist_num_sublists:Number of sublists used in each multilist (int)
parm:           zfs_multihost_fail_intervals:Max allowed period without a successful mmp write (uint)
parm:           zfs_multihost_interval:Milliseconds between mmp writes to each leaf
parm:           zfs_multihost_import_intervals:Number of zfs_multihost_interval periods to wait for activity (uint)
parm:           metaslab_aliquot:allocation granularity (a.k.a. stripe size) (ulong)
parm:           metaslab_debug_load:load all metaslabs when pool is first opened (int)
parm:           metaslab_debug_unload:prevent metaslabs from being unloaded (int)
parm:           metaslab_preload_enabled:preload potential metaslabs during reassessment (int)
parm:           zfs_mg_noalloc_threshold:percentage of free space for metaslab group to allow allocation (int)
parm:           zfs_mg_fragmentation_threshold:fragmentation for metaslab group to allow allocation (int)
parm:           zfs_metaslab_fragmentation_threshold:fragmentation for metaslab to allow allocation (int)
parm:           metaslab_fragmentation_factor_enabled:use the fragmentation metric to prefer less fragmented metaslabs (int)
parm:           metaslab_lba_weighting_enabled:prefer metaslabs with lower LBAs (int)
parm:           metaslab_bias_enabled:enable metaslab group biasing (int)
parm:           zfs_metaslab_segment_weight_enabled:enable segment-based metaslab selection (int)
parm:           zfs_metaslab_switch_threshold:segment-based metaslab selection maximum buckets before switching (int)
parm:           metaslab_force_ganging:blocks larger than this size are forced to be gang blocks (ulong)
parm:           metaslab_df_max_search:max distance (bytes) to search forward before using size tree (int)
parm:           metaslab_df_use_largest_segment:when looking in size tree, use largest segment instead of exact fit (int)
parm:           zfs_zevent_len_max:Max event queue length (int)
parm:           zfs_zevent_cols:Max event column width (int)
parm:           zfs_zevent_console:Log events to the console (int)
parm:           zfs_scan_vdev_limit:Max bytes in flight per leaf vdev for scrubs and resilvers (ulong)
parm:           zfs_scrub_min_time_ms:Min millisecs to scrub per txg (int)
parm:           zfs_obsolete_min_time_ms:Min millisecs to obsolete per txg (int)
parm:           zfs_free_min_time_ms:Min millisecs to free per txg (int)
parm:           zfs_resilver_min_time_ms:Min millisecs to resilver per txg (int)
parm:           zfs_scan_suspend_progress:Set to prevent scans from progressing (int)
parm:           zfs_no_scrub_io:Set to disable scrub I/O (int)
parm:           zfs_no_scrub_prefetch:Set to disable scrub prefetching (int)
parm:           zfs_async_block_max_blocks:Max number of blocks freed in one txg (ulong)
parm:           zfs_free_bpobj_enabled:Enable processing of the free_bpobj (int)
parm:           zfs_scan_mem_lim_fact:Fraction of RAM for scan hard limit (int)
parm:           zfs_scan_issue_strategy:IO issuing strategy during scrubbing. 0 = default, 1 = LBA, 2 = size (int)
parm:           zfs_scan_legacy:Scrub using legacy non-sequential method (int)
parm:           zfs_scan_checkpoint_intval:Scan progress on-disk checkpointing interval (int)
parm:           zfs_scan_max_ext_gap:Max gap in bytes between sequential scrub / resilver I/Os (ulong)
parm:           zfs_scan_mem_lim_soft_fact:Fraction of hard limit used as soft limit (int)
parm:           zfs_scan_strict_mem_lim:Tunable to attempt to reduce lock contention (int)
parm:           zfs_scan_fill_weight:Tunable to adjust bias towards more filled segments during scans (int)
parm:           zfs_resilver_disable_defer:Process all resilvers immediately (int)
parm:           zfs_dirty_data_max_percent:percent of ram can be dirty (int)
parm:           zfs_dirty_data_max_max_percent:zfs_dirty_data_max upper bound as % of RAM (int)
parm:           zfs_delay_min_dirty_percent:transaction delay threshold (int)
parm:           zfs_dirty_data_max:determines the dirty space limit (ulong)
parm:           zfs_dirty_data_max_max:zfs_dirty_data_max upper bound in bytes (ulong)
parm:           zfs_dirty_data_sync_percent:dirty data txg sync threshold as a percentage of zfs_dirty_data_max (int)
parm:           zfs_delay_scale:how quickly delay approaches infinity (ulong)
parm:           zfs_sync_taskq_batch_pct:max percent of CPUs that are used to sync dirty data (int)
parm:           zfs_zil_clean_taskq_nthr_pct:max percent of CPUs that are used per dp_sync_taskq (int)
parm:           zfs_zil_clean_taskq_minalloc:number of taskq entries that are pre-populated (int)
parm:           zfs_zil_clean_taskq_maxalloc:max number of taskq entries that are cached (int)
parm:           zfs_disable_ivset_guid_check:Set to allow raw receives without IVset guids (int)
parm:           zfs_max_recordsize:Max allowed record size (int)
parm:           zfs_prefetch_disable:Disable all ZFS prefetching (int)
parm:           zfetch_max_streams:Max number of streams per zfetch (uint)
parm:           zfetch_min_sec_reap:Min time before stream reclaim (uint)
parm:           zfetch_max_distance:Max bytes to prefetch per stream (default 8MB) (uint)
parm:           zfetch_array_rd_sz:Number of bytes in a array_read (ulong)
parm:           zfs_pd_bytes_max:Max number of bytes to prefetch (int)
parm:           ignore_hole_birth:Alias for send_holes_without_birth_time (int)
parm:           send_holes_without_birth_time:Ignore hole_birth txg for zfs send (int)
parm:           zfs_override_estimate_recordsize:Record size calculation override for zfs send estimates (ulong)
parm:           zfs_send_corrupt_data:Allow sending corrupt data (int)
parm:           zfs_send_queue_length:Maximum send queue length (int)
parm:           zfs_send_unmodified_spill_blocks:Send unmodified spill blocks (int)
parm:           zfs_recv_queue_length:Maximum receive queue length (int)
parm:           dmu_object_alloc_chunk_shift:CPU-specific allocator grabs 2^N objects at once (int)
parm:           zfs_nopwrite_enabled:Enable NOP writes (int)
parm:           zfs_per_txg_dirty_frees_percent:percentage of dirtied blocks from frees in one TXG (ulong)
parm:           zfs_dmu_offset_next_sync:Enable forcing txg sync to find holes (int)
parm:           dmu_prefetch_max:Limit one prefetch call to this size (int)
parm:           zfs_dedup_prefetch:Enable prefetching dedup-ed blks (int)
parm:           zfs_dbuf_state_index:Calculate arc header index (int)
parm:           dbuf_cache_max_bytes:Maximum size in bytes of the dbuf cache. (ulong)
parm:           dbuf_cache_hiwater_pct:Percentage over dbuf_cache_max_bytes when dbufs must be evicted directly. (uint)
parm:           dbuf_cache_lowater_pct:Percentage below dbuf_cache_max_bytes when the evict thread stops evicting dbufs. (uint)
parm:           dbuf_metadata_cache_max_bytes:Maximum size in bytes of the dbuf metadata cache. (ulong)
parm:           dbuf_metadata_cache_shift:int
parm:           dbuf_cache_shift:Set the size of the dbuf cache to a log2 fraction of arc size. (int)
parm:           zfs_arc_min:Min arc size
parm:           zfs_arc_max:Max arc size
parm:           zfs_arc_meta_limit:Meta limit for arc size
parm:           zfs_arc_meta_limit_percent:Percent of arc size for arc meta limit
parm:           zfs_arc_meta_min:Min arc metadata
parm:           zfs_arc_meta_prune:Meta objects to scan for prune (int)
parm:           zfs_arc_meta_adjust_restarts:Limit number of restarts in arc_adjust_meta (int)
parm:           zfs_arc_meta_strategy:Meta reclaim strategy (int)
parm:           zfs_arc_grow_retry:Seconds before growing arc size
parm:           zfs_arc_p_dampener_disable:disable arc_p adapt dampener (int)
parm:           zfs_arc_shrink_shift:log2(fraction of arc to reclaim)
parm:           zfs_arc_pc_percent:Percent of pagecache to reclaim arc to (uint)
parm:           zfs_arc_p_min_shift:arc_c shift to calc min/max arc_p
parm:           zfs_arc_average_blocksize:Target average block size (int)
parm:           zfs_compressed_arc_enabled:Disable compressed arc buffers (int)
parm:           zfs_arc_min_prefetch_ms:Min life of prefetch block in ms
parm:           zfs_arc_min_prescient_prefetch_ms:Min life of prescient prefetched block in ms (int)
parm:           l2arc_write_max:Max write bytes per interval (ulong)
parm:           l2arc_write_boost:Extra write bytes during device warmup (ulong)
parm:           l2arc_headroom:Number of max device writes to precache (ulong)
parm:           l2arc_headroom_boost:Compressed l2arc_headroom multiplier (ulong)
parm:           l2arc_feed_secs:Seconds between L2ARC writing (ulong)
parm:           l2arc_feed_min_ms:Min feed interval in milliseconds (ulong)
parm:           l2arc_noprefetch:Skip caching prefetched buffers (int)
parm:           l2arc_feed_again:Turbo L2ARC warmup (int)
parm:           l2arc_norw:No reads during writes (int)
parm:           zfs_arc_lotsfree_percent:System free memory I/O throttle in bytes
parm:           zfs_arc_sys_free:System free memory target size in bytes
parm:           zfs_arc_dnode_limit:Minimum bytes of dnodes in arc
parm:           zfs_arc_dnode_limit_percent:Percent of ARC meta buffers for dnodes (ulong)
parm:           zfs_arc_dnode_reduce_percent:Percentage of excess dnodes to try to unpin (ulong)
parm:           zfs_abd_scatter_enabled:Toggle whether ABD allocations must be linear. (int)
parm:           zfs_abd_scatter_min_size:Minimum size of scatter allocations. (int)
parm:           zfs_abd_scatter_max_order:Maximum order allocation used for a scatter ABD. (uint)

OK, with that, can you now run:

nsenter --mount=/run/snapd/ns/lxd.mnt env LD_LIBRARY_PATH=/snap/lxd/current/zfs-0.8/lib/ PATH=/snap/lxd/current/zfs-0.8/bin/:${PATH} zfs mount default/containers/jp-u1804
nsenter --mount=/run/snapd/ns/lxd.mnt grep jp-u1804 /proc/self/mountinfo
nsenter --mount=/run/snapd/ns/lxd.mnt ls -lh /var/snap/lxd/common/lxd/storage-pools/default/containers/jp-u1804

After the grep:

5033 2786 0:137 / /var/snap/lxd/common/lxd/storage-pools/default/containers/jp-u1804 rw - zfs default/containers/jp-u1804 rw,xattr,posixacl

After the ls:

nsenter --mount=/run/snapd/ns/lxd.mnt ls -lh /var/snap/lxd/common/lxd/storage-pools/default/containers/jp-u1804
total 4.5K
-r-------- 1 root root 5.2K Apr 27 12:08 backup.yaml

Hmm, ok, so that’s the issue… The container’s volume is empty.

That explains why LXD isn’t complaining about it failing to mount or anything and it just fails to start instead…

I wonder if it’s a case where the data is somehow stored outside of the container’s dataset? Though that’s pretty unlikely for ZFS…
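One way to sanity-check that is to ask ZFS how much data the datasets themselves reference, and whether they are clones of something else; a sketch, with the dataset names taken from the zfs list output above:

# used/referenced show how much data the dataset holds; origin would reveal a clone relationship
zfs get used,referenced,origin default/containers/jp-u1804
zfs get used,referenced,origin default/containers/jp-pss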

Can you check:

nsenter --mount=/run/snapd/ns/lxd.mnt umount /var/snap/lxd/common/lxd/storage-pools/default/containers/jp-u1804
nsenter --mount=/run/snapd/ns/lxd.mnt grep jp-u1804 /proc/self/mountinfo
nsenter --mount=/run/snapd/ns/lxd.mnt ls -lh /var/snap/lxd/common/lxd/storage-pools/default/containers/jp-u1804
nsenter --mount=/run/snapd/ns/lxd.mnt umount /var/snap/lxd/common/lxd/storage-pools/default/containers/jp-u1804/
nsenter --mount=/run/snapd/ns/lxd.mnt grep jp-u1804 /proc/self/mountinfo
nsenter --mount=/run/snapd/ns/lxd.mnt ls -lh /var/snap/lxd/common/lxd/storage-pools/default/containers/

The last of those gives:

total 16K
d--x------ 2 root root 4.0K Jul 23  2019 gui1804
d--x------ 2 root root 4.0K Apr 21 17:00 j2004
d--x------ 2 root root 4.0K Jan 10 09:52 jp-pss
d--x------ 2 root root 4.0K May  8  2019 jp-u1804

Sorry, that last one was meant to be:

nsenter --mount=/run/snapd/ns/lxd.mnt ls -lh /var/snap/lxd/common/lxd/storage-pools/default/containers/jp-u1804/
root@jp-laptop:~# nsenter --mount=/run/snapd/ns/lxd.mnt ls -lh /var/snap/lxd/common/lxd/storage-pools/default/containers/jp-u1804/
total 0

Ok, so by the look of it, those two containers' datasets are completely empty…

You mentioned having to do some database mangling to get things online, any idea what caused the database issue in the first place?

Did the system crash during upgrade or did you run out of disk space?

How did you perform the 20.04 upgrade? Did you use do-release-upgrade?

You mentioned having to do some database mangling to get things online, any idea what caused the database issue in the first place?

What do you mean?
I looked for a solution on the internet, so something may have happened after I ran:

snap remove lxd
snap install lxd
snap save
snap restore

How did you perform the 20.04 upgrade? Did you use do-release-upgrade?

Yes, I used do-release-upgrade and it froze on one of the packages, or at least it seemed frozen to me :slight_smile:
So I restarted the computer and finished the upgrade manually using dpkg and apt.

So, have I lost all of my containers?

What do you have in ls -lh /var/lib/snapd/snapshots/?

Running snap remove lxd may have caused considerable damage, effectively wiping those containers, but maybe you’re lucky and there’s a usable snapshot through snapd.
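If a usable snapshot set does show up there, checking and restoring it would look roughly like this; a sketch only, the set ID is whatever snap saved reports, and restoring overwrites the snap's current data:

# list the snapshot sets snapd has saved
snap saved
# restore the LXD snap's data from a given set (replace 1 with the real set ID)
snap restore 1 lxd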

One of your two containers also appears to have a snapshot; if that's reasonably recent, then you can at least restore it to that state.
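That is the jp-u1804/snapsh-po_qhm snapshot visible in the storage volume list; rolling the container back to it would be something along these lines (a sketch, snapshot name taken from the volume list above):

# roll jp-u1804 back to its snapshot
lxc restore jp-u1804 snapsh-po_qhm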