LXD containers stopped and not restarting after the host machine was rebooted


We’ve deployed LXD 3.0.4 on Ubuntu 20.04 and created some containers for our applications.
Everything was working fine until we restarted the host OS.

Now the containers aren’t starting, and an issue with the ZFS pool is shown in the terminal.

Here are some screenshots showing the issues in more detail:

[screenshots attached in the original post]

Hi,
This looks like a storage problem. To debug it, please post the output of the
lxc info --show-log aqua-golive and lxc storage list commands.
Thanks.

The output of lxc info --show-log aqua-golive is

Name: aqua-golive
Remote: unix://
Architecture: x86_64
Created: 2021/06/27 16:05 UTC
Status: Stopped
Type: persistent
Profiles: default

Log:

And the output of lxc storage list is

+---------+-------------+--------+--------------------------------------------+---------+
|  NAME   | DESCRIPTION | DRIVER |                   SOURCE                   | USED BY |
+---------+-------------+--------+--------------------------------------------+---------+
| default |             | zfs    | /var/snap/lxd/common/lxd/disks/default.img | 9       |
+---------+-------------+--------+--------------------------------------------+---------+

Thanks

Pardon me, please also post the output of lxc start aqua-golive --debug.

Sure!

The output of lxc start aqua-golive --debug is

DBUG[09-03|00:43:58] Connecting to a local LXD over a Unix socket 
DBUG[09-03|00:43:58] Sending request to LXD                   method=GET url=http://unix.socket/1.0 etag=
DBUG[09-03|00:43:58] Got response struct from LXD 
DBUG[09-03|00:43:58] 
	{
		"config": {},
		"api_extensions": [
			"storage_zfs_remove_snapshots",
			"container_host_shutdown_timeout",
			"container_stop_priority",
			"container_syscall_filtering",
			"auth_pki",
			"container_last_used_at",
			"etag",
			"patch",
			"usb_devices",
			"https_allowed_credentials",
			"image_compression_algorithm",
			"directory_manipulation",
			"container_cpu_time",
			"storage_zfs_use_refquota",
			"storage_lvm_mount_options",
			"network",
			"profile_usedby",
			"container_push",
			"container_exec_recording",
			"certificate_update",
			"container_exec_signal_handling",
			"gpu_devices",
			"container_image_properties",
			"migration_progress",
			"id_map",
			"network_firewall_filtering",
			"network_routes",
			"storage",
			"file_delete",
			"file_append",
			"network_dhcp_expiry",
			"storage_lvm_vg_rename",
			"storage_lvm_thinpool_rename",
			"network_vlan",
			"image_create_aliases",
			"container_stateless_copy",
			"container_only_migration",
			"storage_zfs_clone_copy",
			"unix_device_rename",
			"storage_lvm_use_thinpool",
			"storage_rsync_bwlimit",
			"network_vxlan_interface",
			"storage_btrfs_mount_options",
			"entity_description",
			"image_force_refresh",
			"storage_lvm_lv_resizing",
			"id_map_base",
			"file_symlinks",
			"container_push_target",
			"network_vlan_physical",
			"storage_images_delete",
			"container_edit_metadata",
			"container_snapshot_stateful_migration",
			"storage_driver_ceph",
			"storage_ceph_user_name",
			"resource_limits",
			"storage_volatile_initial_source",
			"storage_ceph_force_osd_reuse",
			"storage_block_filesystem_btrfs",
			"resources",
			"kernel_limits",
			"storage_api_volume_rename",
			"macaroon_authentication",
			"network_sriov",
			"console",
			"restrict_devlxd",
			"migration_pre_copy",
			"infiniband",
			"maas_network",
			"devlxd_events",
			"proxy",
			"network_dhcp_gateway",
			"file_get_symlink",
			"network_leases",
			"unix_device_hotplug",
			"storage_api_local_volume_handling",
			"operation_description",
			"clustering",
			"event_lifecycle",
			"storage_api_remote_volume_handling",
			"nvidia_runtime",
			"candid_authentication",
			"candid_config",
			"candid_config_key",
			"usb_optional_vendorid",
			"id_map_current"
		],
		"api_status": "stable",
		"api_version": "1.0",
		"auth": "trusted",
		"public": false,
		"auth_methods": [
			"tls"
		],
		"environment": {
			"addresses": [],
			"architectures": [
				"x86_64",
				"i686"
			],
			"certificate": "-----BEGIN CERTIFICATE-----\nMIIFZTCCA02gAwIBAgIRAMlH9NNlKLvdI3+u7ijW5WQwDQYJKoZIhvcNAQELBQAw\nQDEcMBoGA1UEChMTbGludXhjb250YWluZXJzLm9yZzEgMB4GA1UEAwwXcm9vdEBh\ncXVhd29ybGQtSFZNLWRvbVUwHhcNMjEwNjI2MDc1NzE2WhcNMzEwNjI0MDc1NzE2\nWjBAMRwwGgYDVQQKExNsaW51eGNvbnRhaW5lcnMub3JnMSAwHgYDVQQDDBdyb290\nQGFxdWF3b3JsZC1IVk0tZG9tVTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoC\nggIBANBmVQtJLKTQMfrIc7viViQ2n4lQBfCH1w9Tidm90HsBepdc2mRkOSxfGGK8\nCLR0TyLO6WClpo0Pc5zfR24Otx71ON2CqBEOPTKeQwYqWqyQZtpRudTm6tmHUTMD\nthZ15LZzfHJn+G6XOVlRxOdHxocRaTWxAJYHCCpnIhC4agRG3v2ycPrM9qlikiRm\n5XQtraTGxeM6pXv4BIT+bpKXaGhxStTGFDk0/8qZ+b7B7KCCwnpTc9yI/7G5XhHM\nJgqlYj1tm4TtjoluKRTDRDqq17vs5WqQtbXUhzmKX2gelmfaFNMalq9AlzVaqJfQ\nG3ew+xyIGlbQtS+UXIf+jAXW6RdwuOSR9vM7yCNBaZUR3i9SIbm1hH2DOvLn7zzN\nHxcm2agAWZ80Dzh8U4zHx86od6JHbbnWKOy0XMp8m/vCHxUIFVPamPPMM24ku41L\nFatkN/WlMydjqd1ovR0lvokel4MuMVgcP/vM7q98ZV+I5ariqrT0iktj3DcuJIs4\n9wRb6RZl3Dk9tNXaFQH7Zjav5lykXeZQHOC+A46Ump7GZZr9tMnfFyPTKenExGUk\nx8j1BJbsCg71PcgCqESVQRFaoO017KBldiNhxNCLVtyQcTIZM3dDZJZP+afoIhzP\n2P9D3W3tLUMn6eNu45MK6NDFfofSgtVs0TSlm5pqVql44r9FAgMBAAGjWjBYMA4G\nA1UdDwEB/wQEAwIFoDATBgNVHSUEDDAKBggrBgEFBQcDATAMBgNVHRMBAf8EAjAA\nMCMGA1UdEQQcMBqCEmFxdWF3b3JsZC1IVk0tZG9tVYcErB0BSzANBgkqhkiG9w0B\nAQsFAAOCAgEATvr5raOeS64a0N7qRB1o8s7rxa83kUlKPKynyFcnz5wCX/BEDaWS\nRKGi7X5OdExv7pL0r957JBXxmN8274iyRwaBWzJ4LrKGTcDPlbNyfAP7auztkM0O\nLif/8zkCf1mYSLDQwHnuUK2RnekVf8Ns3YZUNpyqNEMUnMo2fgchtruj4QHIw5Kg\nTOsW6G/mPD12M6Wr7NG2QU4FUrTkqEewjpmtViaGtGTSm7covXMDbW2r5gVhim5C\n2gR7efbY3x5weODTz99DxxEHZtQ/qQKD78clGNOiTw+X+a7uaq5hZGDTdoB8ENzJ\nWPlRup4Vna1DVm2YD3eLd9gvyaQ+ZoyYQwOqTAJbcNKpAs9jxnR5jDWteqXzVRrn\nRO1o/x/AK6+pFUU2GI2q8FoptEjBbvsdkWn1yf5XohJybsQNEPjARrJnX3hSOJdv\nrv5oKEGXmhq73+JD27H6mI7Ex1MDM7myCd28s7nnHV4ltaIQCtoPdhdkdGC5TtvH\nzk1E1lHm99g0sZgJ0fXQMfFNeXntrCOA3bD9jTG5MpFHrc1q128u7As4tgzp1gvI\nzN3Lfk6XOQoZ3IQZNhithlL+gA3qAJLhPzPrLtM5v3drLA/vZWihmTCndPLqMNCB\n5/O2uRDy4Iguy+/oIrtQive63wLmaABpFdfV6gRA0tQ1A7d8suNd+2Y=\n-----END CERTIFICATE-----\n",
			"certificate_fingerprint": "a2a2f2afa3d6bb312feca2349aadf3eb34ec5de6b005237ef10c6fa71b4ff3d9",
			"driver": "lxc",
			"driver_version": "3.0.4",
			"kernel": "Linux",
			"kernel_architecture": "x86_64",
			"kernel_features": null,
			"kernel_version": "5.11.0-27-generic",
			"lxc_features": null,
			"project": "",
			"server": "lxd",
			"server_clustered": false,
			"server_name": "aquaworld-HVM-domU",
			"server_pid": 7569,
			"server_version": "3.0.4",
			"storage": "",
			"storage_version": ""
		}
	} 
DBUG[09-03|00:43:58] Sending request to LXD                   method=GET url=http://unix.socket/1.0/containers/aqua-golive etag=
DBUG[09-03|00:43:58] Got response struct from LXD 
DBUG[09-03|00:43:58] 
	{
		"architecture": "x86_64",
		"config": {
			"image.architecture": "x86_64",
			"image.description": "Ubuntu 18.04 LTS server (20190424)",
			"image.os": "ubuntu",
			"image.release": "bionic",
			"volatile.base_image": "b20f0cac0892cee029e5c65e8a36c7684e0d685bd0b22f839af5fd81a51b5f16",
			"volatile.eth0.hwaddr": "00:16:3e:cb:59:03",
			"volatile.idmap.base": "0",
			"volatile.idmap.current": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000}]",
			"volatile.idmap.next": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000}]",
			"volatile.last_state.idmap": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000}]",
			"volatile.last_state.power": "STOPPED"
		},
		"devices": {},
		"ephemeral": false,
		"profiles": [
			"default"
		],
		"stateful": false,
		"description": "",
		"created_at": "2021-06-27T21:50:45+05:45",
		"expanded_config": {
			"image.architecture": "x86_64",
			"image.description": "Ubuntu 18.04 LTS server (20190424)",
			"image.os": "ubuntu",
			"image.release": "bionic",
			"volatile.base_image": "b20f0cac0892cee029e5c65e8a36c7684e0d685bd0b22f839af5fd81a51b5f16",
			"volatile.eth0.hwaddr": "00:16:3e:cb:59:03",
			"volatile.idmap.base": "0",
			"volatile.idmap.current": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000}]",
			"volatile.idmap.next": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000}]",
			"volatile.last_state.idmap": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000}]",
			"volatile.last_state.power": "STOPPED"
		},
		"expanded_devices": {
			"eth0": {
				"name": "eth0",
				"nictype": "bridged",
				"parent": "lxdbr0",
				"type": "nic"
			},
			"root": {
				"path": "/",
				"pool": "default",
				"type": "disk"
			}
		},
		"name": "aqua-golive",
		"status": "Stopped",
		"status_code": 102,
		"last_used_at": "2021-06-27T21:52:26.80536393+05:45",
		"location": ""
	} 
DBUG[09-03|00:43:58] Connected to the websocket 
DBUG[09-03|00:43:58] Sending request to LXD                   method=PUT url=http://unix.socket/1.0/containers/aqua-golive/state etag=
DBUG[09-03|00:43:58] 
	{
		"action": "start",
		"timeout": 0,
		"force": false,
		"stateful": false
	} 
DBUG[09-03|00:43:58] Got operation from LXD 
DBUG[09-03|00:43:58] 
	{
		"id": "671fdc50-cfd9-4b7c-a389-59ac47f70dc2",
		"class": "task",
		"description": "Starting container",
		"created_at": "2021-09-03T00:43:58.659531156+05:45",
		"updated_at": "2021-09-03T00:43:58.659531156+05:45",
		"status": "Running",
		"status_code": 103,
		"resources": {
			"containers": [
				"/1.0/containers/aqua-golive"
			]
		},
		"metadata": null,
		"may_cancel": false,
		"err": "",
		"location": "none"
	} 
DBUG[09-03|00:43:58] Sending request to LXD                   method=GET url=http://unix.socket/1.0/operations/671fdc50-cfd9-4b7c-a389-59ac47f70dc2 etag=
DBUG[09-03|00:43:58] Got response struct from LXD 
DBUG[09-03|00:43:58] 
	{
		"id": "671fdc50-cfd9-4b7c-a389-59ac47f70dc2",
		"class": "task",
		"description": "Starting container",
		"created_at": "2021-09-03T00:43:58.659531156+05:45",
		"updated_at": "2021-09-03T00:43:58.659531156+05:45",
		"status": "Running",
		"status_code": 103,
		"resources": {
			"containers": [
				"/1.0/containers/aqua-golive"
			]
		},
		"metadata": null,
		"may_cancel": false,
		"err": "",
		"location": "none"
	} 
Error: Common start logic: The "zfs" tool is not enabled
Try `lxc info --show-log aqua-golive` for more info

I suppose the zfs kernel module somehow did not get loaded when the host was rebooted.
What is the output of modinfo zfs?
You can run modprobe zfs if the module is not loaded.
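For reference, a minimal way to check whether the module is actually loaded, and to load it by hand if it is not (a sketch, assuming a standard Ubuntu kernel with the zfs module shipped under /lib/modules):

# Check whether the zfs module is currently loaded
lsmod | grep -w zfs || echo "zfs module not loaded"

# Load it manually if it is missing
sudo modprobe zfs

# Confirm the version of the loaded module
cat /sys/module/zfs/version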

The output of modinfo zfs is

filename:       /lib/modules/5.11.0-27-generic/kernel/zfs/zfs.ko
version:        2.0.2-1ubuntu5
license:        CDDL
author:         OpenZFS
description:    ZFS
alias:          devname:zfs
alias:          char-major-10-249
srcversion:     F267DF7B3FFB43AFE76257D
depends:        spl,znvpair,icp,zlua,zzstd,zunicode,zcommon,zavl
retpoline:      Y
name:           zfs
vermagic:       5.11.0-27-generic SMP mod_unload modversions 
sig_id:         PKCS#7
signer:         Build time autogenerated kernel key
sig_key:        52:8B:83:8F:27:8C:CF:56:B3:20:46:06:E7:B4:DC:30:4E:FC:7A:6A
sig_hashalgo:   sha512
signature:      AF:2F:D6:FF:7F:38:70:07:E7:23:39:0F:2F:19:2D:AC:0E:75:19:50:
		F6:3D:8D:DE:2E:AA:5D:10:24:30:98:5B:6E:1A:99:F3:7C:5F:AA:66:
		A4:81:27:06:01:98:64:97:FB:0A:BB:4A:8A:A6:0A:AE:7C:37:E9:2C:
		83:2B:5F:73:7D:B3:34:29:FC:BA:52:D2:45:C6:4B:56:63:79:46:61:
		A6:AD:61:93:C8:9D:A0:1F:07:94:B6:EA:0B:27:E5:2D:B4:26:56:1C:
		7C:28:71:1A:1C:BA:1E:B2:D2:34:F8:90:B4:7F:48:BF:00:99:03:8B:
		DE:BB:78:FF:84:03:42:CC:29:96:76:52:13:B8:A2:64:F2:FA:B1:07:
		E4:6A:34:2A:89:F6:A3:0B:8B:15:CF:5E:22:7F:85:93:11:09:7E:01:
		3E:01:10:1D:DF:97:E1:53:FC:33:09:90:24:CB:DA:29:B2:F9:4C:92:
		08:3E:29:65:0E:D6:1A:FF:8B:F8:68:34:E6:47:54:D5:99:29:FF:9A:
		BE:0B:0A:9F:6E:9E:CF:2B:F1:D8:CC:30:B2:AA:F7:26:1F:D1:37:17:
		0C:9E:02:A6:73:44:56:AB:19:C1:06:0D:9F:6A:16:81:A0:C0:D0:19:
		83:E8:B3:97:D5:3E:E6:75:2C:58:B0:EA:9F:D6:95:CF:3A:07:0E:AC:
		F6:45:CE:A2:8E:C6:10:BB:C0:9A:DD:F6:A8:90:E4:A6:5D:B8:88:05:
		C7:2E:90:E1:E5:D2:D2:AE:47:E5:A2:72:FD:81:F3:C6:39:95:2E:4A:
		27:D8:46:B3:BE:DF:23:2D:FA:83:D8:D6:F4:F8:EA:8C:AA:FF:FA:81:
		DF:19:C1:42:3F:13:0E:22:6E:D7:9E:C9:69:CA:C2:B5:81:17:09:3E:
		0B:5C:42:3D:AA:70:A4:F1:A5:15:B9:52:BC:38:93:D1:51:FA:9A:C4:
		98:92:12:DD:8B:09:22:1C:CB:96:0C:FC:9A:A9:AB:B9:CE:42:D9:62:
		FD:87:4F:C0:B7:9B:D7:2C:34:E9:53:8A:A9:88:D2:4B:11:66:20:DF:
		EF:57:EC:8C:AA:7D:4F:4C:25:09:DF:11:4C:05:9B:F3:C6:10:07:15:
		AA:90:47:D4:29:E2:4C:D9:7C:75:DA:89:72:7B:AA:9B:4A:CB:CB:C0:
		A6:F6:96:00:BA:02:DE:BB:C7:46:EE:64:B5:21:DA:FF:70:0C:9B:1A:
		3D:28:CE:8B:93:4C:0C:AB:A1:AB:0B:91:02:68:FE:DB:7B:79:9F:39:
		04:63:AB:B6:04:78:DC:9E:FC:1B:CC:C9:17:48:7A:0B:01:EF:6E:FD:
		C7:40:2E:0C:D0:7B:13:90:0D:D9:6B:45
parm:           zvol_inhibit_dev:Do not create zvol device nodes (uint)
parm:           zvol_major:Major number for zvol device (uint)
parm:           zvol_threads:Max number of threads to handle I/O requests (uint)
parm:           zvol_request_sync:Synchronously handle bio requests (uint)
parm:           zvol_max_discard_blocks:Max number of blocks to discard (ulong)
parm:           zvol_prefetch_bytes:Prefetch N bytes at zvol start+end (uint)
parm:           zvol_volmode:Default volmode property value (uint)
parm:           zfs_fallocate_reserve_percent:Percentage of length to use for the available capacity check (uint)
parm:           zfs_key_max_salt_uses:Max number of times a salt value can be used for generating encryption keys before it is rotated (ulong)
parm:           zfs_object_mutex_size:Size of znode hold array (uint)
parm:           zfs_unlink_suspend_progress:Set to prevent async unlinks (debug - leaks space into the unlinked set) (int)
parm:           zfs_delete_blocks:Delete files larger than N blocks async (ulong)
parm:           zfs_dbgmsg_enable:Enable ZFS debug message log (int)
parm:           zfs_dbgmsg_maxsize:Maximum ZFS debug log size (int)
parm:           zfs_admin_snapshot:Enable mkdir/rmdir/mv in .zfs/snapshot (int)
parm:           zfs_expire_snapshot:Seconds to expire .zfs/snapshot (int)
parm:           vdev_file_logical_ashift:Logical ashift for file-based devices (ulong)
parm:           vdev_file_physical_ashift:Physical ashift for file-based devices (ulong)
parm:           zfs_vdev_scheduler:I/O scheduler
parm:           zfs_arc_shrinker_limit:Limit on number of pages that ARC shrinker can reclaim at once (int)
parm:           zfs_abd_scatter_enabled:Toggle whether ABD allocations must be linear. (int)
parm:           zfs_abd_scatter_min_size:Minimum size of scatter allocations. (int)
parm:           zfs_abd_scatter_max_order:Maximum order allocation used for a scatter ABD. (uint)
parm:           zio_slow_io_ms:Max I/O completion time (milliseconds) before marking it as slow (int)
parm:           zio_requeue_io_start_cut_in_line:Prioritize requeued I/O (int)
parm:           zfs_sync_pass_deferred_free:Defer frees starting in this pass (int)
parm:           zfs_sync_pass_dont_compress:Don't compress starting in this pass (int)
parm:           zfs_sync_pass_rewrite:Rewrite new bps starting in this pass (int)
parm:           zio_dva_throttle_enabled:Throttle block allocations in the ZIO pipeline (int)
parm:           zio_deadman_log_all:Log all slow ZIOs, not just those with vdevs (int)
parm:           zfs_commit_timeout_pct:ZIL block open timeout percentage (int)
parm:           zil_replay_disable:Disable intent logging replay (int)
parm:           zil_nocacheflush:Disable ZIL cache flushes (int)
parm:           zil_slog_bulk:Limit in bytes slog sync writes per commit (ulong)
parm:           zil_maxblocksize:Limit in bytes of ZIL log block size (int)
parm:           zfs_vnops_read_chunk_size:Bytes to read per chunk (ulong)
parm:           zfs_immediate_write_sz:Largest data block to write to zil (long)
parm:           zfs_max_nvlist_src_size:Maximum size in bytes allowed for src nvlist passed with ZFS ioctls (ulong)
parm:           zfs_history_output_max:Maximum size in bytes of ZFS ioctl output that will be logged (ulong)
parm:           zfs_zevent_retain_max:Maximum recent zevents records to retain for duplicate checking (uint)
parm:           zfs_zevent_retain_expire_secs:Expiration time for recent zevents records (uint)
parm:           zfs_lua_max_instrlimit:Max instruction limit that can be specified for a channel program (ulong)
parm:           zfs_lua_max_memlimit:Max memory limit that can be specified for a channel program (ulong)
parm:           zap_iterate_prefetch:When iterating ZAP object, prefetch it (int)
parm:           zfs_trim_extent_bytes_max:Max size of TRIM commands, larger will be split (uint)
parm:           zfs_trim_extent_bytes_min:Min size of TRIM commands, smaller will be skipped (uint)
parm:           zfs_trim_metaslab_skip:Skip metaslabs which have never been initialized (uint)
parm:           zfs_trim_txg_batch:Min number of txgs to aggregate frees before issuing TRIM (uint)
parm:           zfs_trim_queue_limit:Max queued TRIMs outstanding per leaf vdev (uint)
parm:           zfs_removal_ignore_errors:Ignore hard IO errors when removing device (int)
parm:           zfs_remove_max_segment:Largest contiguous segment to allocate when removing device (int)
parm:           vdev_removal_max_span:Largest span of free chunks a remap segment can span (int)
parm:           zfs_removal_suspend_progress:Pause device removal after this many bytes are copied (debug use only - causes removal to hang) (int)
parm:           zfs_rebuild_max_segment:Max segment size in bytes of rebuild reads (ulong)
parm:           zfs_vdev_raidz_impl:Select raidz implementation.
parm:           zfs_vdev_aggregation_limit:Max vdev I/O aggregation size (int)
parm:           zfs_vdev_aggregation_limit_non_rotating:Max vdev I/O aggregation size for non-rotating media (int)
parm:           zfs_vdev_aggregate_trim:Allow TRIM I/O to be aggregated (int)
parm:           zfs_vdev_read_gap_limit:Aggregate read I/O over gap (int)
parm:           zfs_vdev_write_gap_limit:Aggregate write I/O over gap (int)
parm:           zfs_vdev_max_active:Maximum number of active I/Os per vdev (int)
parm:           zfs_vdev_async_write_active_max_dirty_percent:Async write concurrency max threshold (int)
parm:           zfs_vdev_async_write_active_min_dirty_percent:Async write concurrency min threshold (int)
parm:           zfs_vdev_async_read_max_active:Max active async read I/Os per vdev (int)
parm:           zfs_vdev_async_read_min_active:Min active async read I/Os per vdev (int)
parm:           zfs_vdev_async_write_max_active:Max active async write I/Os per vdev (int)
parm:           zfs_vdev_async_write_min_active:Min active async write I/Os per vdev (int)
parm:           zfs_vdev_initializing_max_active:Max active initializing I/Os per vdev (int)
parm:           zfs_vdev_initializing_min_active:Min active initializing I/Os per vdev (int)
parm:           zfs_vdev_removal_max_active:Max active removal I/Os per vdev (int)
parm:           zfs_vdev_removal_min_active:Min active removal I/Os per vdev (int)
parm:           zfs_vdev_scrub_max_active:Max active scrub I/Os per vdev (int)
parm:           zfs_vdev_scrub_min_active:Min active scrub I/Os per vdev (int)
parm:           zfs_vdev_sync_read_max_active:Max active sync read I/Os per vdev (int)
parm:           zfs_vdev_sync_read_min_active:Min active sync read I/Os per vdev (int)
parm:           zfs_vdev_sync_write_max_active:Max active sync write I/Os per vdev (int)
parm:           zfs_vdev_sync_write_min_active:Min active sync write I/Os per vdev (int)
parm:           zfs_vdev_trim_max_active:Max active trim/discard I/Os per vdev (int)
parm:           zfs_vdev_trim_min_active:Min active trim/discard I/Os per vdev (int)
parm:           zfs_vdev_rebuild_max_active:Max active rebuild I/Os per vdev (int)
parm:           zfs_vdev_rebuild_min_active:Min active rebuild I/Os per vdev (int)
parm:           zfs_vdev_nia_credit:Number of non-interactive I/Os to allow in sequence (int)
parm:           zfs_vdev_nia_delay:Number of non-interactive I/Os before _max_active (int)
parm:           zfs_vdev_queue_depth_pct:Queue depth percentage for each top-level vdev (int)
parm:           zfs_vdev_mirror_rotating_inc:Rotating media load increment for non-seeking I/O's (int)
parm:           zfs_vdev_mirror_rotating_seek_inc:Rotating media load increment for seeking I/O's (int)
parm:           zfs_vdev_mirror_rotating_seek_offset:Offset in bytes from the last I/O which triggers a reduced rotating media seek increment (int)
parm:           zfs_vdev_mirror_non_rotating_inc:Non-rotating media load increment for non-seeking I/O's (int)
parm:           zfs_vdev_mirror_non_rotating_seek_inc:Non-rotating media load increment for seeking I/O's (int)
parm:           zfs_initialize_value:Value written during zpool initialize (ulong)
parm:           zfs_initialize_chunk_size:Size in bytes of writes by zpool initialize (ulong)
parm:           zfs_condense_indirect_vdevs_enable:Whether to attempt condensing indirect vdev mappings (int)
parm:           zfs_condense_min_mapping_bytes:Don't bother condensing if the mapping uses less than this amount of memory (ulong)
parm:           zfs_condense_max_obsolete_bytes:Minimum size obsolete spacemap to attempt condensing (ulong)
parm:           zfs_condense_indirect_commit_entry_delay_ms:Used by tests to ensure certain actions happen in the middle of a condense. A maximum value of 1 should be sufficient. (int)
parm:           zfs_reconstruct_indirect_combinations_max:Maximum number of combinations when reconstructing split segments (int)
parm:           zfs_vdev_cache_max:Inflate reads small than max (int)
parm:           zfs_vdev_cache_size:Total size of the per-disk cache (int)
parm:           zfs_vdev_cache_bshift:Shift size to inflate reads too (int)
parm:           zfs_vdev_default_ms_count:Target number of metaslabs per top-level vdev (int)
parm:           zfs_vdev_default_ms_shift:Default limit for metaslab size (int)
parm:           zfs_vdev_min_ms_count:Minimum number of metaslabs per top-level vdev (int)
parm:           zfs_vdev_ms_count_limit:Practical upper limit of total metaslabs per top-level vdev (int)
parm:           zfs_slow_io_events_per_second:Rate limit slow IO (delay) events to this many per second (uint)
parm:           zfs_checksum_events_per_second:Rate limit checksum events to this many checksum errors per second (do not set below zed threshold). (uint)
parm:           zfs_scan_ignore_errors:Ignore errors during resilver/scrub (int)
parm:           vdev_validate_skip:Bypass vdev_validate() (int)
parm:           zfs_nocacheflush:Disable cache flushes (int)
parm:           zfs_vdev_min_auto_ashift:Minimum ashift used when creating new top-level vdevs
parm:           zfs_vdev_max_auto_ashift:Maximum ashift used when optimizing for logical -> physical sector size on new top-level vdevs
parm:           zfs_txg_timeout:Max seconds worth of delta per txg (int)
parm:           zfs_read_history:Historical statistics for the last N reads (int)
parm:           zfs_read_history_hits:Include cache hits in read history (int)
parm:           zfs_txg_history:Historical statistics for the last N txgs (int)
parm:           zfs_multihost_history:Historical statistics for last N multihost writes (int)
parm:           zfs_flags:Set additional debugging flags (uint)
parm:           zfs_recover:Set to attempt to recover from fatal errors (int)
parm:           zfs_free_leak_on_eio:Set to ignore IO errors during free and permanently leak the space (int)
parm:           zfs_deadman_checktime_ms:Dead I/O check interval in milliseconds (ulong)
parm:           zfs_deadman_enabled:Enable deadman timer (int)
parm:           spa_asize_inflation:SPA size estimate multiplication factor (int)
parm:           zfs_ddt_data_is_special:Place DDT data into the special class (int)
parm:           zfs_user_indirect_is_special:Place user data indirect blocks into the special class (int)
parm:           zfs_deadman_failmode:Failmode for deadman timer
parm:           zfs_deadman_synctime_ms:Pool sync expiration time in milliseconds
parm:           zfs_deadman_ziotime_ms:IO expiration time in milliseconds
parm:           zfs_special_class_metadata_reserve_pct:Small file blocks in special vdevs depends on this much free space available (int)
parm:           spa_slop_shift:Reserved free space in pool
parm:           zfs_unflushed_max_mem_amt:Specific hard-limit in memory that ZFS allows to be used for unflushed changes (ulong)
parm:           zfs_unflushed_max_mem_ppm:Percentage of the overall system memory that ZFS allows to be used for unflushed changes (value is calculated over 1000000 for finer granularity (ulong)
parm:           zfs_unflushed_log_block_max:Hard limit (upper-bound) in the size of the space map log in terms of blocks. (ulong)
parm:           zfs_unflushed_log_block_min:Lower-bound limit for the maximum amount of blocks allowed in log spacemap (see zfs_unflushed_log_block_max) (ulong)
parm:           zfs_unflushed_log_block_pct:Tunable used to determine the number of blocks that can be used for the spacemap log, expressed as a percentage of the total number of metaslabs in the pool (e.g. 400 means the number of log blocks is capped at 4 times the number of metaslabs) (ulong)
parm:           zfs_max_log_walking:The number of past TXGs that the flushing algorithm of the log spacemap feature uses to estimate incoming log blocks (ulong)
parm:           zfs_max_logsm_summary_length:Maximum number of rows allowed in the summary of the spacemap log (ulong)
parm:           zfs_min_metaslabs_to_flush:Minimum number of metaslabs to flush per dirty TXG (ulong)
parm:           zfs_keep_log_spacemaps_at_export:Prevent the log spacemaps from being flushed and destroyed during pool export/destroy (int)
parm:           spa_config_path:SPA config file (/etc/zfs/zpool.cache) (charp)
parm:           zfs_autoimport_disable:Disable pool import at module load (int)
parm:           zfs_spa_discard_memory_limit:Limit for memory used in prefetching the checkpoint space map done on each vdev while discarding the checkpoint (ulong)
parm:           spa_load_verify_shift:log2(fraction of arc that can be used by inflight I/Os when verifying pool during import (int)
parm:           spa_load_verify_metadata:Set to traverse metadata on pool import (int)
parm:           spa_load_verify_data:Set to traverse data on pool import (int)
parm:           spa_load_print_vdev_tree:Print vdev tree to zfs_dbgmsg during pool import (int)
parm:           zio_taskq_batch_pct:Percentage of CPUs to run an IO worker thread (uint)
parm:           zfs_max_missing_tvds:Allow importing pool with up to this number of missing top-level vdevs (in read-only mode) (ulong)
parm:           zfs_livelist_condense_zthr_pause:Set the livelist condense zthr to pause (int)
parm:           zfs_livelist_condense_sync_pause:Set the livelist condense synctask to pause (int)
parm:           zfs_livelist_condense_sync_cancel:Whether livelist condensing was canceled in the synctask (int)
parm:           zfs_livelist_condense_zthr_cancel:Whether livelist condensing was canceled in the zthr function (int)
parm:           zfs_livelist_condense_new_alloc:Whether extra ALLOC blkptrs were added to a livelist entry while it was being condensed (int)
parm:           zfs_multilist_num_sublists:Number of sublists used in each multilist (int)
parm:           zfs_multihost_interval:Milliseconds between mmp writes to each leaf
parm:           zfs_multihost_fail_intervals:Max allowed period without a successful mmp write (uint)
parm:           zfs_multihost_import_intervals:Number of zfs_multihost_interval periods to wait for activity (uint)
parm:           metaslab_aliquot:Allocation granularity (a.k.a. stripe size) (ulong)
parm:           metaslab_debug_load:Load all metaslabs when pool is first opened (int)
parm:           metaslab_debug_unload:Prevent metaslabs from being unloaded (int)
parm:           metaslab_preload_enabled:Preload potential metaslabs during reassessment (int)
parm:           metaslab_unload_delay:Delay in txgs after metaslab was last used before unloading (int)
parm:           metaslab_unload_delay_ms:Delay in milliseconds after metaslab was last used before unloading (int)
parm:           zfs_mg_noalloc_threshold:Percentage of metaslab group size that should be free to make it eligible for allocation (int)
parm:           zfs_mg_fragmentation_threshold:Percentage of metaslab group size that should be considered eligible for allocations unless all metaslab groups within the metaslab class have also crossed this threshold (int)
parm:           zfs_metaslab_fragmentation_threshold:Fragmentation for metaslab to allow allocation (int)
parm:           metaslab_fragmentation_factor_enabled:Use the fragmentation metric to prefer less fragmented metaslabs (int)
parm:           metaslab_lba_weighting_enabled:Prefer metaslabs with lower LBAs (int)
parm:           metaslab_bias_enabled:Enable metaslab group biasing (int)
parm:           zfs_metaslab_segment_weight_enabled:Enable segment-based metaslab selection (int)
parm:           zfs_metaslab_switch_threshold:Segment-based metaslab selection maximum buckets before switching (int)
parm:           metaslab_force_ganging:Blocks larger than this size are forced to be gang blocks (ulong)
parm:           metaslab_df_max_search:Max distance (bytes) to search forward before using size tree (int)
parm:           metaslab_df_use_largest_segment:When looking in size tree, use largest segment instead of exact fit (int)
parm:           zfs_metaslab_max_size_cache_sec:How long to trust the cached max chunk size of a metaslab (ulong)
parm:           zfs_metaslab_mem_limit:Percentage of memory that can be used to store metaslab range trees (int)
parm:           zfs_zevent_len_max:Max event queue length (int)
parm:           zfs_zevent_cols:Max event column width (int)
parm:           zfs_zevent_console:Log events to the console (int)
parm:           zfs_scan_vdev_limit:Max bytes in flight per leaf vdev for scrubs and resilvers (ulong)
parm:           zfs_scrub_min_time_ms:Min millisecs to scrub per txg (int)
parm:           zfs_obsolete_min_time_ms:Min millisecs to obsolete per txg (int)
parm:           zfs_free_min_time_ms:Min millisecs to free per txg (int)
parm:           zfs_resilver_min_time_ms:Min millisecs to resilver per txg (int)
parm:           zfs_scan_suspend_progress:Set to prevent scans from progressing (int)
parm:           zfs_no_scrub_io:Set to disable scrub I/O (int)
parm:           zfs_no_scrub_prefetch:Set to disable scrub prefetching (int)
parm:           zfs_async_block_max_blocks:Max number of blocks freed in one txg (ulong)
parm:           zfs_max_async_dedup_frees:Max number of dedup blocks freed in one txg (ulong)
parm:           zfs_free_bpobj_enabled:Enable processing of the free_bpobj (int)
parm:           zfs_scan_mem_lim_fact:Fraction of RAM for scan hard limit (int)
parm:           zfs_scan_issue_strategy:IO issuing strategy during scrubbing. 0 = default, 1 = LBA, 2 = size (int)
parm:           zfs_scan_legacy:Scrub using legacy non-sequential method (int)
parm:           zfs_scan_checkpoint_intval:Scan progress on-disk checkpointing interval (int)
parm:           zfs_scan_max_ext_gap:Max gap in bytes between sequential scrub / resilver I/Os (ulong)
parm:           zfs_scan_mem_lim_soft_fact:Fraction of hard limit used as soft limit (int)
parm:           zfs_scan_strict_mem_lim:Tunable to attempt to reduce lock contention (int)
parm:           zfs_scan_fill_weight:Tunable to adjust bias towards more filled segments during scans (int)
parm:           zfs_resilver_disable_defer:Process all resilvers immediately (int)
parm:           zfs_dirty_data_max_percent:Max percent of RAM allowed to be dirty (int)
parm:           zfs_dirty_data_max_max_percent:zfs_dirty_data_max upper bound as % of RAM (int)
parm:           zfs_delay_min_dirty_percent:Transaction delay threshold (int)
parm:           zfs_dirty_data_max:Determines the dirty space limit (ulong)
parm:           zfs_dirty_data_max_max:zfs_dirty_data_max upper bound in bytes (ulong)
parm:           zfs_dirty_data_sync_percent:Dirty data txg sync threshold as a percentage of zfs_dirty_data_max (int)
parm:           zfs_delay_scale:How quickly delay approaches infinity (ulong)
parm:           zfs_sync_taskq_batch_pct:Max percent of CPUs that are used to sync dirty data (int)
parm:           zfs_zil_clean_taskq_nthr_pct:Max percent of CPUs that are used per dp_sync_taskq (int)
parm:           zfs_zil_clean_taskq_minalloc:Number of taskq entries that are pre-populated (int)
parm:           zfs_zil_clean_taskq_maxalloc:Max number of taskq entries that are cached (int)
parm:           zfs_livelist_max_entries:Size to start the next sub-livelist in a livelist (ulong)
parm:           zfs_livelist_min_percent_shared:Threshold at which livelist is disabled (int)
parm:           zfs_max_recordsize:Max allowed record size (int)
parm:           zfs_allow_redacted_dataset_mount:Allow mounting of redacted datasets (int)
parm:           zfs_disable_ivset_guid_check:Set to allow raw receives without IVset guids (int)
parm:           zfs_prefetch_disable:Disable all ZFS prefetching (int)
parm:           zfetch_max_streams:Max number of streams per zfetch (uint)
parm:           zfetch_min_sec_reap:Min time before stream reclaim (uint)
parm:           zfetch_max_distance:Max bytes to prefetch per stream (uint)
parm:           zfetch_max_idistance:Max bytes to prefetch indirects for per stream (uint)
parm:           zfetch_array_rd_sz:Number of bytes in a array_read (ulong)
parm:           zfs_pd_bytes_max:Max number of bytes to prefetch (int)
parm:           ignore_hole_birth:Alias for send_holes_without_birth_time (int)
parm:           send_holes_without_birth_time:Ignore hole_birth txg for zfs send (int)
parm:           zfs_send_corrupt_data:Allow sending corrupt data (int)
parm:           zfs_send_queue_length:Maximum send queue length (int)
parm:           zfs_send_unmodified_spill_blocks:Send unmodified spill blocks (int)
parm:           zfs_send_no_prefetch_queue_length:Maximum send queue length for non-prefetch queues (int)
parm:           zfs_send_queue_ff:Send queue fill fraction (int)
parm:           zfs_send_no_prefetch_queue_ff:Send queue fill fraction for non-prefetch queues (int)
parm:           zfs_override_estimate_recordsize:Override block size estimate with fixed size (int)
parm:           zfs_recv_queue_length:Maximum receive queue length (int)
parm:           zfs_recv_queue_ff:Receive queue fill fraction (int)
parm:           zfs_recv_write_batch_size:Maximum amount of writes to batch into one transaction (int)
parm:           dmu_object_alloc_chunk_shift:CPU-specific allocator grabs 2^N objects at once (int)
parm:           zfs_nopwrite_enabled:Enable NOP writes (int)
parm:           zfs_per_txg_dirty_frees_percent:Percentage of dirtied blocks from frees in one TXG (ulong)
parm:           zfs_dmu_offset_next_sync:Enable forcing txg sync to find holes (int)
parm:           dmu_prefetch_max:Limit one prefetch call to this size (int)
parm:           zfs_dedup_prefetch:Enable prefetching dedup-ed blks (int)
parm:           zfs_dbuf_state_index:Calculate arc header index (int)
parm:           dbuf_cache_max_bytes:Maximum size in bytes of the dbuf cache. (ulong)
parm:           dbuf_cache_hiwater_pct:Percentage over dbuf_cache_max_bytes when dbufs must be evicted directly. (uint)
parm:           dbuf_cache_lowater_pct:Percentage below dbuf_cache_max_bytes when the evict thread stops evicting dbufs. (uint)
parm:           dbuf_metadata_cache_max_bytes:Maximum size in bytes of the dbuf metadata cache. (ulong)
parm:           dbuf_cache_shift:Set the size of the dbuf cache to a log2 fraction of arc size. (int)
parm:           dbuf_metadata_cache_shift:Set the size of the dbuf metadata cache to a log2 fraction of arc size. (int)
parm:           zfs_arc_min:Min arc size
parm:           zfs_arc_max:Max arc size
parm:           zfs_arc_meta_limit:Metadata limit for arc size
parm:           zfs_arc_meta_limit_percent:Percent of arc size for arc meta limit
parm:           zfs_arc_meta_min:Min arc metadata
parm:           zfs_arc_meta_prune:Meta objects to scan for prune (int)
parm:           zfs_arc_meta_adjust_restarts:Limit number of restarts in arc_evict_meta (int)
parm:           zfs_arc_meta_strategy:Meta reclaim strategy (int)
parm:           zfs_arc_grow_retry:Seconds before growing arc size
parm:           zfs_arc_p_dampener_disable:Disable arc_p adapt dampener (int)
parm:           zfs_arc_shrink_shift:log2(fraction of arc to reclaim)
parm:           zfs_arc_pc_percent:Percent of pagecache to reclaim arc to (uint)
parm:           zfs_arc_p_min_shift:arc_c shift to calc min/max arc_p
parm:           zfs_arc_average_blocksize:Target average block size (int)
parm:           zfs_compressed_arc_enabled:Disable compressed arc buffers (int)
parm:           zfs_arc_min_prefetch_ms:Min life of prefetch block in ms
parm:           zfs_arc_min_prescient_prefetch_ms:Min life of prescient prefetched block in ms
parm:           l2arc_write_max:Max write bytes per interval (ulong)
parm:           l2arc_write_boost:Extra write bytes during device warmup (ulong)
parm:           l2arc_headroom:Number of max device writes to precache (ulong)
parm:           l2arc_headroom_boost:Compressed l2arc_headroom multiplier (ulong)
parm:           l2arc_trim_ahead:TRIM ahead L2ARC write size multiplier (ulong)
parm:           l2arc_feed_secs:Seconds between L2ARC writing (ulong)
parm:           l2arc_feed_min_ms:Min feed interval in milliseconds (ulong)
parm:           l2arc_noprefetch:Skip caching prefetched buffers (int)
parm:           l2arc_feed_again:Turbo L2ARC warmup (int)
parm:           l2arc_norw:No reads during writes (int)
parm:           l2arc_meta_percent:Percent of ARC size allowed for L2ARC-only headers (int)
parm:           l2arc_rebuild_enabled:Rebuild the L2ARC when importing a pool (int)
parm:           l2arc_rebuild_blocks_min_l2size:Min size in bytes to write rebuild log blocks in L2ARC (ulong)
parm:           l2arc_mfuonly:Cache only MFU data from ARC into L2ARC (int)
parm:           zfs_arc_lotsfree_percent:System free memory I/O throttle in bytes
parm:           zfs_arc_sys_free:System free memory target size in bytes
parm:           zfs_arc_dnode_limit:Minimum bytes of dnodes in arc
parm:           zfs_arc_dnode_limit_percent:Percent of ARC meta buffers for dnodes
parm:           zfs_arc_dnode_reduce_percent:Percentage of excess dnodes to try to unpin (ulong)
parm:           zfs_arc_eviction_pct:When full, ARC allocation waits for eviction of this % of alloc size (int)
parm:           zfs_arc_evict_batch_limit:The number of headers to evict per sublist before moving to the next (int)

Another thing: when I run zfs --version on the command line, it shows

zfs-0.8.3-1ubuntu12.12
zfs-kmod-2.0.2-1ubuntu5
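Side note on that output: the first line is the version of the userspace tools and the second is the loaded kernel module, and here they differ (0.8.3 vs 2.0.2). The two can be checked independently, roughly like this (a sketch, assuming standard Ubuntu packaging):

dpkg -l zfsutils-linux            # version of the installed userspace tools package
cat /sys/module/zfs/version       # version of the kernel module currently loaded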

The problem looks like this case: https://github.com/lxc/lxd/issues/4251
Running systemctl reload snap.lxd.daemon should solve the problem.
Regards.
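A quick way to apply that and then verify whether the storage pool error is gone (a sketch, assuming the snap-packaged LXD used in this setup):

sudo systemctl reload snap.lxd.daemon

# Check the last few daemon log lines for the storage pool error
journalctl -u snap.lxd.daemon -n 20 --no-pager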

Still not working.

Any help will be much appreciated.

Thanks!

Hmm, can you show the output of lsmod | grep -i zfs and lxc storage show default? I forgot to mention: please also post the output of journalctl -u snap.lxd.daemon | tail -n 50.
Thanks.

The output of lsmod | grep -i zfs is

zfs                  4182016  6
zunicode              331776  1 zfs
zzstd                 532480  1 zfs
zlua                  147456  1 zfs
zavl                   16384  1 zfs
icp                   303104  1 zfs
zcommon                98304  2 zfs,icp
znvpair                90112  2 zfs,zcommon
spl                   102400  6 zfs,icp,zzstd,znvpair,zcommon,zavl

The output of lxc storage show default is

config:
  size: 15GB
  source: /var/snap/lxd/common/lxd/disks/default.img
  zfs.pool_name: default
description: ""
name: default
driver: zfs
used_by:
- /1.0/containers/aqua-golive
- /1.0/containers/aqua-stage
- /1.0/containers/aqua-stage-infodev
- /1.0/containers/nepalaundry-odoo14
- /1.0/images/2f4f6283a82623fdacd079e72a3dc7f4486e4e7e74874a930720d221da531b6a
- /1.0/images/7a35d7e3a068dcde46033623612370aab760b712f1aba6f847acd38d865d2db6
- /1.0/images/7af59de84a7d684218142095815e2cae1ab1185eadb0a6375b830d0b12e0d266
- /1.0/images/b20f0cac0892cee029e5c65e8a36c7684e0d685bd0b22f839af5fd81a51b5f16
- /1.0/profiles/default
status: Created
locations:
- none

And the output of journalctl -u snap.lxd.daemon | tail -n 50 is

सितम्बर 03 16:13:55 aquaworld-HVM-domU lxd.daemon[3191]:   5: fd:  11: pids
सितम्बर 03 16:13:55 aquaworld-HVM-domU lxd.daemon[3191]:   6: fd:  12: rdma
सितम्बर 03 16:13:55 aquaworld-HVM-domU lxd.daemon[3191]:   7: fd:  13: net_cls,net_prio
सितम्बर 03 16:13:55 aquaworld-HVM-domU lxd.daemon[3191]:   8: fd:  14: hugetlb
सितम्बर 03 16:13:55 aquaworld-HVM-domU lxd.daemon[3191]:   9: fd:  15: blkio
सितम्बर 03 16:13:55 aquaworld-HVM-domU lxd.daemon[3191]:  10: fd:  16: devices
सितम्बर 03 16:13:55 aquaworld-HVM-domU lxd.daemon[3191]:  11: fd:  17: name=systemd
सितम्बर 03 16:13:55 aquaworld-HVM-domU lxd.daemon[3191]:  12: fd:  18: unified
सितम्बर 03 16:13:55 aquaworld-HVM-domU lxd.daemon[3100]: => Starting LXD
सितम्बर 03 16:13:55 aquaworld-HVM-domU lxd.daemon[3199]: t=2021-09-03T16:13:55+0545 lvl=eror msg="Error initializing storage pool \"default\": The \"zfs\" tool is not enabled, correct functionality of the storage pool cannot be guaranteed"
सितम्बर 03 16:13:56 aquaworld-HVM-domU lxd.daemon[3100]: => LXD is ready
सितम्बर 03 16:16:25 aquaworld-HVM-domU systemd[1]: Stopping Service for snap application lxd.daemon...
सितम्बर 03 16:16:25 aquaworld-HVM-domU lxd.daemon[3350]: => Stop reason is: host shutdown
सितम्बर 03 16:16:25 aquaworld-HVM-domU lxd.daemon[3350]: => Stopping LXD (with container shutdown)
सितम्बर 03 16:16:25 aquaworld-HVM-domU lxd.daemon[3350]: => Stopping LXCFS
सितम्बर 03 16:16:25 aquaworld-HVM-domU lxd.daemon[3100]: => LXD exited cleanly
सितम्बर 03 16:16:26 aquaworld-HVM-domU systemd[1]: snap.lxd.daemon.service: Succeeded.
सितम्बर 03 16:16:26 aquaworld-HVM-domU systemd[1]: Stopped Service for snap application lxd.daemon.
सितम्बर 03 16:16:26 aquaworld-HVM-domU systemd[1]: Started Service for snap application lxd.daemon.
सितम्बर 03 16:16:26 aquaworld-HVM-domU lxd.daemon[3454]: => Preparing the system
सितम्बर 03 16:16:26 aquaworld-HVM-domU lxd.daemon[3454]: ==> Loading snap configuration
सितम्बर 03 16:16:26 aquaworld-HVM-domU lxd.daemon[3454]: ==> Setting up mntns symlink (mnt:[4026532441])
सितम्बर 03 16:16:26 aquaworld-HVM-domU lxd.daemon[3454]: ==> Setting up kmod wrapper
सितम्बर 03 16:16:26 aquaworld-HVM-domU lxd.daemon[3454]: ==> Preparing /boot
सितम्बर 03 16:16:26 aquaworld-HVM-domU lxd.daemon[3454]: ==> Preparing a clean copy of /run
सितम्बर 03 16:16:26 aquaworld-HVM-domU lxd.daemon[3454]: ==> Preparing a clean copy of /etc
सितम्बर 03 16:16:27 aquaworld-HVM-domU lxd.daemon[3454]: ==> Setting up ceph configuration
सितम्बर 03 16:16:27 aquaworld-HVM-domU lxd.daemon[3454]: ==> Setting up LVM configuration
सितम्बर 03 16:16:27 aquaworld-HVM-domU lxd.daemon[3454]: ==> Rotating logs
सितम्बर 03 16:16:27 aquaworld-HVM-domU lxd.daemon[3454]: ==> Escaping the systemd cgroups
सितम्बर 03 16:16:27 aquaworld-HVM-domU lxd.daemon[3454]: ==> Escaping the systemd process resource limits
सितम्बर 03 16:16:27 aquaworld-HVM-domU lxd.daemon[3454]: => Starting LXCFS
सितम्बर 03 16:16:27 aquaworld-HVM-domU lxd.daemon[3545]: mount namespace: 5
सितम्बर 03 16:16:27 aquaworld-HVM-domU lxd.daemon[3545]: hierarchies:
सितम्बर 03 16:16:27 aquaworld-HVM-domU lxd.daemon[3545]:   0: fd:   6: memory
सितम्बर 03 16:16:27 aquaworld-HVM-domU lxd.daemon[3545]:   1: fd:   7: cpuset
सितम्बर 03 16:16:27 aquaworld-HVM-domU lxd.daemon[3545]:   2: fd:   8: perf_event
सितम्बर 03 16:16:27 aquaworld-HVM-domU lxd.daemon[3545]:   3: fd:   9: freezer
सितम्बर 03 16:16:27 aquaworld-HVM-domU lxd.daemon[3545]:   4: fd:  10: cpu,cpuacct
सितम्बर 03 16:16:27 aquaworld-HVM-domU lxd.daemon[3545]:   5: fd:  11: pids
सितम्बर 03 16:16:27 aquaworld-HVM-domU lxd.daemon[3545]:   6: fd:  12: rdma
सितम्बर 03 16:16:27 aquaworld-HVM-domU lxd.daemon[3545]:   7: fd:  13: net_cls,net_prio
सितम्बर 03 16:16:27 aquaworld-HVM-domU lxd.daemon[3545]:   8: fd:  14: hugetlb
सितम्बर 03 16:16:27 aquaworld-HVM-domU lxd.daemon[3545]:   9: fd:  15: blkio
सितम्बर 03 16:16:27 aquaworld-HVM-domU lxd.daemon[3545]:  10: fd:  16: devices
सितम्बर 03 16:16:27 aquaworld-HVM-domU lxd.daemon[3545]:  11: fd:  17: name=systemd
सितम्बर 03 16:16:27 aquaworld-HVM-domU lxd.daemon[3545]:  12: fd:  18: unified
सितम्बर 03 16:16:27 aquaworld-HVM-domU lxd.daemon[3454]: => Starting LXD
सितम्बर 03 16:16:27 aquaworld-HVM-domU lxd.daemon[3552]: t=2021-09-03T16:16:27+0545 lvl=eror msg="Error initializing storage pool \"default\": The \"zfs\" tool is not enabled, correct functionality of the storage pool cannot be guaranteed"
सितम्बर 03 16:16:28 aquaworld-HVM-domU lxd.daemon[3454]: => LXD is ready

Thanks!

There is something wrong with your configuration: the storage_version field should contain your zfs module version, and there should also be a “storage_supported_drivers” section.
Could you refresh your snap installation, if you don’t mind?
Thanks.

As it doesn’t look like you are running the snap package (and really you should be, so that you are on the LXD 4.0 LTS branch rather than LXD 3.0 LTS), you will need the ZFS tools installed too:

So try

sudo apt install zfsutils-linux
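After installing, a minimal check that the userspace tools are present and working might look like this (a sketch; zpool status only reports pools already imported by the kernel):

# Report both the userland and kernel-module versions
zfs version

# List any pools currently imported by the kernel
sudo zpool status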

Refreshing my snap installation didn’t work.

Can you post the snap list output?

I tried the install command above, and the problem still exists.

Actually, the output when I run sudo systemctl status snap.lxd.daemon is

● snap.lxd.daemon.service - Service for snap application lxd.daemon
     Loaded: loaded (/etc/systemd/system/snap.lxd.daemon.service; static; vendor preset: enabled)
     Active: active (running) since Fri 2021-09-03 18:35:21 +0545; 4min 54s ago
TriggeredBy: ● snap.lxd.daemon.unix.socket
   Main PID: 4885 (daemon.start)
      Tasks: 0 (limit: 4638)
     Memory: 324.0K
     CGroup: /system.slice/snap.lxd.daemon.service
             ‣ 4885 /bin/sh /snap/lxd/11348/commands/daemon.start

सितम्बर 03 18:35:21 aquaworld-HVM-domU lxd.daemon[4976]:   6: fd:  12: rdma
सितम्बर 03 18:35:21 aquaworld-HVM-domU lxd.daemon[4976]:   7: fd:  13: net_cls,net_prio
सितम्बर 03 18:35:21 aquaworld-HVM-domU lxd.daemon[4976]:   8: fd:  14: hugetlb
सितम्बर 03 18:35:21 aquaworld-HVM-domU lxd.daemon[4976]:   9: fd:  15: blkio
सितम्बर 03 18:35:21 aquaworld-HVM-domU lxd.daemon[4976]:  10: fd:  16: devices
सितम्बर 03 18:35:21 aquaworld-HVM-domU lxd.daemon[4976]:  11: fd:  17: name=systemd
सितम्बर 03 18:35:21 aquaworld-HVM-domU lxd.daemon[4976]:  12: fd:  18: unified
सितम्बर 03 18:35:21 aquaworld-HVM-domU lxd.daemon[4885]: => Starting LXD
सितम्बर 03 18:35:21 aquaworld-HVM-domU lxd.daemon[4983]: t=2021-09-03T18:35:21+0545 lvl=eror msg="Error initializing storage pool \"default\": The \"zfs\" tool is not e>
सितम्बर 03 18:35:22 aquaworld-HVM-domU lxd.daemon[4885]: => LXD is ready

The truncated error line above in full:

सितम्बर 03 18:35:21 aquaworld-HVM-domU lxd.daemon[4983]: t=2021-09-03T18:35:21+0545 lvl=eror msg="Error initializing storage pool \"default\": The \"zfs\" tool is not enabled, correct functionality of the storage pool cannot be guaranteed"

Can you post the snap list output please?
Thanks.

snap list displays the following:

Name     Version    Rev    Tracking       Publisher     Notes
certbot  1.18.0     1343   latest/stable  certbot-eff✓  classic
core     16-2.51.4  11606  latest/stable  canonical✓    core
core20   20210702   1081   latest/stable  canonical✓    base
lxd      3.0.4      11348  3.0/stable/…   canonical✓    -

OK, so can you run sudo snap refresh --channel=4.0/stable lxd to upgrade your LXD version please?
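For completeness, a minimal sketch of that upgrade and the follow-up checks (the container name is the one reported earlier in this thread):

sudo snap refresh --channel=4.0/stable lxd

# Confirm the new revision and tracking channel
snap list lxd

# The zfs storage driver should now initialize; check the pool and try starting the container again
lxc storage list
lxc start aqua-golive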
