LXD 5.12: ZFS stopped working in lxd - Error: Required tool 'zpool' is missing when kernel ZFS module version < 0.8

My LXC containers that are on a ZFS pool unexpectedly stopped working. When I try to start them, LXD gives: Error: Required tool 'zpool' is missing

But zpool is NOT missing:

which zpool

And I can run zpool.
LXD version is 5.12. I've tried uninstalling and reinstalling it, but now I have no ZFS support (lxd init doesn't offer ZFS).
modinfo zfs gives
filename: /lib/modules/4.15.0-206-generic/kernel/zfs/zfs.ko
version: 0.7.5-1ubuntu16.12
license: CDDL
author: OpenZFS on Linux
description: ZFS
srcversion: 7C105EF1C775F5F2F9DF168
depends: spl,znvpair,zcommon,zunicode,zavl,icp
retpoline: Y
name: zfs
vermagic: 4.15.0-206-generic SMP mod_unload modversions
signat: PKCS#7
sig_hashalgo: md4
parm: zvol_inhibit_dev:Do not create zvol device nodes (uint)
parm: zvol_major:Major number for zvol device (uint)
parm: zvol_threads:Max number of threads to handle I/O requests (uint)
parm: zvol_request_sync:Synchronously handle bio requests (uint)
parm: zvol_max_discard_blocks:Max number of blocks to discard (ulong)
parm: zvol_prefetch_bytes:Prefetch N bytes at zvol start+end (uint)
parm: zvol_volmode:Default volmode property value (uint)
parm: zio_delay_max:Max zio millisec delay before posting event (int)
parm: zio_requeue_io_start_cut_in_line:Prioritize requeued I/O (int)
parm: zfs_sync_pass_deferred_free:Defer frees starting in this pass (int)
parm: zfs_sync_pass_dont_compress:Don’t compress starting in this pass (int)
parm: zfs_sync_pass_rewrite:Rewrite new bps starting in this pass (int)
parm: zio_dva_throttle_enabled:Throttle block allocations in the ZIO pipeline (int)
parm: zil_replay_disable:Disable intent logging replay (int)
parm: zfs_nocacheflush:Disable cache flushes (int)
parm: zil_slog_bulk:Limit in bytes slog sync writes per commit (ulong)
parm: zfs_object_mutex_size:Size of znode hold array (uint)
parm: zfs_delete_blocks:Delete files larger than N blocks async (ulong)
parm: zfs_read_chunk_size:Bytes to read per chunk (long)
parm: zfs_immediate_write_sz:Largest data block to write to zil (long)
parm: zfs_dbgmsg_enable:Enable ZFS debug message log (int)
parm: zfs_dbgmsg_maxsize:Maximum ZFS debug log size (int)
parm: zfs_admin_snapshot:Enable mkdir/rmdir/mv in .zfs/snapshot (int)
parm: zfs_expire_snapshot:Seconds to expire .zfs/snapshot (int)
parm: zfs_vdev_raidz_impl:Select raidz implementation.
parm: zfs_vdev_aggregation_limit:Max vdev I/O aggregation size (int)
parm: zfs_vdev_read_gap_limit:Aggregate read I/O over gap (int)
parm: zfs_vdev_write_gap_limit:Aggregate write I/O over gap (int)
parm: zfs_vdev_max_active:Maximum number of active I/Os per vdev (int)
parm: zfs_vdev_async_write_active_max_dirty_percent:Async write concurrency max threshold (int)
parm: zfs_vdev_async_write_active_min_dirty_percent:Async write concurrency min threshold (int)
parm: zfs_vdev_async_read_max_active:Max active async read I/Os per vdev (int)
parm: zfs_vdev_async_read_min_active:Min active async read I/Os per vdev (int)
parm: zfs_vdev_async_write_max_active:Max active async write I/Os per vdev (int)
parm: zfs_vdev_async_write_min_active:Min active async write I/Os per vdev (int)
parm: zfs_vdev_scrub_max_active:Max active scrub I/Os per vdev (int)
parm: zfs_vdev_scrub_min_active:Min active scrub I/Os per vdev (int)
parm: zfs_vdev_sync_read_max_active:Max active sync read I/Os per vdev (int)
parm: zfs_vdev_sync_read_min_active:Min active sync read I/Os per vdev (int)
parm: zfs_vdev_sync_write_max_active:Max active sync write I/Os per vdev (int)
parm: zfs_vdev_sync_write_min_active:Min active sync write I/Os per vdev (int)
parm: zfs_vdev_queue_depth_pct:Queue depth percentage for each top-level vdev (int)
parm: zfs_vdev_mirror_rotating_inc:Rotating media load increment for non-seeking I/O’s (int)
parm: zfs_vdev_mirror_rotating_seek_inc:Rotating media load increment for seeking I/O’s (int)
parm: zfs_vdev_mirror_rotating_seek_offset:Offset in bytes from the last I/O which triggers a reduced rotating media seek increment (int)
parm: zfs_vdev_mirror_non_rotating_inc:Non-rotating media load increment for non-seeking I/O’s (int)
parm: zfs_vdev_mirror_non_rotating_seek_inc:Non-rotating media load increment for seeking I/O’s (int)
parm: zfs_vdev_scheduler:I/O scheduler (charp)
parm: zfs_vdev_cache_max:Inflate reads small than max (int)
parm: zfs_vdev_cache_size:Total size of the per-disk cache (int)
parm: zfs_vdev_cache_bshift:Shift size to inflate reads too (int)
parm: metaslabs_per_vdev:Divide added vdev into approximately (but no more than) this number of metaslabs (int)
parm: zfs_txg_timeout:Max seconds worth of delta per txg (int)
parm: zfs_read_history:Historical statistics for the last N reads (int)
parm: zfs_read_history_hits:Include cache hits in read history (int)
parm: zfs_txg_history:Historical statistics for the last N txgs (int)
parm: zfs_multihost_history:Historical statistics for last N multihost writes (int)
parm: zfs_flags:Set additional debugging flags (uint)
parm: zfs_recover:Set to attempt to recover from fatal errors (int)
parm: zfs_free_leak_on_eio:Set to ignore IO errors during free and permanently leak the space (int)
parm: zfs_deadman_synctime_ms:Expiration time in milliseconds (ulong)
parm: zfs_deadman_checktime_ms:Dead I/O check interval in milliseconds (ulong)
parm: zfs_deadman_enabled:Enable deadman timer (int)
parm: spa_asize_inflation:SPA size estimate multiplication factor (int)
parm: spa_slop_shift:Reserved free space in pool (int)
parm: spa_config_path:SPA config file (/etc/zfs/zpool.cache) (charp)
parm: zfs_autoimport_disable:Disable pool import at module load (int)
parm: spa_load_verify_maxinflight:Max concurrent traversal I/Os while verifying pool during import -X (int)
parm: spa_load_verify_metadata:Set to traverse metadata on pool import (int)
parm: spa_load_verify_data:Set to traverse data on pool import (int)
parm: zio_taskq_batch_pct:Percentage of CPUs to run an IO worker thread (uint)
parm: zfs_multilist_num_sublists:Number of sublists used in each multilist (int)
parm: zfs_multihost_fail_intervals:Max allowed period without a successful mmp write (uint)
parm: zfs_multihost_interval:Milliseconds between mmp writes to each leaf
parm: zfs_multihost_import_intervals:Number of zfs_multihost_interval periods to wait for activity (uint)
parm: metaslab_aliquot:allocation granularity (a.k.a. stripe size) (ulong)
parm: metaslab_debug_load:load all metaslabs when pool is first opened (int)
parm: metaslab_debug_unload:prevent metaslabs from being unloaded (int)
parm: metaslab_preload_enabled:preload potential metaslabs during reassessment (int)
parm: zfs_mg_noalloc_threshold:percentage of free space for metaslab group to allow allocation (int)
parm: zfs_mg_fragmentation_threshold:fragmentation for metaslab group to allow allocation (int)
parm: zfs_metaslab_fragmentation_threshold:fragmentation for metaslab to allow allocation (int)
parm: metaslab_fragmentation_factor_enabled:use the fragmentation metric to prefer less fragmented metaslabs (int)
parm: metaslab_lba_weighting_enabled:prefer metaslabs with lower LBAs (int)
parm: metaslab_bias_enabled:enable metaslab group biasing (int)
parm: zfs_metaslab_segment_weight_enabled:enable segment-based metaslab selection (int)
parm: zfs_metaslab_switch_threshold:segment-based metaslab selection maximum buckets before switching (int)
parm: zfs_zevent_len_max:Max event queue length (int)
parm: zfs_zevent_cols:Max event column width (int)
parm: zfs_zevent_console:Log events to the console (int)
parm: zfs_top_maxinflight:Max I/Os per top-level (int)
parm: zfs_resilver_delay:Number of ticks to delay resilver (int)
parm: zfs_scrub_delay:Number of ticks to delay scrub (int)
parm: zfs_scan_idle:Idle window in clock ticks (int)
parm: zfs_scan_min_time_ms:Min millisecs to scrub per txg (int)
parm: zfs_free_min_time_ms:Min millisecs to free per txg (int)
parm: zfs_resilver_min_time_ms:Min millisecs to resilver per txg (int)
parm: zfs_no_scrub_io:Set to disable scrub I/O (int)
parm: zfs_no_scrub_prefetch:Set to disable scrub prefetching (int)
parm: zfs_free_max_blocks:Max number of blocks freed in one txg (ulong)
parm: zfs_free_bpobj_enabled:Enable processing of the free_bpobj (int)
parm: zfs_dirty_data_max_percent:percent of ram can be dirty (int)
parm: zfs_dirty_data_max_max_percent:zfs_dirty_data_max upper bound as % of RAM (int)
parm: zfs_delay_min_dirty_percent:transaction delay threshold (int)
parm: zfs_dirty_data_max:determines the dirty space limit (ulong)
parm: zfs_dirty_data_max_max:zfs_dirty_data_max upper bound in bytes (ulong)
parm: zfs_dirty_data_sync:sync txg when this much dirty data (ulong)
parm: zfs_delay_scale:how quickly delay approaches infinity (ulong)
parm: zfs_sync_taskq_batch_pct:max percent of CPUs that are used to sync dirty data (int)
parm: zfs_zil_clean_taskq_nthr_pct:max percent of CPUs that are used per dp_sync_taskq (int)
parm: zfs_zil_clean_taskq_minalloc:number of taskq entries that are pre-populated (int)
parm: zfs_zil_clean_taskq_maxalloc:max number of taskq entries that are cached (int)
parm: zfs_max_recordsize:Max allowed record size (int)
parm: zfs_prefetch_disable:Disable all ZFS prefetching (int)
parm: zfetch_max_streams:Max number of streams per zfetch (uint)
parm: zfetch_min_sec_reap:Min time before stream reclaim (uint)
parm: zfetch_max_distance:Max bytes to prefetch per stream (default 8MB) (uint)
parm: zfetch_array_rd_sz:Number of bytes in a array_read (ulong)
parm: zfs_pd_bytes_max:Max number of bytes to prefetch (int)
parm: ignore_hole_birth:Alias for send_holes_without_birth_time (int)
parm: send_holes_without_birth_time:Ignore hole_birth txg for zfs send (int)
parm: zfs_send_corrupt_data:Allow sending corrupt data (int)
parm: dmu_object_alloc_chunk_shift:CPU-specific allocator grabs 2^N objects at once (int)
parm: zfs_mdcomp_disable:Disable meta data compression (int)
parm: zfs_nopwrite_enabled:Enable NOP writes (int)
parm: zfs_per_txg_dirty_frees_percent:percentage of dirtied blocks from frees in one TXG (ulong)
parm: zfs_dmu_offset_next_sync:Enable forcing txg sync to find holes (int)
parm: zfs_dedup_prefetch:Enable prefetching dedup-ed blks (int)
parm: zfs_dbuf_state_index:Calculate arc header index (int)
parm: dbuf_cache_max_bytes:Maximum size in bytes of the dbuf cache. (ulong)
parm: dbuf_cache_hiwater_pct:Percentage over dbuf_cache_max_bytes when dbufs must be evicted directly. (uint)
parm: dbuf_cache_lowater_pct:Percentage below dbuf_cache_max_bytes when the evict thread stops evicting dbufs. (uint)
parm: dbuf_cache_max_shift:Cap the size of the dbuf cache to a log2 fraction of arc size. (int)
parm: zfs_arc_min:Min arc size (ulong)
parm: zfs_arc_max:Max arc size (ulong)
parm: zfs_arc_meta_limit:Meta limit for arc size (ulong)
parm: zfs_arc_meta_limit_percent:Percent of arc size for arc meta limit (ulong)
parm: zfs_arc_meta_min:Min arc metadata (ulong)
parm: zfs_arc_meta_prune:Meta objects to scan for prune (int)
parm: zfs_arc_meta_adjust_restarts:Limit number of restarts in arc_adjust_meta (int)
parm: zfs_arc_meta_strategy:Meta reclaim strategy (int)
parm: zfs_arc_grow_retry:Seconds before growing arc size (int)
parm: zfs_arc_p_aggressive_disable:disable aggressive arc_p grow (int)
parm: zfs_arc_p_dampener_disable:disable arc_p adapt dampener (int)
parm: zfs_arc_shrink_shift:log2(fraction of arc to reclaim) (int)
parm: zfs_arc_pc_percent:Percent of pagecache to reclaim arc to (uint)
parm: zfs_arc_p_min_shift:arc_c shift to calc min/max arc_p (int)
parm: zfs_arc_average_blocksize:Target average block size (int)
parm: zfs_compressed_arc_enabled:Disable compressed arc buffers (int)
parm: zfs_arc_min_prefetch_lifespan:Min life of prefetch block (int)
parm: l2arc_write_max:Max write bytes per interval (ulong)
parm: l2arc_write_boost:Extra write bytes during device warmup (ulong)
parm: l2arc_headroom:Number of max device writes to precache (ulong)
parm: l2arc_headroom_boost:Compressed l2arc_headroom multiplier (ulong)
parm: l2arc_feed_secs:Seconds between L2ARC writing (ulong)
parm: l2arc_feed_min_ms:Min feed interval in milliseconds (ulong)
parm: l2arc_noprefetch:Skip caching prefetched buffers (int)
parm: l2arc_feed_again:Turbo L2ARC warmup (int)
parm: l2arc_norw:No reads during writes (int)
parm: zfs_arc_lotsfree_percent:System free memory I/O throttle in bytes (int)
parm: zfs_arc_sys_free:System free memory target size in bytes (ulong)
parm: zfs_arc_dnode_limit:Minimum bytes of dnodes in arc (ulong)
parm: zfs_arc_dnode_limit_percent:Percent of ARC meta buffers for dnodes (ulong)
parm: zfs_arc_dnode_reduce_percent:Percentage of excess dnodes to try to unpin (ulong)
parm: zfs_abd_scatter_enabled:Toggle whether ABD allocations must be linear. (int)
parm: zfs_abd_scatter_max_order:Maximum order allocation used for a scatter ABD. (uint)

How can I solve it? For now I'm copying all my containers to another server, and then I'll try reinstalling the whole system and moving the LXD files back, so I have a little time to try to repair something on this server.

You need to update your ZFS version to 0.8 at least.

With the switch to core22 as part of LXD 5.12, we've had to drop support for ZFS 0.6 and ZFS 0.7, as both are extremely outdated and unsupported, to the point where we couldn't build the userspace tools with the recent toolchain that comes with core22 :frowning:

In your case, it should just be a matter of pointing the ZFS repo at something more recent, updating the packages and rebooting the system; you'll then have a ZFS kernel module that the LXD snap can deal with.


Now my version is 5.12 because I had to uninstall and reinstall LXD. I don't know what version it was before, but even then the containers suddenly stopped working. How did you figure out whether my ZFS version is 0.6 or 0.7, and how do I upgrade to 0.8?

Thank you for your clues, I've installed version 5.11 and now everything works again! But I'm afraid that something will auto-update LXD and the containers will become unavailable again. By the way,
modinfo zfs | grep version showed
version: 0.7.5-1ubuntu16.12
I still don't know how to upgrade to 0.8; apt install gives this version as the newest one. And will the new ZFS work with my pools made with the older ZFS version?
By the way, sometimes I stumble upon the error 'missing profile snap.certbot.certbot.
Please make sure that the snapd.apparmor service is enabled and started' in my nginx LXC container, but I can resolve it by reinstalling certbot.
Thank you for the good support and fast responses!


The ZFS kernel module version needs to align with one of the versions of the tooling the LXD snap bundles, which from LXD 5.12 onwards (due to the switch to the core22 base) is ZFS 0.8 and greater.
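As a quick check, a sketch like the following compares the running module's version against the 0.8 minimum using `sort -V` (the `version_ge` helper is hypothetical, not part of LXD; the sample version is the one from this thread):

```shell
#!/bin/sh
# Hypothetical helper: succeeds when $1 is >= $2 in version order.
version_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# On a live system this would come from:
#   modinfo zfs | awk '/^version:/ {print $2}'
mod_ver="0.7.5"

if version_ge "$mod_ver" "0.8"; then
  echo "ZFS module $mod_ver is new enough for LXD 5.12"
else
  echo "ZFS module $mod_ver is too old (need >= 0.8)"
fi
```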

What distro/kernel version are you running?

For Ubuntu, for instance, which ships the ZFS module with the kernel, using the HWE kernel can get you a newer ZFS module.

For example on Ubuntu 18.04:

sudo apt-get install --install-recommends linux-generic-hwe-18.04

Alternatively you can switch back to LXD 5.11 temporarily using:

snap refresh lxd --channel=5.11/stable

to allow continued operation until you have upgraded ZFS.

Be aware that this means you will not get security or bug fix updates on this channel because LXD 5.11 is no longer supported.
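If the concern is an unwanted auto-refresh while you are still on old ZFS, snapd can also hold refreshes. This is a hedged sketch: the `--hold`/`--unhold` flags assume snapd 2.58 or newer (check `snap refresh --help` on your system):

```shell
# Pin LXD to the 5.11 track for now (as above):
snap refresh lxd --channel=5.11/stable

# On snapd 2.58+, hold automatic refreshes entirely,
# then release the hold once the ZFS module has been upgraded:
snap refresh --hold lxd
snap refresh --unhold lxd
```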

lsb_release -a says that I have Ubuntu 18.04.6 LTS.
uname -r says 4.15.0-206-generic. So I found out that my distro has the GA kernel, and also found the command that adds the HWE kernel to 18.04.
I've executed sudo apt-get install --install-recommends linux-generic-hwe-18.04, rebooted the system, and now uname -r gives me 5.4.0-144-generic. Then I updated with apt update. But when I run apt install zfsutils-linux, it says zfsutils-linux is already the newest version (0.7.5-1ubuntu16.12). So ZFS is still old :(

Is LXD still giving you the same error with the new kernel, though?
I think the host tooling version doesn't matter (as it's bundled in the snap, you don't actually need it installed at all); it's the host ZFS kernel module that matters.
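A minimal sketch of checking just the kernel-side version, independent of any zfsutils-linux package (the `zfs_module_version` helper is hypothetical; the sample text is the modinfo output pasted earlier in this thread):

```shell
#!/bin/sh
# Extract the module version from modinfo-style output.
# On a live system: modinfo zfs | awk '/^version:/ {print $2}'
zfs_module_version() {
  printf '%s\n' "$1" | awk '/^version:/ {print $2}'
}

sample="filename: /lib/modules/4.15.0-206-generic/kernel/zfs/zfs.ko
version: 0.7.5-1ubuntu16.12"

zfs_module_version "$sample"
```

Note that the pattern `/^version:/` deliberately skips the `srcversion:` line, which reports a source checksum rather than the module version.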

No, it works; I just want to update ZFS so I can update from LXD 5.11 to 5.12. If the host tooling doesn't matter, I'll try to update now.

I just tested this on a fresh Ubuntu Bionic VM and it works:

uname -a
Linux vtest 4.15.0-206-generic

snap install lxd

lxc storage create zfs zfs
Error: Required tool 'zpool' is missing

apt-get install --install-recommends linux-generic-hwe-18.04

uname -a
Linux vtest 5.4.0-144-generic

lxc storage create zfs zfs
Storage pool zfs created

Yes, it works. I've updated through snap refresh lxd --channel=latest/candidate, the version is now 5.12 and everything works. Thank you for your help, guys!


Got this problem today on my development machine running Linux Mint 19.3 Tricia.

sudo modinfo zfs | grep version
version: 0.7.5-1ubuntu16.10
srcversion: 67FB53EEE2E7A895E7E0074

Now how do I upgrade Linux Mint 19.3 to get ZFS 0.8?

For now, snap revert lxd moves back to 5.11; I don't know whether it will be upgraded to 5.12 again next time.

Mint 19.3 goes EOL in April 2023 anyway, so it's a good time to upgrade. There may be an option to get a newer kernel, but I'm not too familiar with that distribution.

LXD 5.11 is EOL now, as it's a monthly release and only supported until the next one.
In time, the LXD 5.0.x LTS series will also be switched to the core22 base package.

Same problem here with Debian 10 and the LXD snap.
Today my LXD stopped listing containers, and I'm worried that when the server restarts my containers will refuse to start.

Looks like more recent ZFS modules are available for Buster in backports:
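A hedged sketch of pulling the newer ZFS module from buster-backports on Debian 10 (package names are the standard Debian ones; verify against the backports package page before running):

```shell
# Enable buster-backports (ZFS lives in contrib on Debian):
echo "deb http://deb.debian.org/debian buster-backports main contrib" \
  > /etc/apt/sources.list.d/buster-backports.list
apt update

# Install the DKMS module and tools from backports:
apt install -t buster-backports zfs-dkms zfsutils-linux

# DKMS rebuilds the module for the running kernel; reboot (or reload
# the zfs module) so the newer version is the one actually loaded.
```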



I’m having the same problem with Ubuntu Lunar on a raspberry pi4, which I upgraded from kinetic.

What’s a bit different here is that I had to use the zfs-dkms package to get the zfs module, because it’s not yet available for arm64 in the lunar kernel (see bug #2015001).

lxd snap:

root@pi4:~# snap list lxd
Name  Version       Rev    Tracking       Publisher   Notes
lxd   5.12-c63881f  24646  latest/stable  canonical✓  -

ZFS info:

root@pi4:~# modinfo zfs | grep version
version:        2.1.9-2ubuntu1
srcversion:     28273DC77D01551AE1EDABD
vermagic:       6.2.0-1003-raspi SMP preempt mod_unload modversions aarch64

root@pi4:~# dmesg|grep -i zfs
[ 4255.153211] ZFS: Loaded module v2.1.9-2ubuntu1, ZFS pool version 5000, ZFS filesystem version 5

root@pi4:~# zpool list
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
storage  1.81T  1.02T   810G        -         -    19%    56%  1.00x    ONLINE  -

zfs dkms package:

root@pi4:~# dpkg -l zfs-dkms 
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name           Version        Architecture Description
ii  zfs-dkms       2.1.9-2ubuntu1 all          OpenZFS filesystem kernel modules for Linux

lxc failing:

root@pi4:~# lxc storage list
Error: Required tool 'zpool' is missing

Running kernel

root@pi4:~# uname -a
Linux pi4 6.2.0-1003-raspi #3-Ubuntu SMP PREEMPT Thu Mar  9 19:24:05 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux

Is there any workaround possible? I'm using ZFS 2.2.0rc1, which works perfectly fine with LXD on Arch Linux without snap (it seems to be just a version check); however, on an Ubuntu 22.04 server it's now failing after raising the module version to 2.2 :confused:

I'm pretty sure the actual version doesn't matter that much, as it worked fine with 2.2.0rc1 back when the git builds still reported version 2.1.99 (I need overlayfs and idmapped mounts; both also worked fine).

Edit: updating to the edge channel works fine; ZFS 2.2 is included there.


This will be in LXD 5.16
