ZFS storage - confusion/questions

It seems ZFS is the recommended storage backend for Incus. I used ZFS a long time ago.

When you set up Incus, it asks you to choose a partition/disk for ZFS.

1 Even if I choose a block device, it just creates a zpool with one device. Same thing again: what's the point of using ZFS with only a single-device mirror?

2 I tried setting up k8s (with containerd as the CRI) and it failed with ZFS backend storage:

filesystem on '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/48/fs' not supported as upperdir

I saw a @stgraber video where he recommended adding another disk (e.g. btrfs storage) and mounting the CRI's directory on that device. Is there an improved approach now?

3 I read somewhere that ZFS 2.2 now supports overlayfs by default. I am not sure how to get the latest OpenZFS on Ubuntu 20.04 LTS. Any idea how that can be done?

4 I checked incus info --verbose and found the below for ZFS. Does this mean that Incus 6.4 comes with this ZFS by default, or is it just picking up the default offered by the OS? I guess ZFS has been part of the Linux kernel for some time now.

  storage_supported_drivers:
  - name: zfs
    version: 0.8.3-1ubuntu12.18
    remote: false

1 Even if I choose a block device, it just creates a zpool with one device. Same thing again: what's the point of using ZFS with only a single-device mirror?

You don't need to get Incus to create the ZFS pool. You can create it yourself, and either give it the whole pool, or give it a dataset within that pool, so you can also use the pool for other things (which is what I do).

e.g. create a zpool called "zfs" and a dataset "zfs/incus", then create an Incus storage pool backed by zfs/incus.
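
A minimal sketch of that approach, assuming a spare disk at /dev/sdb (a placeholder) and the pool/dataset names above:

# create the pool and a dedicated dataset for Incus
sudo zpool create zfs /dev/sdb
sudo zfs create zfs/incus
# hand only that dataset to Incus; "mypool" is just an example storage pool name
incus storage create mypool zfs source=zfs/incus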

But even if you’ve created a zfs pool with a single disk / single vdev, you can easily add a second disk to the vdev to make a mirrored pair.
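
For example (a sketch; /dev/sda and /dev/sdb are placeholder device names):

# attach a second disk to the existing single-disk vdev, turning it into a mirror
sudo zpool attach zfs /dev/sda /dev/sdb
zpool status zfs    # should now show a mirror vdev resilvering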

2 I tried setting up k8s (with containerd as the CRI)

Can't help you with CRI on ZFS, sorry. Of course, that doesn't have anything to do with Incus.

Running Docker alongside Incus is not recommended because of how they fight over iptables rules; I don't know whether k8s has a similar issue. It might be better to nest one inside the other.
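
If you do go the nesting route, a hedged sketch (the instance name and image are placeholders):

# run the container runtime inside a nested Incus container instead of on the host
incus launch images:ubuntu/24.04 k8s-node -c security.nesting=true
incus exec k8s-node -- apt install -y containerd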

3 I read somewhere that ZFS 2.2 now supports overlayfs by default. I am not sure how to get the latest OpenZFS on Ubuntu 20.04 LTS. Any idea how that can be done?

If you need ZFS 2.2 then you'd really want to upgrade to Ubuntu 24.04. Maybe 22.04 with the HWE kernel would do, but you'll still have the 2.1.5 userland utilities. 20.04 will be end-of-life in a little over half a year anyway.

4 I checked incus info --verbose and found the below for ZFS. Does this mean that Incus 6.4 comes with this ZFS by default

Yes, Incus supports ZFS by default. I believe it looks for the zfsutils present on the system, and will load the kernel module if required (see another discussion thread about that).
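
A quick way to check what Incus will find on your host (a sketch, not Incus's actual detection logic):

zfs version           # userland tools from zfsutils-linux
lsmod | grep -w zfs   # is the kernel module currently loaded?
sudo modprobe zfs     # load it manually if it isn't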

Nah.
I was able to set up HA k8s (3 master nodes) with the containerd CRI (not Docker) once I used btrfs storage. See OverlayFS support (d_revalidate out and support renameat2 flags) by snajpa · Pull Request #9414 · openzfs/zfs · GitHub. ZFS had issues with the overlayfs that containers use.

I understand Ubuntu 20.04 is about to go EOL. But upgrading the whole OS just to get a newer version of one package is not an efficient way of operating.

Wait,
isn't a dataset like a folder in ZFS, or just filesystem-like, while a ZFS volume is like a block device? So how is using a dataset beneficial?

No, I take the opposite view. The whole point of an LTS distribution is that things don't change during its lifetime. If you install a distribution with zfs 0.8.3, then you get zfs 0.8.3 for the whole OS lifetime, with only security fixes or serious bugfixes backported. This means no unpleasant surprises which could break your applications, but of course it also means that you don't get the newest, shiniest features.

If you want the newest, shiniest features all the time, then you want a rolling distro. You can't have it both ways.

I think you may be confusing a few terms, in particular zpool and zvol.

  • A zpool is the aggregate raw block storage managed by ZFS. It consists of one or more vdevs, each of which could be a single disk (or partition), a mirror set of disks, a raidz set of disks, etc.
  • From the storage provided by a zpool, you can create (see the sketch after this list):
    • zfs datasets, which are filesystems. You store files on them. They are hierarchical, so a dataset can contain other datasets, but each dataset can be mounted at a different place (or not mounted at all), so they don't have to appear like subdirectories on the host.
    • zvols, which are block devices similar to an LVM logical volume. You use these for things that need raw block storage, e.g. VM disk images.
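
A minimal sketch of those terms (all pool, dataset, and device names here are placeholders):

sudo zpool create tank mirror /dev/sda /dev/sdb   # a zpool made of one mirror vdev
sudo zfs create tank/data                         # a dataset (a filesystem)
sudo zfs create -V 10G tank/vmdisk                # a zvol (block device at /dev/zvol/tank/vmdisk)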

If you dedicate an entire zpool to incus, then you can’t use the storage on those disks for anything else.

However, if you give a dataset to incus, then it creates datasets and zvols as children of that dataset. You use it in exactly the same way, BUT you can also create additional top-level datasets within the zpool, that incus isn’t aware of at all.


Ubuntu comes with a Linux kernel that has the ZFS filesystem module built in. This means that when you use Ubuntu, you are stuck with the ZFS version that ships with that Linux kernel.

Other Linux distributions like Debian also support ZFS, but you get ZFS through DKMS. So far so good. But if you want the latest ZFS, you can install the distribution packages by @stgraber. They are also DKMS-based.

If you want to use ZFS with Incus, you need Linux kernel support for ZFS (built in on Ubuntu, DKMS with other Linux distributions), and you also need the ZFS utilities. On Ubuntu the package is zfsutils-linux. If you do not have that package, you will not be prompted by incus admin init to set up ZFS.
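
For example, on Ubuntu (a short sketch of the prerequisite described above):

sudo apt install zfsutils-linux
sudo incus admin init    # the storage step should now offer zfs as a backend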


I took this free course and the next (paid) one (L2), which helped me a ton to grasp ZFS. The course content might be a bit old but is still valid. You'll appreciate how powerful ZFS is. BTW, I am only sharing my experience, in no way "promoting" this course :slight_smile: https://www.udemy.com/course/build-your-professional-advanced-storage-box-level-1/

If I install this on Ubuntu, will it conflict with the existing ZFS that is already in the kernel? Is there any way to disable/uninstall the existing ZFS that came with the kernel?

I am not sure about that. Can you give it a try by installing it in a VM (images:ubuntu/20.04/cloud)? In general, as a policy, you would do a dry run in a VM before trying it on your host.

The zfs.ko kernel module of Ubuntu is part of the Linux kernel package. This means that you can blacklist the ZFS kernel modules in /etc/modprobe.d/, then reboot. Then, you can install the new packages and fear no conflicts.
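
A hedged sketch of that blacklist step (the file name is just a placeholder):

echo "blacklist zfs" | sudo tee /etc/modprobe.d/blacklist-zfs.conf
echo "blacklist spl" | sudo tee -a /etc/modprobe.d/blacklist-zfs.conf
sudo update-initramfs -u   # so the blacklist also applies in the initramfs
sudo reboot
# after the reboot, install the new OpenZFS DKMS packages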

Since I was on a test system,
I just installed it directly there. This is a GCP VM, Ubuntu 20.04. They have their own kernel, 5.15.0-1067-gcp. zfs.ko is not available. Strange.

apt install openzfs-zfsutils openzfs-zfs-dkms openzfs-zfs-initramfs
Building initial module for 5.15.0-1067-gcp

Can't load /var/lib/shim-signed/mok/.rnd into RNG
140072541992256:error:2406F079:random number generator:RAND_load_file:Cannot open file:../crypto/rand/randfile.c:98:Filename=/var/lib/shim-signed/mok/.rnd
Generating a RSA private key
...............................................................................................................+++++
..............................................................................................................+++++
writing new private key to '/var/lib/shim-signed/mok/MOK.priv'
-----
Secure Boot not enabled on this system.
Done.

zfs.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/5.15.0-1067-gcp/updates/dkms/

spl.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/5.15.0-1067-gcp/updates/dkms/

depmod...

DKMS: install completed.
Setting up g++-9 (9.4.0-1ubuntu1~20.04.2) ...
Setting up g++ (4:9.3.0-1ubuntu2) ...
update-alternatives: using /usr/bin/g++ to provide /usr/bin/c++ (c++) in auto mode
Setting up build-essential (12.8ubuntu1.1) ...
Processing triggers for initramfs-tools (0.136ubuntu6.7) ...
update-initramfs: Generating /boot/initrd.img-5.15.0-1067-gcp
Processing triggers for libc-bin (2.31-0ubuntu9.16) ...
Processing triggers for systemd (245.4-4ubuntu3.23) ...
Processing triggers for man-db (2.9.1-1) ...
Setting up openzfs-zfs-zed (2.2.5-amd64-202408071636-ubuntu20.04) ...
Created symlink /etc/systemd/system/zed.service → /lib/systemd/system/zfs-zed.service.
Created symlink /etc/systemd/system/zfs.target.wants/zfs-zed.service → /lib/systemd/system/zfs-zed.service.
Setting up openzfs-zfs-initramfs (2.2.5-amd64-202408071636-ubuntu20.04) ...
update-initramfs: deferring update (trigger activated)
Processing triggers for systemd (245.4-4ubuntu3.23) ...
Processing triggers for initramfs-tools (0.136ubuntu6.7) ...
update-initramfs: Generating /boot/initrd.img-5.15.0-1067-gcp

This warning came along, which is expected I think.

I have another physical server where Ubuntu 20.04.6 is on 5.4.0-193-generic. I will test things out there as per your suggestion.

@simos

Got this while installing OpenZFS from the repo on a VM created on Incus. The installation process halted for some time at Building initial module for 5.4.0-193-generic and then this appeared. Once I proceeded, the installation went fine and I could see lsmod showing the ZFS modules. I tried rebooting the VM via incus restart. After that the VM did not come up for 5 to 6 minutes (VM agent isn't currently running).
Eventually it came up, but I don't see any ZFS module loaded now at all.

edit - when I run any zfs command, it fails with:

Failed to initialize the libzfs library.
dmesg -T
Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7

The physical server I have may have Secure Boot. It's remote and I can't switch to legacy BIOS. There is no IPMI or any other remote management tool available there; I just have plain SSH access to it.

Incus runs its VMs with SecureBoot enabled by default, so you may need to set security.secureboot=false on the VM.
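
For example (the VM name is a placeholder):

incus config set myvm security.secureboot=false
incus restart myvm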

Here’s me launching an Ubuntu 20.04 LTS VM in Incus, checking the current version of ZFS and then installing ZFS from the Zabbly ZFS repository. Indeed, there’s an issue with SecureBoot. However, I used the workaround and did not disable SecureBoot.

$ incus launch images:ubuntu/20.04/cloud OldUbuntuNewZFS --vm
Launching OldUbuntuNewZFS
$ incus ubuntu OldUbuntuNewZFS
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@OldUbuntuNewZFS:~$ modinfo zfs | head
filename:       /lib/modules/5.4.0-193-generic/kernel/zfs/zfs.ko
version:        0.8.3-1ubuntu12.17
license:        CDDL
author:         OpenZFS on Linux
description:    ZFS
alias:          devname:zfs
alias:          char-major-10-249
srcversion:     3CA966C3C34DC2BFBA99C85
depends:        spl,znvpair,icp,zlua,zunicode,zcommon,zavl
retpoline:      Y
ubuntu@OldUbuntuNewZFS:~$ sudo apt install zfsutils-linux
...
ubuntu@OldUbuntuNewZFS:~$ zfs --version
zfs-0.8.3-1ubuntu12.18
zfs-kmod-0.8.3-1ubuntu12.17
ubuntu@OldUbuntuNewZFS:~$

<<<here install ZFS from the Zabbly repository>>>

ubuntu@OldUbuntuNewZFS:~$ sudo apt remove zfsutils-linux
...
ubuntu@OldUbuntuNewZFS:~$ sudo apt-get install openzfs-zfsutils openzfs-zfs-dkms openzfs-zfs-initramfs
...

You are asked to set up a special ‘password’ for SecureBoot (the MOK enrolment seen in the log above) that you must enter when you reboot the VM. By giving the same special password, the system accepts your changes to those kernel modules and you do not need to disable SecureBoot.
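
A couple of optional checks before that reboot (a sketch; output will vary by system):

mokutil --sb-state    # whether SecureBoot is enabled in the guest
mokutil --list-new    # the key(s) queued for enrolment on the next boot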

ubuntu@OldUbuntuNewZFS:~$ logout
$ incus stop OldUbuntuNewZFS
$ incus start OldUbuntuNewZFS --console
To detach from the console, press: <ctrl>+a q

<<< Here give the special password that was set just previously.>>>


$ incus ubuntu OldUbuntuNewZFS
ubuntu@OldUbuntuNewZFS:~$ zfs list
no datasets available
ubuntu@OldUbuntuNewZFS:~$ zfs --version
zfs-2.2.5-1
zfs-kmod-2.2.5-1
ubuntu@OldUbuntuNewZFS:~$ modinfo zfs | head
filename:       /lib/modules/5.4.0-193-generic/updates/dkms/zfs.ko
version:        2.2.5-1
license:        CDDL
license:        Dual BSD/GPL
license:        Dual MIT/GPL
author:         OpenZFS
description:    ZFS
alias:          zzstd
alias:          zcommon
alias:          zunicode
ubuntu@OldUbuntuNewZFS:~$

Thanks for the workaround.

However, as I hinted earlier, my physical server is remote and without any IPMI etc., so doing this there might be tricky.

Having said this - I upgraded 20.04 to 22.04 to 24.04. Ubuntu Noble gives me zfs 2.2.2, where we don't have the overlayfs vs ZFS issue.
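
A quick hedged check that overlayfs now accepts a ZFS-backed upperdir (the paths are placeholders under an existing dataset):

sudo mkdir -p /tank/test/{lower,upper,work,merged}
sudo mount -t overlay overlay -o lowerdir=/tank/test/lower,upperdir=/tank/test/upper,workdir=/tank/test/work /tank/test/merged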

Here is how my ZFS storage looks now on that machine:

incus storage show zdata
config:
  source: tank/incus
  volatile.initial_source: tank/incus
  zfs.pool_name: tank/incus
description: ""
name: zdata
driver: zfs

root@chennai20:~/lab/k8s# zfs list
NAME                                                                                 USED  AVAIL  REFER  MOUNTPOINT
tank                                                                                4.22G   535G    24K  /tank
tank/incus                                                                          4.22G   296G    24K  legacy
tank/incus/buckets                                                                    24K   296G    24K  legacy
tank/incus/containers                                                               3.93G   296G    24K  legacy
tank/incus/containers/k8s                                                            289M   296G   563M  legacy
tank/incus/containers/master1                                                       1.04G   296G  1.31G  legacy
tank/incus/containers/master2                                                       1.00G   296G  1.27G  legacy
tank/incus/containers/master3                                                       1.00G   296G  1.27G  legacy
tank/incus/containers/worker1                                                        618M   296G   892M  legacy
tank/incus/custom                                                                     24K   296G    24K  legacy
tank/incus/deleted                                                                   144K   296G    24K  legacy
tank/incus/deleted/buckets                                                            24K   296G    24K  legacy
tank/incus/deleted/containers                                                         24K   296G    24K  legacy
tank/incus/deleted/custom                                                             24K   296G    24K  legacy
tank/incus/deleted/images                                                             24K   296G    24K  legacy
tank/incus/deleted/virtual-machines                                                   24K   296G    24K  legacy
tank/incus/images                                                                    295M   296G    24K  legacy
tank/incus/images/078d35fa750fbdfc2d3a1b9999a2de8186be89900a1ec0252b10380c79f3593c   295M   296G   295M  legacy
tank/incus/virtual-machines                                                           24K   296G    24K  legacy
tank/test                                                                             24K   535G    24K  /tank/test

Yes, that would have been tricky.

Excellent!