Block device --type block vs zfs.block_mode

What is the difference between:

lxc storage volume create poolname volumename --type=block
lxc storage volume create poolname volumename zfs.block_mode=true
lxc storage volume create poolname volumename --type=block zfs.block_mode=true
  1. Creates a block custom volume. It won’t be formatted by LXD and can currently only be exposed as a raw disk to a VM.
  2. Creates a filesystem custom volume backed by a ZFS volume (zvol) instead of a ZFS dataset. It can be attached to containers or VMs; in the case of a VM it shows up as a network share.
  3. Same as 1), as all block custom volumes on ZFS use a ZFS volume regardless of that config option.
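
To make the distinction concrete, here is a minimal sketch of creating and attaching each type, assuming an existing VM named vm1 and a container named c1 (hypothetical names):

# --type=block: raw, unformatted disk, attachable to a VM only (no mount path)
lxc storage volume create poolname rawvol --type=block
lxc storage volume attach poolname rawvol vm1

# zfs.block_mode=true: filesystem volume backed by a zvol, mountable in a container or a VM
lxc storage volume create poolname fsvol zfs.block_mode=true
lxc storage volume attach poolname fsvol c1 /mnt/data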

Thanks for the clarification.
Still a bit confused by the partly redundant ways of creating storage and attaching it to an instance.

Is the purpose of zfs.block_mode that the volume can be preformatted, saving some steps when, for example, running Docker inside a container?

Understandably, storage creation and mounting behave differently for containers and VMs; these are errors I often run into:

Container: Custom block volumes cannot be used on containers
VM: Custom block volumes cannot have a path defined
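
For reference, these are roughly the combinations that produce those messages for me (c1 being a container and vm1 a VM, hypothetical names):

# Fails: a --type=block volume cannot be attached to a container
lxc storage volume attach poolname rawvol c1 /mnt/data

# Fails: attached to a VM it is a raw disk, so no mount path may be given
lxc storage volume attach poolname rawvol vm1 /mnt/data

# Works: attach to the VM as a raw disk, without a path
lxc storage volume attach poolname rawvol vm1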

It would help to see what is and isn’t allowed for containers vs. VMs in a table view.

The docs mention this, but the number of storage creation and mount options keeps growing, and a cross-reference of each option against container/VM support would be helpful.

The main purpose is to work around cases where the ZFS filesystem causes issues with applications; running Docker is one of those cases, running an Android container is another.
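
For the Docker case, a minimal sketch might look like this (docker1 is a hypothetical container name, and block.filesystem is only needed if you want a specific filesystem rather than the driver default):

# Filesystem volume backed by a zvol, formatted with ext4 instead of being a ZFS dataset
lxc storage volume create poolname dockerdata zfs.block_mode=true block.filesystem=ext4

# Mount it where Docker keeps its data, so overlayfs runs on ext4 rather than on ZFS
lxc storage volume attach poolname dockerdata docker1 /var/lib/docker

# Docker inside a container typically also needs nesting enabled
lxc config set docker1 security.nesting=true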

Using idmapped mounts on top of ZFS is another use case (until idmapped mounts are supported by ZFS proper).
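
For example, sharing one volume between containers with shifted IDs relies on idmapped mounts, so a sketch of that on a ZFS pool would use block mode (security.shifted and the container names c1/c2 are assumptions here, not something from this thread):

# Filesystem volume on a zvol with a regular filesystem, so idmapped mounts are available
lxc storage volume create poolname shared zfs.block_mode=true

# Allow the volume to be attached to several containers with ID shifting
lxc storage volume set poolname shared security.shifted=true
lxc storage volume attach poolname shared c1 /mnt/shared
lxc storage volume attach poolname shared c2 /mnt/shared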

Ah yeah, true, I don’t have that limitation here :slight_smile: