Edit: solved, see the comments below.

I have attempted to configure Incus to use an existing, encrypted ZFS dataset as its default storage pool. I had configured ZFS to automatically unlock and mount both the zpool and the dataset on boot. After running `incus admin init` and selecting the dataset for the default storage pool, the dataset no longer mounts on boot and it isn't available for use by Incus. In addition, many Incus commands fail with `Error: The remote isn't a private server` (solved; see below), which is confusing because I don't think I'm telling it to use a remote.
System Information
- OS: Arch Linux
- kernel: 6.6.36 (LTS)
- ZFS version: 2.2.4-1
- Incus version: 6.2
Setup Details
ZFS and Encryption:
- zpool called `data`, mounted at `/data`
- Encrypted ZFS dataset called `incus`, mounted at `/data/incus`
- The dataset automounts via `zfs-mount.service`
- The key for the dataset is provided by a custom template service, `zfs-load-key@.service`, which takes the `zpool/dataset` to load the key for as its instance parameter (a sketch of the unit follows this list)
  - In this case the parameter is `data-incus` (escaped, `%i`) or `data/incus` (unescaped, `%I`)
  - The key is accessed through systemd credentials, available to the running service in `$CREDENTIALS_DIRECTORY`
  - The credential is named `zfs-%i-key` (in this case `zfs-data-incus-key`)
  - The service runs `zfs load-key -L file://${CREDENTIALS_DIRECTORY}/zfs-%i-key %I`
  - The service is ordered `After=` and `Requires=` `zfs-import.target`, and is `WantedBy=` `zfs-mount.service`
- The original encryption setup used a keyfile in `/tmp`, which is not available after a reboot, so the dataset can only automount if the above service works correctly.
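Roughly, the unit looks like this (simplified; the credential wiring via `LoadCredential=` is approximate, the rest mirrors the bullets above):

```sh
# Approximate shape of the key-loading template unit described above.
cat > /etc/systemd/system/zfs-load-key@.service <<'EOF'
[Unit]
Description=Load ZFS encryption key for %I
After=zfs-import.target
Requires=zfs-import.target

[Service]
Type=oneshot
RemainAfterExit=yes
# Credential is looked up by name in the system credential store (approximate)
LoadCredential=zfs-%i-key
# $$ keeps ${CREDENTIALS_DIRECTORY} for the shell; %i/%I are expanded by systemd
ExecStart=/bin/sh -c 'zfs load-key -L "file://$${CREDENTIALS_DIRECTORY}/zfs-%i-key" %I'

[Install]
WantedBy=zfs-mount.service
EOF
```

It is enabled as `zfs-load-key@data-incus.service`, so the wants-link ends up under `zfs-mount.service.wants/`.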
Incus:
- ran `incus admin init`
- chose to use the existing ZFS dataset `data/incus`
Full Preseed from `incus admin init`
```yaml
config: {}
networks:
- config:
    ipv4.address: auto
    ipv6.address: auto
  description: ""
  name: incusbr0
  type: ""
  project: default
storage_pools:
- config:
    source: data/incus
  description: ""
  name: default
  driver: zfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      network: incusbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
projects: []
cluster: null
```
Full config from `incus admin init --dump`
```yaml
config:
  images.auto_update_interval: "0"
networks:
- config:
    ipv4.address: redacted
    ipv4.nat: "true"
    ipv6.address: redacted
    ipv6.nat: "true"
  description: ""
  name: incusbr0
  type: bridge
  project: default
storage_pools:
- config:
    source: data/incus
    volatile.initial_source: data/incus
    zfs.pool_name: data/incus
  description: ""
  name: default
  driver: zfs
profiles:
- config: {}
  description: Default Incus profile
  devices:
    eth0:
      name: eth0
      network: incusbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
projects:
- config:
    features.images: "true"
    features.networks: "true"
    features.networks.zones: "true"
    features.profiles: "true"
    features.storage.buckets: "true"
    features.storage.volumes: "true"
  description: Default Incus project
  name: default
```
Other notes:
- Before rebooting, I noticed that several sub-datasets were created under `data/incus`: `buckets`, `containers`, `custom`, and so on
- Before `incus admin init`, the `data/incus` ZFS dataset had its mountpoint set to `/data/incus`, but afterwards it is `legacy`
- All the sub-datasets have mountpoint `legacy`
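These observations can be confirmed with standard ZFS property queries, e.g.:

```sh
# List the dataset tree with its mount state and mountpoints
zfs list -r -o name,mounted,mountpoint data/incus

# Check where the mountpoint property comes from and whether the key is loaded
zfs get -r mountpoint,keystatus,encryptionroot data/incus
```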
Tried so far:
- manually mounting the dataset with `zfs mount`
  - fails because the mountpoint is `legacy`; the error says to use the generic `mount` instead
- redoing `incus admin init`
  - can't create a storage pool called `default` because it already exists
- deleting the `default` storage pool
  - have to add `--force-local` to avoid `Error: The remote isn't a private server`
  - fails because the storage pool is currently in use
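Put together, the attempts above correspond to commands like these (a reconstruction of what I ran, with the outcomes summarised from the list above rather than verbatim output):

```sh
# 1. Mount the dataset directly: refused because its mountpoint is now "legacy"
zfs mount data/incus

# The error suggests the generic mount tool instead; presumably something like
# this, though it's unclear that's appropriate for an Incus-managed pool
mount -t zfs data/incus /data/incus

# 2. Re-run the wizard: fails because a storage pool named "default" already exists
incus admin init

# 3. Delete the pool so it can be recreated; --force-local avoids
#    "Error: The remote isn't a private server", but deletion still fails
#    because the pool is in use
incus storage delete default --force-local
```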
Hypotheses:
- the `zfs-load-key` service runs after Incus tries to mount the dataset for the storage pool
  - so I need a different `WantedBy`, but what should it be? (one possible drop-in is sketched after this list)
- I need to configure mounts (via `/etc/fstab`) for the datasets (but to where?)
- I should have added the storage pool manually, outside of `incus admin init`
  - I could still recreate everything and do this, but it isn't obvious that it would help, and it would be tedious
- Incus shouldn't have set the mountpoints to `legacy`, and I should set them back to what they were before
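For the first hypothesis, the kind of change I have in mind is a drop-in that orders the Incus daemon after the key-load service. This is untested and assumes the daemon unit is `incus.service` on Arch:

```sh
# Untested sketch: make incus.service wait until the key for data/incus is loaded
mkdir -p /etc/systemd/system/incus.service.d
cat > /etc/systemd/system/incus.service.d/zfs-key.conf <<'EOF'
[Unit]
Requires=zfs-load-key@data-incus.service
After=zfs-load-key@data-incus.service
EOF
systemctl daemon-reload
```

Whether that alone would be enough, given that Incus appears to manage these mounts itself now that the mountpoints are `legacy`, is part of what I'm asking.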
How can I configure Incus to auto-mount an encrypted ZFS dataset for its default storage pool?