Using encrypted disks in IncusOS

Is there a way to automatically unlock LUKS-encrypted disks in IncusOS so that they can be used by applications such as Linstor and Ceph?

According to the documentation, IncusOS encrypts the swap and system data partitions and stores the encryption keys in the TPM. Is it possible to store encryption keys for other disks in the same way? I was able to unlock a disk from a privileged container, but I was not able to store the passphrase in the TPM from within the container.

Not currently, but it shouldn’t be too bad to add to the storage API.

I think we’d effectively need:

  • New field in state to indicate whether a LUKS key is available for a given disk
  • Store LUKS keys alongside the ZFS pool keys on the root disk
  • Offer recovery of the LUKS keys, similar to what we do for ZFS
  • Automatically unlock LUKS on any disk that we have a key for
  • Expose an API to trigger luksFormat
  • Expose an API to provide LUKS key for an existing block device

We could in theory use the TPM to unlock those directly, but that’s actually quite a bit more complicated to get right and not really needed in this situation. It’s much easier for us to generate a random key, store it on the LUKS-encrypted ext4 partition and use that to unlock the storage.

That’s the same daisy-chaining we do for ZFS, where it’s not itself unlocked by a TPM secret but instead relies on the root filesystem itself being unlocked through the TPM, therefore releasing the encryption key of the ZFS pool.
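That daisy-chaining can be sketched in shell. The paths and device names below are illustrative, not actual IncusOS internals; the key directory stands in for the already-unlocked system data partition:

```shell
# Generate a random 512-bit key and store it on a filesystem that is
# itself already unlocked via the TPM (directory path is illustrative):
KEYDIR=/tmp/luks-keys
install -d -m 700 "$KEYDIR"
head -c 64 /dev/urandom > "$KEYDIR/disk1.key"
chmod 600 "$KEYDIR/disk1.key"

# The key would then be used to format and open the extra disk, roughly
# as follows (requires root and a real block device, so shown as comments):
#   cryptsetup luksFormat --key-file "$KEYDIR/disk1.key" /dev/sdb
#   cryptsetup open --key-file "$KEYDIR/disk1.key" /dev/sdb disk1
```

So the extra disk is never unlocked by the TPM directly; its key only ever lives on storage that the TPM already protects.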

@gibmat any thoughts?

The extended API would certainly be welcome. However, after thinking about it for a while, I am not sure that managing disks is a direct responsibility of IncusOS. In addition to disk encryption, there can be other disk configurations, such as cache tiering or software RAID, possibly stacked on top of one another. For example, we use LUKS encryption on top of Bcache devices. I think managing all such configurations through the IncusOS API is unrealistic.

As mentioned, it is possible to unlock an encrypted disk from a privileged container, so, as long as it is conceptually OK to “poke” inside IncusOS, one can use such temporary containers to manage storage devices and other hardware. I just tested this, and the linstor-satellite services appear to be fine even if the disk for the storage volume only becomes available after the service has started. Of course, one should probably start these management containers with a higher priority so that the remote storage is available before other containers use it.
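For reference, the privileged-container approach can look roughly like this (the image, container name, device path and key file location are just examples, and the key file is assumed to already exist inside the container):

```shell
# Privileged container with access to the raw disk:
incus launch images:debian/12 storage-mgmt -c security.privileged=true
incus config device add storage-mgmt data-disk unix-block source=/dev/sdb path=/dev/sdb

# Unlock the disk from inside the container; the resulting device-mapper
# device is created by the kernel and so also shows up on the host:
incus exec storage-mgmt -- cryptsetup open --key-file /root/disk1.key /dev/sdb data1
```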

I have a related question about the design of IncusOS applications. It looks like incus-linstor provides the linstor-satellite services, but incus-ceph does not provide the equivalent ceph-osd services, presumably assuming that they should run in containers. Can one similarly install and run the linstor-satellite services in containers, without using the incus-linstor application at all? Running the satellite services in containers could be more reliable if the storage devices need to be initialized before the services start. It would also make it easier to keep the versions of the satellites and controllers in sync. Or is it planned to add other services, such as linstor-controller, to the incus-linstor application to provide a complete solution?

incus-linstor is required even if the machine won’t provide any storage to the Linstor cluster. The satellite must run on any system that’s either providing storage to the Linstor cluster or is acting as a client for the Linstor cluster. That’s because the satellite is what’s responsible for setting up the various DRBD connections between the servers which are then used as the block device under Incus instances or volumes.

If Linstor had the same kind of separation as Ceph does, then we’d most likely have done the same and pushed the drives to be handled by containers instead.