Out of curiosity I tried closing the lid, but it suspends the laptop. Disabling that (HandleLidSwitch=ignore in /etc/systemd/logind.conf IIRC) could be a workaround for switching off the screen, slightly suboptimal because of the worse thermals, but well…
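For reference, the relevant logind.conf fragment would look like this (assuming a standard systemd setup; restarting systemd-logind or rebooting is needed for it to take effect):

```ini
# /etc/systemd/logind.conf
[Login]
HandleLidSwitch=ignore
```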
Ah yeah, we definitely can and should do that one!
^ @gibmat want to send a quick PR for it?
Thanks, that was pretty interesting to see documented! I had gathered some parts of it from looking at the USB stick after formatting it with the raw .img, but getting this info was the exact reason I was looking for a way to run lsblk on IncusOS (and incus admin os show resources doesn’t list the usage of each partition)!
If I may, I was initially just looking for a way to add the second drive to the default raid0 pool.
I’m really grateful you are both evaluating changes to support the mirroring use case (which is more common and sensible given the reliability it provides). On my side, I’m currently investigating destroying/deleting the local pool so I can recreate it with both drives added, but the backup and images volumes seem to get in the way (at least I couldn’t delete them in the web UI). I’ll report back if I find a way around that.
@gibmat I guess a reference doc page on the partitioning scheme would be good to add.
Incus technically gets provided with a dataset on the local pool (local/incus), so deleting the pool from Incus will only delete the dataset; it will not destroy the zpool and so won’t let you re-use the partition.
It’s worth noting that we strongly recommend doing all storage configuration through IncusOS, as doing so means that IncusOS will set up encryption for you on all the ZFS pools and chain that encryption key with the TPM. Incus itself doesn’t know how to create encrypted pools, so anything you create that way will not be secure.
There’s actually a check to prevent this: incus-os/incus-osd/internal/zfs/zfs.go at main · lxc/incus-os · GitHub
I think trying to load an unencrypted zpool will actually produce an error at startup, because IncusOS expects a raw encryption key to be present for each ZFS pool that it tries to import. And I don’t think we should change that to allow unencrypted ZFS pools since that will weaken the security of the system if data accidentally leaks from an encrypted pool to an unencrypted one.
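Roughly, the kind of pool incus-osd sets up corresponds to something like this (a sketch only; the device path is hypothetical, and the real key is generated by IncusOS and sealed against the TPM rather than kept in /tmp):

```shell
# Generate a 32-byte raw key (IncusOS generates and TPM-seals the real one).
dd if=/dev/urandom of=/tmp/local.key bs=32 count=1

# Create a pool whose root dataset is encrypted with that raw key.
zpool create -O encryption=aes-256-gcm \
    -O keyformat=raw -O keylocation=file:///tmp/local.key \
    local /dev/disk/by-id/nvme-EXAMPLE-part11

# On import, the raw key must be available or the datasets stay locked,
# which is why an unencrypted (or key-less) pool fails at startup.
zpool import local
zfs load-key -a
```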
(Sorry for the delay, I got hit by the “You wrote too many replies on your first day” limit)
Right, if I really wanted to recreate the zpool I would have to delete what’s underlying Incus’ root partition… Not knowing exactly how the system is laid out, I thought the OS root might be in one of the 1 GB partitions (in a separate zpool). So the initial zpool is made of this partition plus all the extra/free space?
I wouldn’t even consider doing anything storage-related outside of Incus/IncusOS (and anyway I don’t think I could, could I?); I was planning to do all this through the API. That’s the goal after all: to exercise this system, see what the current “limits”/design decisions are, and whether they should be extended or not!
For me it will be more than enough to stop messing around with the default setup and just create a second pool with the second drive.
Sorry about that, we’re pretty regularly hit by waves of spam bots, so we rely on Discourse’s trust level system to relax restrictions as folks get more and more active, while being quite aggressive on new accounts to avoid spam.
@gibmat added Partitioning scheme - IncusOS documentation yesterday evening which covers the general layout.
The local zpool is empty except for the incus dataset which we pass as the Incus local storage pool.
The advantage of doing things that way is that it’s possible for additional Incus storage pools to be created on the zpool, for example local/incus1 or something like that which could be used alongside projects to segment the volumes on the Incus side.
It also technically allows us to put more non-Incus data on it, though we’ve currently not had any need for that given we have our own 25GB of OS persistent storage we can use already (as ext4). The main reason we may have for wanting to use some of the local zpool for non-Incus stuff is if we want to use some of the ZFS features for that data (I’m thinking snapshot specifically).
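A sketch of that segmentation idea, assuming hypothetical names (local1, incus1) and using Incus’s existing support for pointing a ZFS pool at a pre-existing dataset via source:

```shell
# Create an extra dataset on the (already encrypted) local zpool.
zfs create local/incus1

# Register it with Incus as a second storage pool backed by that dataset.
incus storage create local1 zfs source=local/incus1

# Volumes created against local1 now live under local/incus1,
# keeping them segmented from the default local pool.
incus storage volume create local1 example-volume
```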
No problem, it makes perfect sense. Actually 17 replies was already pretty good, but if I had known I would have grouped them up a bit more!
Oh the partitioning doc is really good, thanks! It brings clarity and answers questions I had and couldn’t verify myself. Nice!
Something I’m still confused about is whether the root partition is using ZFS itself, or ext4, or EROFS…?
/ is the encrypted ext4 partition, but it doesn’t contain the OS; the OS is all in /usr, which is EROFS.
So the image as you download it only has the EFI partition and the first three /usr partitions populated. All the other partitions get created on first boot; that’s when the / partition gets created, formatted and encrypted with the TPM.
Then the ZFS pool itself is created just before Incus gets installed and started on first boot.
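On a booted system this split can be sanity-checked with findmnt (a sketch; exact device names vary per install):

```shell
# Filesystem type of the root partition (the encrypted ext4 one).
findmnt -no FSTYPE /

# Filesystem type of the OS image (the read-only EROFS one).
findmnt -no FSTYPE /usr
```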
@stgraber I’m not sure I understand: expanding the local pool to another device isn’t allowed, but isn’t that what the docs mean when they say (Expanding the “local” storage pool - IncusOS documentation):
The second drive must be the same size as the main system drive
The pool capacity will be ~35GiB less than the size of the drives due to partitioning layout on the main system drive
We’ve recently added support for expanding the local storage pool to other devices.
Hi @stgraber
I can’t seem to expand the local pool with the second drive. Here’s the default config after install:
incus admin os system storage show
WARNING: The IncusOS API and configuration is subject to change
config:
  scrub_schedule: 0 4 * * 0
state:
  drives:
  - boot: false
    bus: nvme
    capacity_in_bytes: 1.024209543168e+12
    id: /dev/disk/by-id/nvme-PC611_NVMe_SK_hynix_1TB_CD07N8499102Y832V
    model_family: ""
    model_name: PC611 NVMe SK hynix 1TB
    multipath: false
    remote: false
    removable: false
    serial_number: CD07N8499102Y832V
    smart:
      available_spare: 100
      data_units_read: 2.9844359e+07
      data_units_written: 4.5520458e+07
      enabled: true
      passed: true
      power_on_hours: 8604
  - boot: true
    bus: nvme
    capacity_in_bytes: 1.024209543168e+12
    id: /dev/disk/by-id/nvme-PC711_NVMe_SK_hynix_1TB__KNA8N42181070865T
    member_pool: local
    model_family: ""
    model_name: PC711 NVMe SK hynix 1TB
    multipath: false
    remote: false
    removable: false
    serial_number: KNA8N42181070865T
    smart:
      available_spare: 100
      data_units_read: 9.788769e+06
      data_units_written: 1.5286339e+07
      enabled: true
      passed: true
      power_on_hours: 5763
  pools:
  - devices:
    - /dev/disk/by-id/nvme-PC711_NVMe_SK_hynix_1TB__KNA8N42181070865T-part11
    encryption_key_status: available
    name: local
    pool_allocated_space_in_bytes: 4.722688e+06
    raw_pool_size_in_bytes: 9.8784247808e+11
    state: ONLINE
    type: zfs-raid0
    usable_pool_size_in_bytes: 9.8784247808e+11
    volumes:
    - name: incus
      quota_in_bytes: 0
      usage_in_bytes: 2.965504e+06
      use: incus
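Incidentally, the numbers above line up with the documented “~35 GiB less” rule of thumb: the raw pool size is the boot drive’s capacity minus the space taken by the other partitions. A quick back-of-the-envelope check with the values from the output:

```shell
# Difference between boot drive capacity and raw pool size, in GiB.
awk 'BEGIN { printf "%.1f GiB\n", (1024209543168 - 987842478080) / 2^30 }'
# prints: 33.9 GiB
```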
I edit it with incus admin os system storage edit to look like this:
<snip>
pools:
- devices:
  - /dev/disk/by-id/nvme-PC711_NVMe_SK_hynix_1TB__KNA8N42181070865T-part11
  - /dev/disk/by-id/nvme-PC611_NVMe_SK_hynix_1TB_CD07N8499102Y832V
<snip>
To no avail, incus admin os system storage show still shows exactly the same thing.
I’ve also tried doing it as follows to match the first example given in the RAID0 docs:
pools:
- name: local
  devices:
  - /dev/disk/by-id/nvme-PC711_NVMe_SK_hynix_1TB__KNA8N42181070865T-part11
  - /dev/disk/by-id/nvme-PC611_NVMe_SK_hynix_1TB_CD07N8499102Y832V
  encryption_key_status: available
  pool_allocated_space_in_bytes: 4.7104e+06
  raw_pool_size_in_bytes: 9.8784247808e+11
  state: ONLINE
  type: zfs-raid0
  usable_pool_size_in_bytes: 9.8784247808e+11
  volumes:
  - name: incus
    quota_in_bytes: 0
    usage_in_bytes: 2.965504e+06
    use: incus
But no difference.
I also see nothing related in incus admin os debug log.
Any pointers?
Nevermind, I stumbled upon RAID-1 for boot drive - #17 by gibmat which put me on the right track.
The correct format for incus admin os system storage edit was (full excerpt):
config:
  scrub_schedule: 0 4 * * 0
pools:
- name: local
  devices:
  - /dev/disk/by-id/nvme-PC711_NVMe_SK_hynix_1TB__KNA8N42181070865T-part11
  - /dev/disk/by-id/nvme-PC611_NVMe_SK_hynix_1TB_CD07N8499102Y832V
  type: zfs-raid0
Nothing more, nothing less.
scrub_schedule is needed, and pools must sit outside of a state key.
After that my second drive was instantly added to the pool and everything works well!
Would you be so kind as to update the documentation in Expanding the “local” storage pool - IncusOS documentation with this information whenever you have the time?
Thanks!
@gibmat ^