How to extend storage with a second NVMe drive (zfs-raid0)

Hi,

I have a server with two NVMe drives. I installed the system on the smaller one and wanted to either extend the ZFS pool “local” with the second drive, or create a second pool; whatever works would be good enough for my needs.

Unfortunately, I can’t get either to work.

$ incus admin os system storage show
WARNING: The IncusOS API and configuration is subject to change

config:
  scrub_schedule: 0 4 * * 0
state:
  drives:
  - boot: true
    bus: nvme
    capacity_in_bytes: 2.56060514304e+11
    id: /dev/disk/by-id/nvme-INTEL_SSDPEKKF256G8L_PHHP940505N5256B
    member_pool: local
    model_family: ""
    model_name: INTEL SSDPEKKF256G8L
    multipath: false
    remote: false
    removable: false
    serial_number: PHHP940505N5256B
    smart:
      available_spare: 100
      data_units_read: 3.8664628e+07
      data_units_written: 2.4496491e+07
      enabled: true
      passed: true
      percentage_used: 7
      power_on_hours: 2748
  - boot: false
    bus: nvme
    capacity_in_bytes: 1.000204886016e+12
    id: /dev/disk/by-id/nvme-WDS100T1X0E-00AFY0_21469H467609
    model_family: ""
    model_name: WDS100T1X0E-00AFY0
    multipath: false
    remote: false
    removable: false
    serial_number: 21469H467609
    smart:
      available_spare: 100
      data_units_read: 4.9276713e+07
      data_units_written: 5.3905075e+07
      enabled: true
      passed: true
      percentage_used: 1
      power_on_hours: 6879
  pools:
  - devices:
    - /dev/disk/by-id/nvme-INTEL_SSDPEKKF256G8L_PHHP940505N5256B-part11
    encryption_key_status: available
    name: local
    pool_allocated_space_in_bytes: 4.734976e+06
    raw_pool_size_in_bytes: 2.19043332096e+11
    state: ONLINE
    type: zfs-raid0
    usable_pool_size_in_bytes: 2.19043332096e+11
    volumes:
    - name: incus
      quota_in_bytes: 0
      usage_in_bytes: 2.965504e+06
      use: incus

I edited the pool definition and added this line, to no effect:

- /dev/disk/by-id/nvme-WDS100T1X0E-00AFY0_21469H467609

Then I edited the pool and added these lines to create a new pool:

    - name: my-pool
      type: zfs-raid0
      devices:
      - /dev/disk/by-id/nvme-WDS100T1X0E-00AFY0_21469H467609

This had no effect either.

As that didn’t work, I tried to wipe the drive as explained in https://linuxcontainers.org/incus-os/docs/main/reference/system/storage/#wiping-a-drive, but I don’t know whether it had any effect.

My Incus client version:

incus --version
6.19.1

Any help understanding what is happening would be appreciated ^^

edit: no more luck with client 6.23.

The state is always read-only.

To add a new pool, you need to edit the config part and do something like:

config:
  pools:
    - name: my-pool
      type: zfs-raid0
      devices:
      - /dev/disk/by-id/nvme-WDS100T1X0E-00AFY0_21469H467609

It seems I got it to work somehow? It doesn’t feel natural to me, but when editing the configuration I had to delete all the extra information so the YAML structure looked like this:

config:
  scrub_schedule: 0 4 * * 0
  pools:
  - devices:
    - /dev/disk/by-id/nvme-INTEL_SSDPEKKF256G8L_PHHP940505N5256B-part11
    - /dev/disk/by-id/nvme-WDS100T1X0E-00AFY0_21469H467609
    name: local
    state: ONLINE
    type: zfs-raid0

You didn’t need to remove any of the state stuff; it just won’t do anything if left there.

But indeed, to grow your existing pool, you’d want to do what you did above (note that the state field within that pool definition is not part of a valid configuration and will be ignored).
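
For reference, a sketch of that grow configuration pieced together from the configs shown in this thread (not verified output), with the state field dropped since it is ignored anyway:

config:
  scrub_schedule: 0 4 * * 0
  pools:
  - name: local
    type: zfs-raid0
    devices:
    # existing boot-drive partition already in the pool
    - /dev/disk/by-id/nvme-INTEL_SSDPEKKF256G8L_PHHP940505N5256B-part11
    # second drive being added to grow the zfs-raid0 pool
    - /dev/disk/by-id/nvme-WDS100T1X0E-00AFY0_21469H467609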

In a previous attempt, I used incus admin os system storage edit and only added this line in the right place:

    - /dev/disk/by-id/nvme-WDS100T1X0E-00AFY0_21469H467609

I don’t really understand why it didn’t work, since you say all extra information should be discarded?

I think I had a similar problem when editing the network to add a new interface so containers could be attached to the LAN. So the pattern seems consistent.

incus admin os system storage edit wouldn’t have shown the pool under the config section, only under the state section, and as mentioned, the state is read-only, so your change was ignored.
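
Roughly, the two top-level sections in that YAML behave like this (an illustrative sketch, trimmed down from the output earlier in the thread):

config:   # editable: pools are defined here and changes are applied
  scrub_schedule: 0 4 * * 0
  pools:
  - name: local
    type: zfs-raid0
    devices:
    - /dev/disk/by-id/nvme-WDS100T1X0E-00AFY0_21469H467609
state:    # read-only: status reported back by IncusOS; edits here are ignored
  drives: []
  pools: []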

Oh, I get it: the pool was under state: in the YAML instead of config:. It makes sense now!

Thank you!