I’m new to the Incus community and this is my first post here. I’m running IncusOS on bare metal hardware and would like to connect to a remote NVMe storage device.
So far, I’ve enabled the NVMe service, but I’m unsure about the next steps to actually connect and use the remote storage.
Could someone guide me on how to proceed?
```
incus admin os service show nvme
config:
  enabled: true
  targets:
  - address: 192.168.0.83
    port: 4220
    transport: tcp
state:
  host_id: 5df03ed6-6119-44bd-befe-613699c79ca2
  host_nqn: nqn.2014-08.org.nvmexpress:uuid:03000200-0400-0500-0006-000700080009
```
Okay, so it looks like `nvme connect-all` against that target didn't detect anything.
You can try flipping the enabled setting to false and back to true to force another connect-all attempt.
You can also use incus admin os debug log to see if anything relevant is reported in the journal.
What's that NVMe target? If it's some kind of storage array, those often require specific volumes to be explicitly made visible to specific clients based on their NQN and Host ID (which is why those two values are shown in the output).
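As a sanity check, if you have access to any regular Linux machine with `nvme-cli` installed on the same network (an assumption here, since IncusOS itself doesn't give you a shell), you can confirm the target is reachable and actually exports subsystems, using the address and port from the service config above:

```shell
# Hypothetical check from a separate Linux box with nvme-cli installed;
# address/port taken from the "incus admin os service show nvme" output.
sudo nvme discover -t tcp -a 192.168.0.83 -s 4220
```

If the discovery log comes back empty, the problem is on the target side (masking/ACLs), not on the IncusOS side.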
Running `incus admin os service reset nvme` will make it a bit easier to trigger a new connection attempt, saving you from having to flip the `enabled` flag back and forth.
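Putting the suggestions above together, a retry-and-inspect cycle would look something like this (both commands are from this thread; nothing new is assumed):

```shell
# Force a fresh connection attempt without touching the config.
incus admin os service reset nvme

# Then check the journal for anything NVMe-related.
incus admin os debug log
```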
OK, I'm making progress: I can now see the remote disk. But creating the pool fails:

```
incus storage create nvme lvmcluster
Error: Failed to run: vgcreate nvme /dev/disk/by-id/lvm-pv-uuid-Jy79X8-jcWj-lW2I-R9DM-LW4p-Jofj-UDb4h4 --shared: exit status 5 (Invalid host_id 0, use 1-2000 (sanlock_align_size is 8MiB).
Failed to initialize lock args for lock type sanlock)
```
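For reference, on a conventional Linux host this sanlock error usually means lvmlockd has no host ID configured: sanlock requires every host sharing the VG to have a unique ID in the 1-2000 range, normally set in `/etc/lvm/lvmlocal.conf`. Whether and how IncusOS exposes this setting is an assumption on my part, but the underlying LVM configuration it maps to looks like:

```
# /etc/lvm/lvmlocal.conf (used by lvmlockd with the sanlock lock manager)
local {
    # Must be unique among all hosts sharing the VG; sanlock accepts 1-2000.
    # A value of 0 (the default) produces exactly the error shown above.
    host_id = 1
}
```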