Connecting to Remote NVMe Storage on IncusOS

Hi,

I’m new to the Incus community and this is my first post here. I’m running IncusOS on bare metal hardware and would like to connect to a remote NVMe storage device.

So far, I’ve enabled the NVMe service, but I’m unsure about the next steps to actually connect and use the remote storage.

Could someone guide me on how to proceed?

incus admin os service show nvme

config:
  enabled: true
  targets:
  - address: 192.168.0.83
    port: 4220
    transport: tcp
state:
  host_id: 5df03ed6-6119-44bd-befe-613699c79ca2
  host_nqn: nqn.2014-08.org.nvmexpress:uuid:03000200-0400-0500-0006-000700080009

Thanks!

Can you check incus admin system storage show to see if it now lists the remote drive(s)?

Here is the output

incus admin os system show storage

config: {}
state:
  drives:
  - boot: true
    bus: sat
    capacity_in_bytes: 2.56060514304e+11
    id: /dev/disk/by-id/ata-G537N1_256G_Y8XK002006T
    member_pool: local
    model_family: ""
    model_name: G537N1 256G
    remote: false
    removable: false
    serial_number: Y8XK002006T
    smart:
      enabled: true
      passed: true
  pools:
  - devices:
    - /dev/disk/by-id/ata-G537N1_256G_Y8XK002006T-part11
    encryption_key_status: available
    name: local
    pool_allocated_space_in_bytes: 4.460544e+06
    raw_pool_size_in_bytes: 2.19043332096e+11
    state: ONLINE
    type: zfs-raid0
    usable_pool_size_in_bytes: 2.19043332096e+11
    volumes:
    - name: incus
      quota_in_bytes: 0
      usage_in_bytes: 2.768896e+06
      use: '-'

Okay, so it looks like nvme connect-all with that target didn’t detect anything.

You can try flipping the enabled setting to false and back to true to force another connect-all attempt.
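For example (a sketch, assuming the generic `incus admin os service edit` editor works for the nvme service the same way it does for other IncusOS services):

```shell
# Sketch: force another connect-all attempt by toggling the service
incus admin os service edit nvme   # set `enabled: false` and save
incus admin os service edit nvme   # set `enabled: true` and save again
```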

You can also use incus admin os debug log to see if anything relevant is reported in the journal.

What’s that NVMe target? If it’s some kind of storage array, those often require specific volumes to be made visible to specific clients based on NQN and Host ID (that’s why those two values are shown in the service state).

I’m using the nvmet/nvmet-tcp modules to enable remote access to the NVMe device on an Ubuntu VM :slight_smile:.

incus admin os debug log

[2025/12/01 16:59:39 CET] kernel: nvme nvme0: failed to connect socket: -111

Indeed, that’s an ECONNREFUSED error…

Investigating…

Thanks in the meantime :wink:

I’ve just pushed this small addition:

This will make it a bit easier to get a new connection attempt by running incus admin os service reset nvme, basically saving you from having to flip the enabled flag.

Hi,

I did a test with 2 VMs:

  • vm0 is the target exposing an NVMe drive.
  • vm1 is the initiator.
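For context, a target like vm0 can be set up through the nvmet configfs interface roughly as follows (a sketch; the NQN and address match the discovery output below, but the backing device path is a placeholder):

```shell
# Rough sketch of the target-side (vm0) setup via configfs;
# adjust the backing device path to your environment
modprobe nvmet
modprobe nvmet-tcp

SUB=/sys/kernel/config/nvmet/subsystems/nqn.2025-12.ubuntu-nvmeotcp-poc-target
mkdir -p "$SUB"
echo 1 > "$SUB/attr_allow_any_host"        # or restrict by host NQN instead
mkdir -p "$SUB/namespaces/1"
echo /dev/nvme0n1 > "$SUB/namespaces/1/device_path"   # placeholder device
echo 1 > "$SUB/namespaces/1/enable"

PORT=/sys/kernel/config/nvmet/ports/1
mkdir -p "$PORT"
echo 192.168.0.67 > "$PORT/addr_traddr"
echo tcp > "$PORT/addr_trtype"
echo 4420 > "$PORT/addr_trsvcid"
echo ipv4 > "$PORT/addr_adrfam"

# Expose the subsystem on the port
ln -s "$SUB" "$PORT/subsystems/"
```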

vm1 is able to discover:

root@vm1:~# nvme discover --transport=tcp --traddr=192.168.0.67  --trsvcid=4420

Discovery Log Number of Records 2, Generation counter 3

=====Discovery Log Entry 0======
trtype:  tcp
adrfam:  ipv4
subtype: current discovery subsystem
treq:    not specified, sq flow control disable supported
portid:  1
trsvcid: 4420
subnqn:  nqn.2014-08.org.nvmexpress.discovery
traddr:  192.168.0.67
eflags:  none
sectype: none

=====Discovery Log Entry 1======
trtype:  tcp
adrfam:  ipv4
subtype: nvme subsystem
treq:    not specified, sq flow control disable supported
portid:  1
trsvcid: 4420
subnqn:  nqn.2025-12.ubuntu-nvmeotcp-poc-target
traddr:  192.168.0.67
eflags:  none
sectype: none

and vm1 is able to connect like this:

nvme connect --transport=tcp --traddr=192.168.0.67 --trsvcid=4420 -n nqn.2025-12.ubuntu-nvmeotcp-poc-target


However, with the following configuration in Incus, I still see connection refused in the logs.

config:
  enabled: true
  targets:
  - address: 192.168.0.67
    port: 4220
    transport: tcp
incus admin os debug log
[2025/12/02 17:46:31 CET] kernel: nvme nvme0: failed to connect socket: -111

How can I be sure that IncusOS is trying to connect to the NQN “nqn.2025-12.ubuntu-nvmeotcp-poc-target”?

Could that maybe be the issue?
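One thing can be checked independently of the NQN (a sketch using bash’s built-in /dev/tcp, so no nvme-cli is needed): error -111 is ECONNREFUSED, which means the TCP connection itself was rejected before any NQN was ever sent, so an NQN mismatch would produce a different error:

```shell
# -111 (ECONNREFUSED) means nothing accepted the TCP connection at all;
# the NQN is only exchanged after the socket is up, so test the socket first
timeout 2 bash -c 'exec 3<>/dev/tcp/192.168.0.67/4220' \
  && echo "port open" \
  || echo "connection refused or unreachable"
```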

Jeff.

The logic can be found in incus-os/incus-osd/internal/services/service_nvme.go in the lxc/incus-os repository on GitHub.

Basically IncusOS will generate the /etc/nvme/discovery.conf using:

--transport=tcp --traddr=192.168.0.67 --trsvcid=4220

And then call nvme connect-all.

Do note that above you’re using port 4420 on vm1, but port 4220 in the IncusOS config.

Holy moly, the typo strikes again.

Thanks for letting me know :face_holding_back_tears:

Hopefully the problem was that simple and the connection refused error was just that :wink:

OK, I’m making progress: I can now see the remote disk…

but while trying to create the pool:

incus storage create nvme lvmcluster

Error: Failed to run: vgcreate nvme /dev/disk/by-id/lvm-pv-uuid-Jy79X8-jcWj-lW2I-R9DM-LW4p-Jofj-UDb4h4 --shared: exit status 5 (Invalid host_id 0, use 1-2000 (sanlock_align_size is 8MiB).
  Failed to initialize lock args for lock type sanlock)

The target has lvmlockd and sanlock configured

/etc/lvm/lvm.conf

global {
  use_lvmlockd = 1
  lvmlockd_lock_retries = 3
  system_id_source = "lvmlocal"
}

/etc/lvm/lvmlocal.conf

local {
  system_id = "vm0"
  host_id = 1
}

any idea?

Did you configure LVM with incus admin os service edit lvm?

That will let you assign the host_id for the machine and turn on the locking bits for clustered LVM.

You need the system_id and host_id to be unique among all machines within the cluster.
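A sketch of what that looks like (the id value is an example; pick a distinct one for each machine, which is what resolves the “Invalid host_id 0, use 1-2000” error above):

```shell
# Sketch: assign this machine's clustered-LVM identity
incus admin os service edit lvm
# config:
#   enabled: true
#   system_id: 1   # unique per machine; sanlock host_ids range from 1-2000
```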

Okay, thanks for this.

After setting system_id to 1 in LVM, it works!

config:
  enabled: true
  system_id: 1
