tl;dr: I tried to set up a converged Incus and LINSTOR cluster on some SFF computers and found that a 1 Gbps network connection on each node was not enough, so I’ve added a dedicated 10 Gbps connection between each pair of nodes. Is there a way I can tell LINSTOR (especially) to use/prefer those much faster connections?
Due to a current lack of switch ports each SFF node has a 2-port card and each node is directly connected to the other two. I’ve set up the networking for that (Ubuntu netplan) and can successfully ping and so on (via IP addresses only).
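For reference, the point-to-point wiring looks roughly like this in netplan (interface names and addresses below are placeholders, not my actual config — each direct link is its own small subnet):

```yaml
# /etc/netplan/60-storage.yaml — one direct 10 Gbps link to each peer
# (hypothetical interface names and /31 addressing for illustration)
network:
  version: 2
  ethernets:
    enp2s0f0:                    # direct link to the second node
      addresses: [10.10.1.0/31]
    enp2s0f1:                    # direct link to the third node
      addresses: [10.10.2.0/31]
```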
The LINSTOR cluster should be up and running, as I can use the controller on one node to do other things. Incus is currently standalone on the same node as the LINSTOR controller.
Every command I’ve found for this (for the “linstor” CLI) ends in a “not whitelisted” response or an ERROR. I’ve also tried some “incus” commands, but I can’t get those to work in this case either.
Examples:
linstor CLI
$ linstor resource-group set-property name-of-lsp DrbdOptions/Net/preferred-net-interface storage
ERROR:
Description:
Invalid property key: DrbdOptions/Net/preferred-net-interface
Cause:
The key 'DrbdOptions/Net/preferred-net-interface' is not whitelisted.
Details:
Resource group: tank-bulk-lsp
Show reports:
linstor error-reports show 69D11BF5-00000-000003
incus CLI
$ incus storage set linstor-storage drbd.preferred-net-interface=storage
Error: Invalid option “drbd.preferred-net-interface”
$ incus storage set linstor-storage linstor.rg.DrbdOptions/Net/preferred-net-interface=storage
Error: Invalid option “linstor.rg.DrbdOptions/Net/preferred-net-interface”
(Most of these have been suggestions from Gemini, fwiw. Gemini has been reasonable with other commands but it often takes several attempts for Incus or LINSTOR commands.)
Ultimately, I’ll also want Incus to use the 10 Gbps links for any backend operations. Each node keeps its 1 Gbps link as the uplink to the rest of the network.
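In case it helps to see what I’ve been aiming at: from what I can tell from the LINSTOR user guide, the per-node route would be something like the following (node and interface names are placeholders, and I haven’t confirmed this works in my setup — I’m also unsure how a single named interface per node is supposed to map onto point-to-point links, where each node has a different address toward each peer):

```shell
# Register the 10 Gbps interface with LINSTOR on each node
# ("storage", node names, and addresses are placeholders)
linstor node interface create node-a storage 10.10.1.0
linstor node interface create node-b storage 10.10.1.1

# Ask LINSTOR to prefer that interface for replication traffic
linstor node set-property node-a PrefNic storage
linstor node set-property node-b PrefNic storage
```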