Can I push this any further?

So I have 3 diskless compute nodes (which are my Incus cluster) attached to 3 diskful Linstor nodes, and I've managed to set up RDMA on the DRBD replication layer. The computes all use dual-port ConnectX-3 Pro NICs (except for one that uses a ConnectX-6 Dx). The diskful DRBD nodes all have CX3-Pro cards and a replication count of 3 (one copy on each of the nodes).

I set up 4 NVMe drives on each diskful node and I'm using them as Incus storage backends. I was curious to see just how fast the network was, so I decided to run a test.

If I create 2 containers on the cluster and then run iperf3 between them, I should essentially be testing the network and the disk, right? (Since I would be doing reads/writes on the DRBD disk?)

Anyway, here are my iperf3 results; I ran with 15 parallel streams for 15 seconds. I'd love any input on this.

Am I reading this right? 1 terabyte of data pushed in 15 seconds?

Doesn't iperf only use RAM unless you pass some client option? (I've never run a disk test over the network, so sorry if it's obvious from the screenshot that you're doing that.) It does look fast, though!

I ran `iperf3 -s` on one container and `iperf3 -c <ip of container 1> -P 15 -Z -t 15` on container 2.

I'm assuming this is somewhat like a disk test because the root disks of the containers are DRBD disks over RDMA. Am I correct in that assumption?

Each container was on a separate target node in Incus, but on the same OVN network.
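For reference, placing two containers on separate cluster members attached to the same network is done with the `--target` flag. A minimal sketch; the image, container, node, and network names below are placeholders, not your actual ones:

```shell
# Launch each test container on a different cluster member,
# both attached to the same OVN network.
# Image, container, node, and network names are placeholders.
incus launch images:ubuntu/24.04 c1 --target node1 --network ovn-net
incus launch images:ubuntu/24.04 c2 --target node2 --network ovn-net
```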

From https://iperf.fr/


It appears you're just testing the network; I think you need to specify some paths to test the disk.

I'm really no benchmarking expert, but it appears you're just testing the network, not actually writing anything to disk.
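To actually exercise the DRBD layer, you'd want something that writes to the container's root filesystem, e.g. fio run inside one of the containers. A sketch, not a tuned benchmark; the file path and sizes are arbitrary, and `--direct=1` bypasses the page cache so the writes actually reach the disk:

```shell
# Sequential write test against the container's DRBD-backed
# root disk. Parameters are illustrative, not tuned.
fio --name=drbd-write \
    --filename=/root/fio-testfile \
    --rw=write --bs=1M --size=4G \
    --ioengine=libaio --direct=1 \
    --numjobs=4 --group_reporting
rm -f /root/fio-testfile  # clean up the test file
```

Swapping `--rw=write` for `randwrite` or `randread` gives a feel for IOPS rather than throughput.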

Interesting… well, how does the network look?

Does anyone have any tests they would suggest I run to get more meaningful insights?

Looks fast! The screenshot I sent included an iperf command with a test file — try that, or just copy 1 TB of real data and report back?
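If the test-file route is what's meant here, iperf3 has a `-F`/`--file` option that transmits the contents of a file instead of generated data, which at least pulls the disk into the path on the side that reads the file (the other end still works from memory). A sketch; the payload path is arbitrary and the server address is a placeholder:

```shell
# Create a payload on the DRBD-backed disk, then send it with
# iperf3 -F. Only the file-reading side touches the disk.
SERVER_IP=10.0.0.2   # placeholder: the other container's address
dd if=/dev/urandom of=/root/iperf-payload bs=1M count=1024
iperf3 -c "$SERVER_IP" -F /root/iperf-payload -t 15
```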

I'm gonna do that now.