Bit of confusion around SR-IOV acceleration

Playing around with my 3-node setup, I'm a bit confused about how SR-IOV acceleration works with Incus. I have a CX6 NIC and an OVN network set up (a very, very basic setup following the cluster OVN guide here: link), and I've been playing with virtual functions, but switchdev is really twisting my brain since I can't set up static IPs on it.

Now, I understand that I have to use switchdev mode to get the hardware acceleration features I'm interested in playing with, but how does Incus handle this?

I took a look at this guide from the docs (LINK) and followed it easily enough.

Assuming my CX6-DX card has 2 ports (eth1 and eth2), I just need to create 8 virtual functions on eth2 (as an example), unbind the virtual functions, put the NIC into switchdev mode, set the offload settings, rebind the virtual functions, enable ASAP² offloading in Open vSwitch, and then restart the Open vSwitch service.
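Concretely, my understanding of those steps is something like this (the mlx5_core driver and the 0000:03:00.1 PCI address are assumptions from my setup, yours will differ; I'm leaving the Open vSwitch part for below):

```bash
# 1. Create 8 VFs on the second port
echo 8 > /sys/class/net/eth2/device/sriov_numvfs

# 2. Unbind the VFs from their driver before changing the eSwitch mode
for vf in /sys/class/net/eth2/device/virtfn*; do
    basename "$(readlink "$vf")" > /sys/bus/pci/drivers/mlx5_core/unbind
done

# 3. Put the PF into switchdev mode (placeholder PCI address)
devlink dev eswitch set pci/0000:03:00.1 mode switchdev

# 4. Enable TC hardware offload on the PF
ethtool -K eth2 hw-tc-offload on

# 5. Rebind the VFs
for vf in /sys/class/net/eth2/device/virtfn*; do
    basename "$(readlink "$vf")" > /sys/bus/pci/drivers/mlx5_core/bind
done
```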

If I add eth2 to the default OVN integration bridge (br-int), which is the only bridge I have, do I set up Incus with br-int as the UPLINK, or do I use eth2, or the virtual functions?

How does Incus use the virtual functions I create? Does it use one per network? Can I get isolation for each virtual function?

I assume I have to provide an OVN IPv4 range when I make the network; what do I put for that?
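For reference, based on the cluster OVN guide, I'm guessing the uplink and OVN network creation would look something like this (the node names, eth2, the 192.0.2.0/24 subnet, and the UPLINK/my-ovn names are all just placeholders from my setup):

```bash
# Define the uplink on each cluster member, pointing at the physical port
incus network create UPLINK --type=physical parent=eth2 --target=node1
incus network create UPLINK --type=physical parent=eth2 --target=node2
incus network create UPLINK --type=physical parent=eth2 --target=node3

# Finalize the uplink with the gateway and the range OVN can allocate from
incus network create UPLINK --type=physical \
    ipv4.gateway=192.0.2.1/24 \
    ipv4.ovn.ranges=192.0.2.100-192.0.2.254 \
    dns.nameservers=192.0.2.1

# Then create the OVN network on top of it
incus network create my-ovn --type=ovn network=UPLINK
```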

Any clarity on this would be GREATLY appreciated!

@stgraber any input on this? I know you most likely wrote the docs on this, and I'm sorry to bother you.

I've not played with this too recently. We have plans to add some systems in our lab to run daily tests of this stuff, at which point we'll have setup scripts to point at for anyone who wants to set up this kind of offloading.

But yes, you need to put the card into switchdev mode, make sure that both the firmware and the kernel driver are configured to set up a bunch of VFs, then add the PF to the integration bridge and enable hw-offload in Open vSwitch.
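Roughly, assuming the PF is eth2 and the integration bridge is br-int (adjust the names and the service name for your distribution):

```bash
# Add the PF to the OVN integration bridge
ovs-vsctl add-port br-int eth2

# Enable hardware offload in Open vSwitch and restart it
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
systemctl restart openvswitch-switch  # "openvswitch" on some distributions
```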

What you should end up with in this setup is a bunch of VFs that will be picked up by Incus for your VMs or containers, as well as a matching number of network interfaces that are representor ports.
Those get put in the bridge and attached to the relevant virtual switch. The representor port sees the initial packet of any connection; that leads to flow rules being generated and offloaded to the card's ASIC, at which point further packets no longer show up on the representor but flow directly to the VF in hardware.
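To confirm the offload is actually happening, you can look at it from both the Open vSwitch and the TC side (eth2_0 here is a placeholder representor name):

```bash
# Datapath flows that have been offloaded to the NIC
ovs-appctl dpctl/dump-flows type=offloaded

# TC flower rules installed on a representor port
tc -s filter show dev eth2_0 ingress
```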

It’d certainly be good to put together a modern tutorial on this forum for how to do that.
I do have hardware in our lab that can be used to test the basic setup, though we only have one server, with both ports on the ConnectX-6 card connected to each other, which makes it a bit tricky to properly show this running.

As I said, we should have an expanded lab covering this in the near future though.

@stgraber
So the UPLINK would be the OVN integration bridge then?

Would it be possible to have the VFs that are created be used for an OVN network? As in, the UPLINK would be the br-int bridge, and let's say I create a my-ovn network: would it use a whole VF for the network, or would every container that is launched get its own VF?
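Going by the OVN NIC device docs, I assume the per-instance side would look something like this (c1 is a placeholder container name), which would mean each instance NIC gets its own VF rather than one VF per network:

```bash
# Attach an instance NIC to the OVN network with SR-IOV acceleration;
# Incus should claim a free VF for this NIC when the instance starts
incus config device add c1 eth0 nic network=my-ovn acceleration=sriov
incus start c1
```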