Help planning LXD router

Have you considered Mikrotik CHR?

https://help.mikrotik.com/docs/pages/viewpage.action?pageId=18350234

It runs their full RouterOS and can be installed in a virtual machine.

There’s a free option for testing, which limits the interfaces to 1 Mbps; licensing it for 1 Gbps costs $45.

I haven’t run it in an LXD VM myself, but it should work fine.

Or, buy one of these and just use the server as a server.

Not really interested in this solution.

I’m interested in using containers (not VMs) and a free, open-source, Linux-based solution with a web interface (OpenWrt).

Hi @victoitor @m1cha, hope I am not too late to the party. I am testing exactly the same setup as you: running OpenWrt in LXD on Ubuntu on an ARM board. I was excited to get everything running, but frustrated when I saw the iperf results. Here is my setup:

IPERF SERVER - N100 x86 with 4x 2.5GbE
On the ARM board I run Armbian with the Jammy kernel and OpenWrt 23.05.2 in an LXD container; the WAN port is a macvlan on eth0 and the LAN port is bridged to eth1.
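For reference, the container's NIC devices look roughly like this (a sketch, not my exact config; the container name `openwrt` and the host bridge `br-lan0` holding eth1 are assumptions):

```bash
# WAN: macvlan child of the host's eth0, shows up as eth0 inside the container
lxc config device add openwrt wan nic nictype=macvlan parent=eth0 name=eth0

# LAN: veth attached to a host bridge that also contains the physical eth1
lxc config device add openwrt lan nic nictype=bridged parent=br-lan0 name=eth1
```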

iperf3 results:
BOARD1 - RK3568 SoC with 2x 2.5GbE

  • Ubuntu (host) ↔ iperf server: up and down close to ~2.5 Gbps
  • OpenWrt (LXD) ↔ iperf server: up ~2 Gbps, down ~2.3 Gbps

So far so good, but:

  • LAN PC with 2.5GbE NIC (bridge connected to the OpenWrt br-lan) ↔ iperf server: up only ~600 Mbps, down only ~1000 Mbps

Why does the performance drop so much? (more than a 50% drop)
Is it due to NAT? To check, I created a Debian container, attached it to the same br-lan via a virtual bridge, and ran more iperf3 tests (command sketch after this list):

  • Debian container (LXD, virtual bridge connected to the OpenWrt br-lan) ↔ iperf server: up and down close to ~2 Gbps
  • LAN PC with 2.5GbE NIC ↔ Debian: up and down between 1.5 and 2 Gbps
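For clarity, the "up" and "down" numbers above come from runs like these (the server address 192.168.1.100 is just a placeholder):

```bash
# upload: the client sends to the iperf server
iperf3 -c 192.168.1.100

# download: -R reverses the direction, so the server sends to the client
iperf3 -c 192.168.1.100 -R
```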

So my gut feeling is that the NAT performance of OpenWrt in LXD is far from the best. But why does the Debian container do so much better than the LAN PC? My guess is that the Debian-to-OpenWrt hop is purely virtual (a veth on the same bridge) rather than crossing a physical NIC, so the kernel handles that leg much faster.

To verify, I ran the same test on another ARM board:
BOARD2 - RK3528 SoC with 2x 1GbE

  • LAN PC with 2.5GbE NIC ↔ iperf server: up only ~500 Mbps, down only ~690 Mbps

Would you mind sharing your insights, and what your iperf3 results look like on your setups? Thank you!

I am also guessing that the LXD-OpenWrt NAT performance depends heavily on the single-core performance of the SoC: when I run iperf3 and watch htop, only one core sits at 100% while all the others stay around 30%.
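If it really is a single-core softirq bottleneck, something like this should show it, and spreading receive processing with RPS might help (a sketch to try on the Armbian host; the interface names and the quad-core mask `f` are assumptions for an RK3568, and the echo lines need root):

```bash
# watch per-core usage while iperf3 runs; one core pegged in %soft/%irq
# would confirm the bottleneck
mpstat -P ALL 1

# check which CPU services the NIC interrupts
grep eth /proc/interrupts

# spread receive packet steering (RPS) across all cores (mask f = CPUs 0-3)
echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus
echo f > /sys/class/net/eth1/queues/rx-0/rps_cpus
```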

The R4S has an RK3399 and my board (an R66S) has an RK3568; according to this Chinese link, the single-core performance of the RK3399 is much better:

Geekbench scores for the two SoCs (single-core / multi-core, Android):

| Benchmark | RK3568 (single / multi) | RK3399 (single / multi) | RK3399 lead (single / multi) |
|---|---|---|---|
| Geekbench 4 (64-bit) | 875 / 2,375 | 1,144 / 2,775 | +31% / +17% |
| Geekbench 5 | 161 / 492 | 269 / 615 | +67% / +25% |

I have been running OpenWrt on Incus for a while and, to tell you the truth, I had never checked network performance. Just out of curiosity and for comparison, I have done that now.

My router server has an N5105 with Debian 12 running OpenWrt 23.05.2 under Incus (I have switched to Incus). My personal computer is an N6005 with Debian 12. They are connected at only 1 Gbps, so I can't measure much more than that.

I should also mention that these tests are probably not ideal: my personal computer is connected through a Redmi AC2100 wifi router running OpenWrt, which also has another server running internet services attached to it. Since everything goes through the same cable to the router server, the flows will interfere with each other.

Essentially, from my personal computer to a container on the router server I get ~900 Mbps. As a test of total throughput, I also ran iperf3 from another container on the same host as OpenWrt and got ~26 Gbps, so OpenWrt and Incus can certainly go much higher. My ethernet connection is the definite bottleneck there.
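For what it's worth, that container-to-container run looked roughly like this (the container names and address are placeholders, not my actual setup):

```bash
# iperf3 server in one container on the router host
incus exec ct1 -- iperf3 -s

# client in a second container on the same host
incus exec ct2 -- iperf3 -c 10.140.5.10
```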
