Are VM bridges expected to be slow?

I have an LXD container and an LXD VM running on a 12-core Xeon with 128 GB of RAM.

All machines are Debian 11. Both the container and the VM have their own bridge network to the host.

Running iperf3 as a client on the host and as a server in the container/VM, I get around 25 Gbit/s to the container, and only 5 Gbit/s or so to the VM.
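In case it matters, the basic setup was just an iperf3 server in each instance and a client on the host, roughly like this (addresses match the results below):

# Inside the container / VM
iperf3 -s

# On the host
iperf3 -c 10.4.1.2    # container bridge address
iperf3 -c 10.4.4.2    # VM bridge address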

Connecting from another machine over 1 Gbit Ethernet, I get line speed to the container, but only about 80 Mbit/s to the VM.

Is this expected behaviour? I have checked the network configuration and everything seems identical, and there is no appreciable CPU or memory load. I am at a loss as to what to check next…

If you are using LXD 5.14, it could be due to a bug in how vhost-net acceleration was done, which could result in 100% CPU being consumed while processing network packets. This was reported to us in Virtual machine bridge network fails after about 10 seconds on LXD 5.14.
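If you are hitting that, it should be visible on the host while iperf3 is running; something along these lines (exact thread names can vary):

# Look for vhost kernel threads pinning a CPU while traffic is flowing
top -H
# or
ps -eLo pid,pcpu,comm | grep -i vhost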

Can you check with the latest/edge snap channel if you are running LXD 5.14?

I am on 5.14. I updated to latest/edge, but it looks like I might still be having problems:

root@workstation:~# lxc version
Client version: 5.14
Server version: 5.14

root@workstation:~# snap refresh lxd --channel latest/edge

This made the host -> VM loopback almost as fast as the host -> container loopback, though perhaps both are slower than I would have expected:

HOST TO CONTAINER:

root@workstation:~# iperf3 -P 3 -c 10.4.1.2 
Connecting to host 10.4.1.2, port 5201
- - - - - - - - - - - - - - - - - - - - - - - - -
[SUM]   0.00-10.04  sec  29.6 GBytes  25.3 Gbits/sec                  receiver

HOST TO VM:

root@workstation:~# iperf3 -P 3 -c 10.4.4.2
Connecting to host 10.4.4.2, port 5201
- - - - - - - - - - - - - - - - - - - - - - - - -
[SUM]   0.00-10.04  sec  27.1 GBytes  23.2 Gbits/sec                  receiver

I tried from a Windows machine over a 40 Gbit Ethernet link: the container was a little slower than the host, and the VM was a lot slower:

40GbE WINDOWS TO HOST:

C:\Program Files\iperf3>iperf3 -P 3 -c 10.3.1.1
Connecting to host 10.3.1.1, port 5201
- - - - - - - - - - - - - - - - - - - - - - - - -
[SUM]   0.00-10.00  sec  44.0 GBytes  37.8 Gbits/sec                  receiver


40GbE WINDOWS TO CONTAINER:

C:\Program Files\iperf3>iperf3 -P 3 -c 10.4.1.2
Connecting to host 10.4.1.2, port 5201
- - - - - - - - - - - - - - - - - - - - - - - - -
[SUM]   0.00-10.00  sec  35.8 GBytes  30.7 Gbits/sec                  receiver


40GbE WINDOWS TO VM:

C:\Program Files\iperf3>iperf3 -P 3 -c 10.4.4.2
Connecting to host 10.4.4.2, port 5201
- - - - - - - - - - - - - - - - - - - - - - - - -
[SUM]   0.00-10.00  sec  18.7 GBytes  16.1 Gbits/sec                  receiver

I tried from a Mac over 1GbE: the host ran at line speed and the container macvlan was near line speed, but both the container and VM bridges were terrible:

MAC 1GbE TO HOST:

bash-3.2# iperf3 -P 3 -c 10.0.0.12
Connecting to host 10.0.0.12, port 5201
- - - - - - - - - - - - - - - - - - - - - - - - -
[SUM]   6.00-6.36   sec  43.5 MBytes  1.00 Gbits/sec


MAC 1GbE TO CONTAINER MACVLAN:

bash-3.2# iperf3 -P 3 -c 10.0.0.20
Connecting to host 10.0.0.20, port 5201
- - - - - - - - - - - - - - - - - - - - - - - - -
[SUM]   0.00-10.00  sec  1.03 GBytes   889 Mbits/sec                  receiver


MAC 1GbE TO CONTAINER BRIDGE:

bash-3.2# iperf3 -P 3 -c 10.4.1.2
Connecting to host 10.4.1.2, port 5201
- - - - - - - - - - - - - - - - - - - - - - - - -
[SUM]   0.00-10.03  sec  96.9 MBytes  81.1 Mbits/sec                  receiver


MAC 1GbE TO VM BRIDGE:

bash-3.2# iperf3 -P 3 -c 10.4.4.2
Connecting to host 10.4.4.2, port 5201
- - - - - - - - - - - - - - - - - - - - - - - - -
[SUM]   0.00-10.03  sec   102 MBytes  85.4 Mbits/sec                  receiver

I got even worse VM performance (compared to the host) from the same Mac over a Thunderbolt bridge, and was not able to connect to or ping the container at all (???):

networksetup -setadditionalroutes "Thunderbolt Bridge" 10.4.0.0 255.255.0.0 10.5.0.1

MAC IPoT TO HOST:

bash-3.2# iperf3 -P 3 -c 10.5.0.1
Connecting to host 10.5.0.1, port 5201
- - - - - - - - - - - - - - - - - - - - - - - - -
[SUM]   0.00-10.00  sec  8.85 GBytes  7.60 Gbits/sec                  receiver


MAC IPoT TO VM:

bash-3.2# iperf3 -P 3 -c 10.4.4.2
Connecting to host 10.4.4.2, port 5201
- - - - - - - - - - - - - - - - - - - - - - - - -
[SUM]   0.00-10.00  sec   221 KBytes   181 Kbits/sec                  receiver

Again, there was no appreciable CPU or memory load on either the host or the VM.

MTU on the 40GbE link is 9000, and 1500 everywhere else.

Network adapters are:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp9s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 2c:76:8a:57:c8:90 brd ff:ff:ff:ff:ff:ff
3: enp9s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 2c:76:8a:57:c8:91 brd ff:ff:ff:ff:ff:ff
4: enp9s0f2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 2c:76:8a:57:c8:92 brd ff:ff:ff:ff:ff:ff
5: enp9s0f3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 2c:76:8a:57:c8:93 brd ff:ff:ff:ff:ff:ff
6: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 94:57:a5:f2:54:44 brd ff:ff:ff:ff:ff:ff
    altname enp0s25
    inet 10.0.0.12/24 brd 10.0.0.255 scope global eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::9657:a5ff:fef2:5444/64 scope link 
       valid_lft forever preferred_lft forever
7: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
    link/ether 24:be:05:89:6c:30 brd ff:ff:ff:ff:ff:ff
    altname enp1s0
    inet 10.3.1.1/16 brd 10.3.255.255 scope global ens4
       valid_lft forever preferred_lft forever
    inet6 fe80::26be:5ff:fe89:6c30/64 scope link 
       valid_lft forever preferred_lft forever
8: ens4d1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 9000 qdisc mq state DOWN group default qlen 1000
    link/ether 24:be:05:89:6c:31 brd ff:ff:ff:ff:ff:ff
    altname enp1s0d1
9: ccom-net: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:64:3f:90 brd ff:ff:ff:ff:ff:ff
    inet 10.4.4.1/24 scope global ccom-net
       valid_lft forever preferred_lft forever
    inet6 fd42:4d33:8762:6c77::1/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe64:3f90/64 scope link 
       valid_lft forever preferred_lft forever
10: samba-network: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:7c:23:d7 brd ff:ff:ff:ff:ff:ff
    inet 10.4.1.1/24 scope global samba-network
       valid_lft forever preferred_lft forever
    inet6 fd42:f935:903f:b9dd::1/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe7c:23d7/64 scope link 
       valid_lft forever preferred_lft forever
12: vethba30a152@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master samba-network state UP group default qlen 1000
    link/ether 42:5f:65:d3:b1:01 brd ff:ff:ff:ff:ff:ff link-netnsid 0
14: practice-nic: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 16:9a:9d:1c:b2:0a brd ff:ff:ff:ff:ff:ff
    inet 10.4.3.1/24 brd 10.4.3.255 scope global practice-nic
       valid_lft forever preferred_lft forever
    inet6 fe80::149a:9dff:fe1c:b20a/64 scope link 
       valid_lft forever preferred_lft forever
16: thunderbolt0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 02:f4:92:18:0d:96 brd ff:ff:ff:ff:ff:ff
    inet 10.5.0.1/24 brd 10.5.0.255 scope global thunderbolt0
       valid_lft forever preferred_lft forever
    inet6 fe80::f4:92ff:fe18:d96/64 scope link 
       valid_lft forever preferred_lft forever
17: tap45a7970b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ccom-net state UP group default qlen 1000
    link/ether 9e:dc:59:7e:b1:74 brd ff:ff:ff:ff:ff:ff

Container config is:

architecture: x86_64
config:
  boot.autostart: "false"
  image.architecture: amd64
  image.description: Debian bullseye amd64 (20221018_05:25)
  image.os: Debian
  image.release: bullseye
  image.serial: "20221018_05:25"
  image.type: squashfs
  image.variant: default
  volatile.base_image: d7874535267d24189305329233e74e54e2e9dd0ceedbffc87c623737a7ccf457
  volatile.bridge-device.host_name: vethba30a152
  volatile.bridge-device.hwaddr: 00:16:3e:b5:3f:01
  volatile.cloud-init.instance-id: 0f4d5a16-965b-44bc-97d4-11a055b3f6c9
  volatile.eth1.host_name: macf0610ae0
  volatile.eth1.hwaddr: 00:16:3e:de:df:9e
  volatile.eth1.last_state.created: "false"
  volatile.eth1.name: eth1
  volatile.last_state.power: RUNNING
  volatile.uuid: afc10fb3-98cd-419d-8a3f-356e2a104e78
  volatile.uuid.generation: afc10fb3-98cd-419d-8a3f-356e2a104e78
devices:
  bridge-device:
    name: eth0
    network: samba-network
    type: nic
  eth1:
    nictype: macvlan
    parent: eno1
    type: nic
ephemeral: false
profiles:
- nobridge
stateful: false
description: ""

VM config is:

architecture: x86_64
config:
  image.architecture: amd64
  image.description: Debian bullseye amd64 (20230429_05:24)
  image.os: Debian
  image.release: bullseye
  image.serial: "20230429_05:24"
  image.type: disk-kvm.img
  image.variant: default
  volatile.base_image: ec908128fd9b11ac9696af68dd8b506d943c6642981687020152bb1273ce1f9a
  volatile.bridge-device.host_name: tap45a7970b
  volatile.bridge-device.hwaddr: 00:16:3e:55:51:34
  volatile.cloud-init.instance-id: 7ed88e68-6ff5-4fa5-809d-1f9d380f998e
  volatile.last_state.power: RUNNING
  volatile.uuid: 1eee4378-432d-4e09-8268-b93feba2e28a
  volatile.uuid.generation: 1eee4378-432d-4e09-8268-b93feba2e28a
  volatile.vsock_id: "8"
devices:
  bridge-device:
    name: eth0
    network: ccom-net
    type: nic
ephemeral: false
profiles:
- default
stateful: false
description: ""

NFTables are:

root@workstation:~# nft list tables
table inet filter
table inet lxd

root@workstation:~# nft list table inet filter
table inet filter {
        chain input {
                type filter hook input priority filter; policy accept;
        }

        chain forward {
                type filter hook forward priority filter; policy accept;
                ip daddr 10.0.0.1 accept
                ip saddr 10.4.0.0/16 ip daddr 10.0.0.0/8 ct state { established, related } counter packets 1163661 bytes 48154610 accept
                ip saddr 10.4.0.0/16 ip daddr 10.0.0.0/8 counter packets 1 bytes 40 drop
        }

        chain output {
                type filter hook output priority filter; policy accept;
        }
}

root@workstation:~# nft list table inet lxd   
table inet lxd {
        chain pstrt.ccom-net {
                type nat hook postrouting priority srcnat; policy accept;
                ip6 saddr fd42:4d33:8762:6c77::/64 ip6 daddr != fd42:4d33:8762:6c77::/64 masquerade
        }

        chain fwd.ccom-net {
                type filter hook forward priority filter; policy accept;
                ip6 version 6 oifname "ccom-net" accept
                ip6 version 6 iifname "ccom-net" accept
        }

        chain in.ccom-net {
                type filter hook input priority filter; policy accept;
                iifname "ccom-net" tcp dport 53 accept
                iifname "ccom-net" udp dport 53 accept
                iifname "ccom-net" icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, nd-router-solicit, nd-neighbor-solicit, nd-neighbor-advert, mld2-listener-report } accept
                iifname "ccom-net" udp dport 547 accept
        }

        chain out.ccom-net {
                type filter hook output priority filter; policy accept;
                oifname "ccom-net" tcp sport 53 accept
                oifname "ccom-net" udp sport 53 accept
                oifname "ccom-net" icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, echo-request, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, mld2-listener-report } accept
                oifname "ccom-net" udp sport 547 accept
        }

        chain pstrt.samba-network {
                type nat hook postrouting priority srcnat; policy accept;
                ip6 saddr fd42:f935:903f:b9dd::/64 ip6 daddr != fd42:f935:903f:b9dd::/64 masquerade
        }

        chain fwd.samba-network {
                type filter hook forward priority filter; policy accept;
                ip6 version 6 oifname "samba-network" accept
                ip6 version 6 iifname "samba-network" accept
        }

        chain in.samba-network {
                type filter hook input priority filter; policy accept;
                iifname "samba-network" tcp dport 53 accept
                iifname "samba-network" udp dport 53 accept
                iifname "samba-network" icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, nd-router-solicit, nd-neighbor-solicit, nd-neighbor-advert, mld2-listener-report } accept
                iifname "samba-network" udp dport 547 accept
        }

        chain out.samba-network {
                type filter hook output priority filter; policy accept;
                oifname "samba-network" tcp sport 53 accept
                oifname "samba-network" udp sport 53 accept
                oifname "samba-network" icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, echo-request, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, mld2-listener-report } accept
                oifname "samba-network" udp sport 547 accept
        }
}

CPU is:

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 63
model name      : Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz
stepping        : 2
microcode       : 0x49
cpu MHz         : 1197.502
cache size      : 30720 KB
physical id     : 0
siblings        : 24
core id         : 0
cpu cores       : 12
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 15
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm arat pln pts md_clear flush_l1d
vmx flags       : vnmi preemption_timer posted_intr invvpid ept_x_only ept_ad ept_1gb flexpriority apicv tsc_offset vtpr mtf vapic ept vpid unrestricted_guest vapic_reg vid ple shadow_vmcs
bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_stale_data
bogomips        : 5187.92
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:

Does increasing the CPU count in the VM help, using lxc config set <instance> limits.cpu=<count>? That will add additional NIC queues. You should also explore iperf3's parallel-stream options (e.g. -P) to take advantage of them.
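For example (instance name is a placeholder):

lxc config set my-vm limits.cpu=4
lxc restart my-vm            # a restart makes sure the extra vCPUs and NIC queues are picked up
iperf3 -P 4 -c 10.4.4.2      # then drive multiple parallel streams from the client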

I just tried an Ubuntu 22.04 VM running on an Ubuntu 22.04 host connected with a private managed lxdbr0 bridge.
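The setup was roughly the following (instance name and limits are illustrative):

lxc launch ubuntu:22.04 v1 --vm -c limits.cpu=1 -c limits.memory=1GiB
lxc exec v1 -- apt-get install -y iperf3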

I have an iperf3 server running on my local router host (192.168.1.2) over a wired gigabit network and saw the same speeds from iperf3 on the host as in the VM:

VM (with 1GiB RAM and one CPU core):

root@v1:~# iperf3 -c 192.168.1.2
Connecting to host 192.168.1.2, port 5201
[  5] local 10.21.203.2 port 47830 connected to 192.168.1.2 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   110 MBytes   925 Mbits/sec    0   3.13 MBytes       
[  5]   1.00-2.00   sec   106 MBytes   891 Mbits/sec    0   3.13 MBytes       
[  5]   2.00-3.00   sec   108 MBytes   899 Mbits/sec    0   3.13 MBytes       
[  5]   3.00-4.00   sec   106 MBytes   894 Mbits/sec    0   3.13 MBytes       
[  5]   4.00-5.00   sec   108 MBytes   902 Mbits/sec    0   3.13 MBytes       
[  5]   5.00-6.00   sec   108 MBytes   902 Mbits/sec    0   3.13 MBytes       
[  5]   6.00-7.00   sec   108 MBytes   902 Mbits/sec    0   3.13 MBytes       
[  5]   7.00-8.00   sec   106 MBytes   891 Mbits/sec    0   3.13 MBytes       
[  5]   8.00-9.00   sec   108 MBytes   902 Mbits/sec    0   3.13 MBytes       
[  5]   9.00-10.00  sec   108 MBytes   902 Mbits/sec    0   3.13 MBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.05 GBytes   901 Mbits/sec    0             sender
[  5]   0.00-10.07  sec  1.05 GBytes   894 Mbits/sec                  receiver

Host:

iperf3 -c 192.168.1.2
Connecting to host 192.168.1.2, port 5201
[  5] local 192.168.1.116 port 35268 connected to 192.168.1.2 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   108 MBytes   906 Mbits/sec    0    223 KBytes       
[  5]   1.00-2.00   sec   107 MBytes   897 Mbits/sec    0    223 KBytes       
[  5]   2.00-3.00   sec   107 MBytes   896 Mbits/sec    0    223 KBytes       
[  5]   3.00-4.00   sec   107 MBytes   897 Mbits/sec    0    223 KBytes       
[  5]   4.00-5.00   sec   107 MBytes   896 Mbits/sec    0    235 KBytes       
[  5]   5.00-6.00   sec   107 MBytes   896 Mbits/sec    0    235 KBytes       
[  5]   6.00-7.00   sec   107 MBytes   899 Mbits/sec    0    235 KBytes       
[  5]   7.00-8.00   sec   107 MBytes   896 Mbits/sec    0    235 KBytes       
[  5]   8.00-9.00   sec   107 MBytes   896 Mbits/sec    0    235 KBytes       
[  5]   9.00-10.00  sec   107 MBytes   899 Mbits/sec    0    235 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.05 GBytes   898 Mbits/sec    0             sender
[  5]   0.00-10.04  sec  1.04 GBytes   893 Mbits/sec                  receiver

Then I tried VM to host iperf3 over the same bridge:

root@v1:~# iperf3 -c 10.21.203.1
Connecting to host 10.21.203.1, port 5201
[  5] local 10.21.203.2 port 55622 connected to 10.21.203.1 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  4.30 GBytes  37.0 Gbits/sec    0   2.62 MBytes       
[  5]   1.00-2.00   sec  4.44 GBytes  38.1 Gbits/sec    0   2.92 MBytes       
[  5]   2.00-3.00   sec  3.23 GBytes  27.8 Gbits/sec    0   3.07 MBytes       
[  5]   3.00-4.00   sec  4.35 GBytes  37.4 Gbits/sec    0   3.07 MBytes       
[  5]   4.00-5.00   sec  4.14 GBytes  35.6 Gbits/sec    0   3.07 MBytes       
[  5]   5.00-6.00   sec  4.22 GBytes  36.3 Gbits/sec    0   3.07 MBytes       
[  5]   6.00-7.00   sec  3.12 GBytes  26.8 Gbits/sec    0   3.07 MBytes       
[  5]   7.00-8.00   sec  2.89 GBytes  24.8 Gbits/sec    0   3.07 MBytes       
[  5]   8.00-9.00   sec  3.08 GBytes  26.5 Gbits/sec    0   3.07 MBytes       
[  5]   9.00-10.00  sec  1.83 GBytes  15.7 Gbits/sec    0   3.07 MBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  35.6 GBytes  30.6 Gbits/sec    0             sender
[  5]   0.00-10.04  sec  35.6 GBytes  30.5 Gbits/sec                  receiver

And finally I tried VM-to-VM iperf3:

root@v1:~# iperf3 -c 10.21.203.8
Connecting to host 10.21.203.8, port 5201
[  5] local 10.21.203.2 port 58216 connected to 10.21.203.8 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  2.32 GBytes  19.9 Gbits/sec    0   3.00 MBytes       
[  5]   1.00-2.00   sec  1.86 GBytes  15.9 Gbits/sec    0   3.00 MBytes       
[  5]   2.00-3.00   sec  1.89 GBytes  16.3 Gbits/sec    1   3.00 MBytes       
[  5]   3.00-4.00   sec  2.14 GBytes  18.4 Gbits/sec    0   3.00 MBytes       
[  5]   4.00-5.00   sec  2.01 GBytes  17.3 Gbits/sec    0   3.00 MBytes       
[  5]   5.00-6.00   sec  2.39 GBytes  20.5 Gbits/sec    1   3.00 MBytes       
[  5]   6.00-7.00   sec  1.54 GBytes  13.3 Gbits/sec    0   3.16 MBytes       
[  5]   7.00-8.00   sec  2.25 GBytes  19.3 Gbits/sec    0   3.16 MBytes       
[  5]   8.00-9.00   sec  2.12 GBytes  18.2 Gbits/sec    0   3.16 MBytes       
[  5]   9.00-10.00  sec  1.77 GBytes  15.2 Gbits/sec    1   3.16 MBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  20.3 GBytes  17.4 Gbits/sec    3             sender
[  5]   0.00-10.04  sec  20.3 GBytes  17.3 Gbits/sec                  receiver

So there doesn't appear to be any general issue here (I would expect VMs to be a bit slower than containers when not using hardware acceleration, because of the additional overhead involved), but some of your examples are clearly problematic. Perhaps the mix of MTUs is causing an issue.
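One way to rule out path-MTU problems is to send non-fragmentable packets of increasing size from the affected clients, for example (1472 = 1500 - 28 bytes of IP/ICMP headers, 8972 = 9000 - 28; Linux syntax shown, macOS uses ping -D -s and Windows ping -f -l):

ping -M do -s 1472 10.4.4.2    # should succeed on a clean 1500-MTU path
ping -M do -s 8972 10.3.1.1    # exercises the 9000-MTU path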

Thanks for looking into this, and sorry for taking up your time with what seems to be some sort of routing problem inside my host machine. I have seen the same issue on a VM with 12 vCPUs, so I don't think the CPU count is the cause.

I just compared LXD 5.0.2 (which doesn't have the vhost-net CPU offloading feature enabled) with latest/edge (which does), and on my lowly Intel J5040 test machine I saw a 2 Gbit/s uplift in VM-to-VM performance. So there doesn't seem to be any issue with vhost-net being enabled.
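If anyone wants to repeat the comparison, switching the snap channel back and forth is enough (channel names as of writing):

snap refresh lxd --channel=5.0/stable    # 5.0.x, without the vhost-net offload
snap refresh lxd --channel=latest/edge   # with the vhost-net offload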
