IncusOS Network Speed

I’m experiencing a rather peculiar issue with network speed in a FreeBSD virtual machine and Linux containers running on IncusOS. The VM’s network interface and the containers are tied to a physical interface via a bridge.
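
For reference, the instances are attached to that bridge with something along these lines (a sketch only; br0 and my-vm are placeholders for the actual bridge and instance names):

incus config device add my-vm eth0 nic nictype=bridged parent=br0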

Download speed is capped at roughly 10-100 Mbps. The virtual machine was created with incus-migrate from a raw image.

To rule out image issues, I’ve deployed the same image to a few different machines and hypervisors, and I’ve also deployed the same Ubuntu and Debian containers to other Incus hosts:

  • Debian 13 + Zabbly kernel + Zabbly Incus 6.19.1 - full network speed
  • Bluefin (Fedora-based bootc image) + Incus 6.18 - full network speed
  • Bluefin (Fedora-based bootc image) + libvirt - full network speed
  • XCP-ng 8.3 LTS - full network speed

The containers and the FreeBSD virtual machine achieve full gigabit on every other machine/platform; only IncusOS shows this behavior. Sometimes I’m able to reach 250 Mbps, but not for long. Most of the time, network speed is around 10-20 Mbps.

Any ideas?

Are you seeing the same behavior on a different VM running on the same IncusOS system?

Yes, Ubuntu VM, Ubuntu container, Debian container and FreeBSD VM.

I even booted the host from a Debian LiveCD to rule out hardware issues, and I’m able to get full gigabit and saturate the link.

Are the instances directly on the physical network or are they on the incusbr0 bridge?

Directly on the physical network. I did switch them to incusbr0 while testing, and there was no change in network performance.
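
For that test I just repointed the NIC device at the managed bridge, roughly like this (my-vm and eth0 stand in for the actual instance and device names):

incus config device remove my-vm eth0
incus config device add my-vm eth0 nic network=incusbr0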

Anything interesting in incus admin os debug log or in incus admin os system network show?

Nothing unusual in the debug log, just a couple of audit entries.

Here’s the output of incus admin os system network show:

config:
  dns:
    domain: ""
    hostname: borg
  interfaces:
  - addresses:
    - dhcp4
    - slaac
    hwaddr: 7c:4d:8f:2b:4c:3f
    name: enp2s0
    required_for_online: "no"
    roles:
    - management
    - instances
  time:
    timezone: UTC
state:
  interfaces:
    enp2s0:
      addresses:
      - 10.4.0.100
      - fd31:233a:fa83:ea4b:7e4d:8fff:fe2b:4c3f
      hwaddr: 7c:4d:8f:2b:4c:3f
      mtu: 1500
      roles:
      - management
      - instances
      - cluster
      routes:
      - to: default
        via: 10.4.1.1
      speed: "1000"
      state: routable
      stats:
        rx_bytes: 2.7623611e+07
        rx_errors: 0
        tx_bytes: 1.24810486e+08
        tx_errors: 0
      type: interface

Okay. What’s the vendor for that NIC? (incus admin os system resources show would have it if you don’t know).

I believe it’s Realtek.

network:
  cards:
  - driver: r8169
    driver_version: 6.17.10-zabbly+
    numa_node: 0
    pci_address: "0000:02:00.0"
    ports:
    - address: 7c:4d:8f:2b:4c:3f
      auto_negotiation: true
      id: _p7c4d8f2b4c3f
      link_detected: true
      link_duplex: full
      link_speed: 1000
      port: 0
      port_type: twisted pair
      protocol: ethernet
      supported_modes:
      - 10baseT/Half
      - 10baseT/Full
      - 100baseT/Half
      - 100baseT/Full
      - 1000baseT/Full
      supported_ports:
      - twisted pair
      - media-independent
      transceiver_type: external
    product_id: "8168"
    vendor_id: 10ec
  total: 1

What are you using for testing throughput?

Most of my IncusOS boxes are on at least 10 Gbps, but I just tested network throughput on a random ThinkPad I’ve been using for testing consumer-grade stuff, and it’s behaving as expected:

root@foo:~# ip -4 a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
11: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link-netnsid 0
    inet 172.17.0.233/24 metric 1024 brd 172.17.0.255 scope global dynamic eth0
       valid_lft 3568sec preferred_lft 3568sec
root@foo:~# iperf3 -c 172.17.0.1
Connecting to host 172.17.0.1, port 5201
[  5] local 172.17.0.233 port 37894 connected to 172.17.0.1 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   113 MBytes   948 Mbits/sec    0    447 KBytes       
[  5]   1.00-2.00   sec   112 MBytes   938 Mbits/sec    0    495 KBytes       
[  5]   2.00-3.00   sec   111 MBytes   932 Mbits/sec    0    495 KBytes       
[  5]   3.00-4.00   sec   111 MBytes   931 Mbits/sec    0    495 KBytes       
[  5]   4.00-5.00   sec   112 MBytes   940 Mbits/sec    0    520 KBytes       
[  5]   5.00-6.00   sec   111 MBytes   930 Mbits/sec    0    547 KBytes       
[  5]   6.00-7.00   sec   112 MBytes   937 Mbits/sec    0    731 KBytes       
[  5]   7.00-8.00   sec   111 MBytes   930 Mbits/sec    0    731 KBytes       
[  5]   8.00-9.00   sec   112 MBytes   935 Mbits/sec    0    731 KBytes       
[  5]   9.00-10.00  sec   111 MBytes   931 Mbits/sec    0    731 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.09 GBytes   935 Mbits/sec    0            sender
[  5]   0.00-10.01  sec  1.09 GBytes   933 Mbits/sec                  receiver

iperf Done.
root@foo:~# iperf3 -c 172.17.0.1 -R
Connecting to host 172.17.0.1, port 5201
Reverse mode, remote host 172.17.0.1 is sending
[  5] local 172.17.0.233 port 55404 connected to 172.17.0.1 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   112 MBytes   940 Mbits/sec                  
[  5]   1.00-2.00   sec   112 MBytes   941 Mbits/sec                  
[  5]   2.00-3.00   sec   112 MBytes   942 Mbits/sec                  
[  5]   3.00-4.00   sec   112 MBytes   941 Mbits/sec                  
[  5]   4.00-5.00   sec   112 MBytes   942 Mbits/sec                  
[  5]   5.00-6.00   sec   112 MBytes   941 Mbits/sec                  
[  5]   6.00-7.00   sec   112 MBytes   942 Mbits/sec                  
[  5]   7.00-8.00   sec   112 MBytes   938 Mbits/sec                  
[  5]   8.00-9.00   sec   112 MBytes   942 Mbits/sec                  
[  5]   9.00-10.00  sec   112 MBytes   941 Mbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.10 GBytes   943 Mbits/sec    0            sender
[  5]   0.00-10.00  sec  1.10 GBytes   941 Mbits/sec                  receiver

iperf Done.
root@foo:~# 

The plot thickens… It appears that only transfers from the WAN are affected.
Internal traffic behaves as expected: iperf3 is able to saturate the link to a different machine on the same local network.

Downloads from apt or pkg repos are extremely slow. speedtest-go can sometimes reach 200 Mbps but usually hovers around 30 Mbps, while it consistently hits 900+ Mbps on any other physical machine or Incus container/VM that isn’t running on IncusOS.
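
For the WAN side I’m mostly just timing plain downloads, something like this (the URL is only a placeholder for a large file on a fast mirror):

curl -o /dev/null -w 'average: %{speed_download} bytes/s\n' https://example.com/large-file.iso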

Ah, it could be some fun around TCP window sizing or something?
There are a variety of algorithms for that stuff, though why it would only affect that particular system is a bit odd.
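
If you want to rule that out, comparing the relevant sysctls between the IncusOS host and one of the well-behaved hosts would be a useful data point, e.g.:

sysctl net.ipv4.tcp_congestion_control
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem
sysctl net.core.rmem_max net.core.wmem_max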

I’ll replace IncusOS with Debian on this machine to see whether this is an IncusOS issue or something hardware/driver related, and report back.

Fresh Debian 13 install on the same hardware, with the kernel, ZFS and Incus from the Zabbly repositories.

On the host, I’m able to saturate the 1 Gbps fiber connection to the Internet.

In the containers (Ubuntu, Debian) and the FreeBSD VM, that’s not the case. If I’m lucky, I get 200-230 Mbps down; most of the time it’s between 12 and 50 Mbps.

The containers and the VM are connected to the LAN via an unmanaged bridge.
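
The bridge itself is nothing special; it’s roughly the classic ifupdown setup (a sketch only, assuming bridge-utils is installed and using my enp2s0 interface):

auto br0
iface br0 inet dhcp
    bridge_ports enp2s0
    bridge_stp off
    bridge_fd 0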

Okay, so the behavior is consistent at least.

I think it’s safe to say that this is caused by the Realtek r8169 driver.

I was able to find numerous reports of slow download speeds with this driver, and I can even reproduce it on the host: if I start a download or fire up a speed test right after boot, download speed is between 20 and 30 Mbit/s for the first couple of tries. After that, I get 900+ Mbit/s as expected.

When the host is struggling, the VMs and containers struggle too. Once the host’s download speed settles, the VMs and containers get about 200 Mbit/s.
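
For anyone else hitting this, the driver details can be double-checked with something like (enp2s0 being the interface in question):

ethtool -i enp2s0          # driver name, version and firmware
dmesg | grep -i r8169      # firmware load and link messages from the driver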

There are even rx_missed packets. :man_facepalming:

# ethtool -S enp2s0
NIC statistics:
     tx_packets: 757122
     rx_packets: 1582990
     tx_errors: 0
     rx_errors: 0
     rx_missed: 7367
     align_errors: 0
     tx_single_collisions: 0
     tx_multi_collisions: 0
     unicast: 1582250
     broadcast: 172
     multicast: 568
     tx_aborted: 0
     tx_underrun: 0

There are zero rx_missed packets on my desktop machine after 10 days of uptime. Realtek really is hot garbage.
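
For reference, the r8169 workarounds I keep seeing suggested are bumping the RX ring size and disabling EEE/ASPM; whether they help depends on the exact chip revision, so treat this as a sketch:

ethtool -g enp2s0                 # show supported vs current ring sizes
ethtool -G enp2s0 rx 4096         # raise the RX ring if the hardware allows it
ethtool --set-eee enp2s0 eee off  # disable Energy-Efficient Ethernet
# PCIe ASPM can be disabled with the pcie_aspm=off kernel parameter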