I am on 5.14 and updated to latest/edge, but it looks like I might still be having problems:
root@workstation:~# lxc version
Client version: 5.14
Server version: 5.14
root@workstation:~# snap refresh lxd --channel latest/edge
This made the Host->VM loopback almost as fast as the Host->Container loopback, though both are perhaps slower than I would have expected:
HOST TO CONTAINER:
root@workstation:~# iperf3 -P 3 -c 10.4.1.2
Connecting to host 10.4.1.2, port 5201
- - - - - - - - - - - - - - - - - - - - - - - - -
[SUM] 0.00-10.04 sec 29.6 GBytes 25.3 Gbits/sec receiver
HOST TO VM:
root@workstation:~# iperf3 -P 3 -c 10.4.4.2
Connecting to host 10.4.4.2, port 5201
- - - - - - - - - - - - - - - - - - - - - - - - -
[SUM] 0.00-10.04 sec 27.1 GBytes 23.2 Gbits/sec receiver
I tried from a Windows machine over a 40Gbit Ethernet link - the container was a little slower than the host, the VM was a lot slower:
40GbE WINDOWS TO HOST:
C:\Program Files\iperf3>iperf3 -P 3 -c 10.3.1.1
Connecting to host 10.3.1.1, port 5201
- - - - - - - - - - - - - - - - - - - - - - - - -
[SUM] 0.00-10.00 sec 44.0 GBytes 37.8 Gbits/sec receiver
40GbE WINDOWS TO CONTAINER:
C:\Program Files\iperf3>iperf3 -P 3 -c 10.4.1.2
Connecting to host 10.4.1.2, port 5201
- - - - - - - - - - - - - - - - - - - - - - - - -
[SUM] 0.00-10.00 sec 35.8 GBytes 30.7 Gbits/sec receiver
40GbE WINDOWS TO VM:
C:\Program Files\iperf3>iperf3 -P 3 -c 10.4.4.2
Connecting to host 10.4.4.2, port 5201
- - - - - - - - - - - - - - - - - - - - - - - - -
[SUM] 0.00-10.00 sec 18.7 GBytes 16.1 Gbits/sec receiver
I tried from a Mac over 1GbE - Host ran at line speed, container macvlan was near line speed, but both container and VM bridges were terrible:
MAC 1GbE TO HOST:
bash-3.2# iperf3 -P 3 -c 10.0.0.12
Connecting to host 10.0.0.12, port 5201
- - - - - - - - - - - - - - - - - - - - - - - - -
[SUM] 6.00-6.36 sec 43.5 MBytes 1.00 Gbits/sec
MAC 1GbE TO CONTAINER MACVLAN:
bash-3.2# iperf3 -P 3 -c 10.0.0.20
Connecting to host 10.0.0.20, port 5201
- - - - - - - - - - - - - - - - - - - - - - - - -
[SUM] 0.00-10.00 sec 1.03 GBytes 889 Mbits/sec receiver
MAC 1GbE TO CONTAINER BRIDGE:
bash-3.2# iperf3 -P 3 -c 10.4.1.2
Connecting to host 10.4.1.2, port 5201
- - - - - - - - - - - - - - - - - - - - - - - - -
[SUM] 0.00-10.03 sec 96.9 MBytes 81.1 Mbits/sec receiver
MAC 1GbE TO VM BRIDGE:
bash-3.2# iperf3 -P 3 -c 10.4.4.2
Connecting to host 10.4.4.2, port 5201
- - - - - - - - - - - - - - - - - - - - - - - - -
[SUM] 0.00-10.03 sec 102 MBytes 85.4 Mbits/sec receiver
I got even worse VM performance (compared to the host) from the same Mac over a Thunderbolt bridge, and was not able to connect to or ping the container at all (???):
networksetup -setadditionalroutes "Thunderbolt Bridge" 10.4.0.0 255.255.0.0 10.5.0.1
MAC IPoT TO HOST:
bash-3.2# iperf3 -P 3 -c 10.5.0.1
Connecting to host 10.5.0.1, port 5201
- - - - - - - - - - - - - - - - - - - - - - - - -
[SUM] 0.00-10.00 sec 8.85 GBytes 7.60 Gbits/sec receiver
MAC IPoT TO VM:
bash-3.2# iperf3 -P 3 -c 10.4.4.2
Connecting to host 10.4.4.2, port 5201
- - - - - - - - - - - - - - - - - - - - - - - - -
[SUM] 0.00-10.00 sec 221 KBytes 181 Kbits/sec receiver
Again, there was no appreciable load on CPU or memory on either the host or the VM.
MTU on the 40GbE link is 9000, and 1500 everywhere else.
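Since the 40GbE link runs at MTU 9000 while the bridges are at 1500, jumbo frames headed for the VM get fragmented or dropped depending on how path-MTU discovery behaves, which could account for part of the gap (and a PMTU black hole would look a lot like the 181 Kbits/sec Thunderbolt result). One way to probe the effective path MTU is a ping with the don't-fragment bit set; the payload is the MTU minus 28 bytes of IPv4+ICMP headers. A minimal sketch, using the VM's address from above (only the arithmetic is environment-independent; the ping itself is network-dependent):

```shell
#!/bin/sh
# Payload that exactly fills a frame: MTU minus 20 bytes of IPv4
# header and 8 bytes of ICMP header.
mtu=9000
payload=$((mtu - 28))
echo "df-ping payload for mtu $mtu: $payload"

# Network-dependent probe (Linux ping; -M do sets don't-fragment).
# Expect "message too long" or silence if any hop on the path
# enforces a smaller MTU than the sending interface:
#   ping -c 3 -M do -s "$payload" 10.4.4.2
```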
Network adapters are:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp9s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 2c:76:8a:57:c8:90 brd ff:ff:ff:ff:ff:ff
3: enp9s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 2c:76:8a:57:c8:91 brd ff:ff:ff:ff:ff:ff
4: enp9s0f2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 2c:76:8a:57:c8:92 brd ff:ff:ff:ff:ff:ff
5: enp9s0f3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 2c:76:8a:57:c8:93 brd ff:ff:ff:ff:ff:ff
6: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 94:57:a5:f2:54:44 brd ff:ff:ff:ff:ff:ff
altname enp0s25
inet 10.0.0.12/24 brd 10.0.0.255 scope global eno1
valid_lft forever preferred_lft forever
inet6 fe80::9657:a5ff:fef2:5444/64 scope link
valid_lft forever preferred_lft forever
7: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
link/ether 24:be:05:89:6c:30 brd ff:ff:ff:ff:ff:ff
altname enp1s0
inet 10.3.1.1/16 brd 10.3.255.255 scope global ens4
valid_lft forever preferred_lft forever
inet6 fe80::26be:5ff:fe89:6c30/64 scope link
valid_lft forever preferred_lft forever
8: ens4d1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 9000 qdisc mq state DOWN group default qlen 1000
link/ether 24:be:05:89:6c:31 brd ff:ff:ff:ff:ff:ff
altname enp1s0d1
9: ccom-net: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:16:3e:64:3f:90 brd ff:ff:ff:ff:ff:ff
inet 10.4.4.1/24 scope global ccom-net
valid_lft forever preferred_lft forever
inet6 fd42:4d33:8762:6c77::1/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::216:3eff:fe64:3f90/64 scope link
valid_lft forever preferred_lft forever
10: samba-network: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:16:3e:7c:23:d7 brd ff:ff:ff:ff:ff:ff
inet 10.4.1.1/24 scope global samba-network
valid_lft forever preferred_lft forever
inet6 fd42:f935:903f:b9dd::1/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::216:3eff:fe7c:23d7/64 scope link
valid_lft forever preferred_lft forever
12: vethba30a152@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master samba-network state UP group default qlen 1000
link/ether 42:5f:65:d3:b1:01 brd ff:ff:ff:ff:ff:ff link-netnsid 0
14: practice-nic: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 16:9a:9d:1c:b2:0a brd ff:ff:ff:ff:ff:ff
inet 10.4.3.1/24 brd 10.4.3.255 scope global practice-nic
valid_lft forever preferred_lft forever
inet6 fe80::149a:9dff:fe1c:b20a/64 scope link
valid_lft forever preferred_lft forever
16: thunderbolt0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 02:f4:92:18:0d:96 brd ff:ff:ff:ff:ff:ff
inet 10.5.0.1/24 brd 10.5.0.255 scope global thunderbolt0
valid_lft forever preferred_lft forever
inet6 fe80::f4:92ff:fe18:d96/64 scope link
valid_lft forever preferred_lft forever
17: tap45a7970b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ccom-net state UP group default qlen 1000
link/ether 9e:dc:59:7e:b1:74 brd ff:ff:ff:ff:ff:ff
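One thing worth ruling out on the VM path is an offload mismatch: if TSO/GSO/GRO differ between the tap device and the bridge it is enslaved to, the host ends up segmenting TCP at 1500 bytes in software, which shows up as exactly this kind of VM-only throughput drop. A hedged diagnostic sketch, with device names taken from the `ip link` output above (requires ethtool; prints a placeholder where an interface is absent):

```shell
#!/bin/sh
# Compare segmentation/receive offloads across the bridge, the VM's
# tap device, and the container's veth pair.
for dev in ccom-net tap45a7970b samba-network vethba30a152; do
    echo "== $dev =="
    ethtool -k "$dev" 2>/dev/null \
        | grep -E 'tcp-segmentation|generic-(segmentation|receive)' \
        || echo "(not available on this host)"
done
```

If the tap shows an offload off that the bridge and physical NIC have on (or vice versa), toggling it with `ethtool -K <dev> <feature> on|off` is a cheap experiment before digging further.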
Container config is:
architecture: x86_64
config:
boot.autostart: "false"
image.architecture: amd64
image.description: Debian bullseye amd64 (20221018_05:25)
image.os: Debian
image.release: bullseye
image.serial: "20221018_05:25"
image.type: squashfs
image.variant: default
volatile.base_image: d7874535267d24189305329233e74e54e2e9dd0ceedbffc87c623737a7ccf457
volatile.bridge-device.host_name: vethba30a152
volatile.bridge-device.hwaddr: 00:16:3e:b5:3f:01
volatile.cloud-init.instance-id: 0f4d5a16-965b-44bc-97d4-11a055b3f6c9
volatile.eth1.host_name: macf0610ae0
volatile.eth1.hwaddr: 00:16:3e:de:df:9e
volatile.eth1.last_state.created: "false"
volatile.eth1.name: eth1
volatile.last_state.power: RUNNING
volatile.uuid: afc10fb3-98cd-419d-8a3f-356e2a104e78
volatile.uuid.generation: afc10fb3-98cd-419d-8a3f-356e2a104e78
devices:
bridge-device:
name: eth0
network: samba-network
type: nic
eth1:
nictype: macvlan
parent: eno1
type: nic
ephemeral: false
profiles:
- nobridge
stateful: false
description: ""
VM config is:
architecture: x86_64
config:
image.architecture: amd64
image.description: Debian bullseye amd64 (20230429_05:24)
image.os: Debian
image.release: bullseye
image.serial: "20230429_05:24"
image.type: disk-kvm.img
image.variant: default
volatile.base_image: ec908128fd9b11ac9696af68dd8b506d943c6642981687020152bb1273ce1f9a
volatile.bridge-device.host_name: tap45a7970b
volatile.bridge-device.hwaddr: 00:16:3e:55:51:34
volatile.cloud-init.instance-id: 7ed88e68-6ff5-4fa5-809d-1f9d380f998e
volatile.last_state.power: RUNNING
volatile.uuid: 1eee4378-432d-4e09-8268-b93feba2e28a
volatile.uuid.generation: 1eee4378-432d-4e09-8268-b93feba2e28a
volatile.vsock_id: "8"
devices:
bridge-device:
name: eth0
network: ccom-net
type: nic
ephemeral: false
profiles:
- default
stateful: false
description: ""
NFTables are:
root@workstation:~# nft list tables
table inet filter
table inet lxd
root@workstation:~# nft list table inet filter
table inet filter {
chain input {
type filter hook input priority filter; policy accept;
}
chain forward {
type filter hook forward priority filter; policy accept;
ip daddr 10.0.0.1 accept
ip saddr 10.4.0.0/16 ip daddr 10.0.0.0/8 ct state { established, related } counter packets 1163661 bytes 48154610 accept
ip saddr 10.4.0.0/16 ip daddr 10.0.0.0/8 counter packets 1 bytes 40 drop
}
chain output {
type filter hook output priority filter; policy accept;
}
}
root@workstation:~# nft list table inet lxd
table inet lxd {
chain pstrt.ccom-net {
type nat hook postrouting priority srcnat; policy accept;
ip6 saddr fd42:4d33:8762:6c77::/64 ip6 daddr != fd42:4d33:8762:6c77::/64 masquerade
}
chain fwd.ccom-net {
type filter hook forward priority filter; policy accept;
ip6 version 6 oifname "ccom-net" accept
ip6 version 6 iifname "ccom-net" accept
}
chain in.ccom-net {
type filter hook input priority filter; policy accept;
iifname "ccom-net" tcp dport 53 accept
iifname "ccom-net" udp dport 53 accept
iifname "ccom-net" icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, nd-router-solicit, nd-neighbor-solicit, nd-neighbor-advert, mld2-listener-report } accept
iifname "ccom-net" udp dport 547 accept
}
chain out.ccom-net {
type filter hook output priority filter; policy accept;
oifname "ccom-net" tcp sport 53 accept
oifname "ccom-net" udp sport 53 accept
oifname "ccom-net" icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, echo-request, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, mld2-listener-report } accept
oifname "ccom-net" udp sport 547 accept
}
chain pstrt.samba-network {
type nat hook postrouting priority srcnat; policy accept;
ip6 saddr fd42:f935:903f:b9dd::/64 ip6 daddr != fd42:f935:903f:b9dd::/64 masquerade
}
chain fwd.samba-network {
type filter hook forward priority filter; policy accept;
ip6 version 6 oifname "samba-network" accept
ip6 version 6 iifname "samba-network" accept
}
chain in.samba-network {
type filter hook input priority filter; policy accept;
iifname "samba-network" tcp dport 53 accept
iifname "samba-network" udp dport 53 accept
iifname "samba-network" icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, nd-router-solicit, nd-neighbor-solicit, nd-neighbor-advert, mld2-listener-report } accept
iifname "samba-network" udp dport 547 accept
}
chain out.samba-network {
type filter hook output priority filter; policy accept;
oifname "samba-network" tcp sport 53 accept
oifname "samba-network" udp sport 53 accept
oifname "samba-network" icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, echo-request, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, mld2-listener-report } accept
oifname "samba-network" udp sport 547 accept
}
}
CPU is:
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 63
model name : Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz
stepping : 2
microcode : 0x49
cpu MHz : 1197.502
cache size : 30720 KB
physical id : 0
siblings : 24
core id : 0
cpu cores : 12
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 15
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm arat pln pts md_clear flush_l1d
vmx flags : vnmi preemption_timer posted_intr invvpid ept_x_only ept_ad ept_1gb flexpriority apicv tsc_offset vtpr mtf vapic ept vpid unrestricted_guest vapic_reg vid ple shadow_vmcs
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_stale_data
bogomips : 5187.92
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management: