Hello,
I want to run DPDK in an LXC container, but unfortunately DPDK inside the container does not detect the physical interfaces.
I have an Intel ICE network card, and I want it to be visible inside the LXC container so I can run testpmd.
I was able to do this in a Docker container, but not in LXC.
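For reference, the Docker setup that works is essentially the usual privileged run with /dev/vfio and the hugepage mount passed through, roughly along these lines (the image name and EAL arguments here are only placeholders, not my exact command):

docker run -it --privileged \
    -v /dev/vfio:/dev/vfio \
    -v /dev/hugepages:/dev/hugepages \
    <dpdk-image> dpdk-testpmd -l 1-2 -n 4 -a 0000:82:00.1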
The DPDK driver I am using is vfio-pci.
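On the host, the port is bound to vfio-pci the usual way with dpdk-devbind.py; since there is no IOMMU in play (see the host output below), the unsafe no-IOMMU mode is enabled first. Roughly:

modprobe vfio-pci
echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
dpdk-devbind.py --bind=vfio-pci 0000:82:00.1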
Has anyone managed to set this up and could help?
The error I get when running dpdk-testpmd in LXC:
root@dpdk-c3-1:~# dpdk-testpmd
EAL: Detected CPU lcores: 44
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available 1048576 kB hugepages reported
EAL: Cannot open VFIO container /dev/vfio/vfio, error 2 (No such file or directory)
EAL: VFIO support could not be initialized
EAL: Requested device 0000:82:00.1 cannot be used
TELEMETRY: No legacy callbacks, legacy socket not created
testpmd: No probed ethernet devices
testpmd: create a new mbuf pool <mb_pool_0>: n=491456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=491456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Done
No commandline core given, start packet forwarding
io packet forwarding - ports=0 - cores=0 - streams=0 - NUMA support enabled, MP allocation mode: native
io packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=0
Press enter to exit
On the host, it works correctly:
root@jammy:~# dpdk-testpmd
EAL: Detected CPU lcores: 44
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: VFIO support initialized
EAL: Using IOMMU type 8 (No-IOMMU)
EAL: Probe PCI driver: net_ice (8086:1592) device: 0000:82:00.1 (socket 1)
ice_load_pkg_type(): Active package is: 1.3.36.0, ICE OS Default Package (single VLAN mode)
TELEMETRY: No legacy callbacks, legacy socket not created
testpmd: create a new mbuf pool <mb_pool_0>: n=491456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=491456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.
Configuring Port 0 (socket 1)
ice_set_rx_function(): Using AVX2 Vector Rx (port 0).
Port 0: B4:96:91:91:7B:D9
Checking link statuses...
Done
No commandline core given, start packet forwarding
io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 1 streams:
RX P=0/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
io packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=1
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x10000
RX queue: 0
RX desc=1024 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=1024 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x10000 - TX RS bit threshold=32
Press enter to exit
After running the command lxc config device add dpdk-c3-1 dev-vfio disk source=/dev/vfio path=/dev/vfio, the VFIO error changes to the following:
EAL: Cannot open VFIO container /dev/vfio/vfio, error 1 (Operation not permitted)
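Is passing the VFIO character devices individually the right approach instead of bind-mounting the whole directory? I was thinking of something like the following (the group name noiommu-0 is just a guess; it would have to match whatever appears under /dev/vfio on the host, which in no-IOMMU mode is noiommu-<n>):

lxc config device add dpdk-c3-1 vfio-ctl unix-char source=/dev/vfio/vfio path=/dev/vfio/vfio
lxc config device add dpdk-c3-1 vfio-grp unix-char source=/dev/vfio/noiommu-0 path=/dev/vfio/noiommu-0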
The settings in /etc/lxc/default.conf are as follows:
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx
lxc.privileged = 1
lxc.apparmor.profile = unconfined
lxc.cgroup.devices.allow = c 10:196 rwm
lxc.cgroup.devices.allow = c 226:* rwm
# Mount /dev/vfio
lxc.mount.entry = /dev/vfio dev/vfio none bind,optional,create=dir 0 0
# Mount /sys
lxc.mount.entry = /sys sys none rw,bind 0 0
lxc.mount.entry = /sys/bus/pci/drivers/ice /sys/bus/pci/drivers/ice none bind,optional,create=dir
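Do I also need to bind-mount the hugepage mount into the container? I was considering an entry like this (assuming 2 MB pages mounted at /dev/hugepages on the host):

lxc.mount.entry = /dev/hugepages dev/hugepages none bind,create=dir 0 0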