Incus VM with e1000e Network Driver

I have a requirement to replicate physical hardware in an Incus VM, and I need to make it work with the e1000e network driver.

I tried the following:

raw.qemu: '-netdev tap,id=netdev0 -device e1000e,netdev=netdev0'

which throws the following error:

qemu-system-x86_64: -netdev tap,id=netdev0: could not configure /dev/net/tun: Operation not permitted

Another configuration I tried:

raw.qemu: '-netdev bridge,id=netdev0 -device e1000e,netdev=netdev0'

throws the following error:

qemu-system-x86_64: -netdev bridge,id=netdev0: bridge helper failed

I was able to make it work by manually creating a tap interface and hardcoding its name in the VM configuration, but I'm looking for a better option here. Any ideas?
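For the record, that manual workaround looked roughly like this (the tap and bridge names are just examples, not necessarily what you'd use):

ip tuntap add dev tap-e1000e0 mode tap
ip link set tap-e1000e0 master virbr0 up
incus config set <vm> raw.qemu='-netdev tap,id=netdev0,ifname=tap-e1000e0,script=no,downscript=no -device e1000e,netdev=netdev0'

The downside is that the tap has to exist and be attached to the right bridge before every VM start.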

According to the docs, if you want to override something in the generated /run/incus/<instance>/qemu.conf, then you'd use the raw.qemu.conf key.
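For example, the docs show overriding a section that does appear in the generated file; something along these lines (the qemu_gpu section name and qxl-vga driver are just illustrative):

  raw.qemu.conf: |
    [device "qemu_gpu"]
    driver = "qxl-vga"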

However, it's not immediately obvious to me how the NICs are configured. I'm running a VM with two NICs, which works fine, but the NICs aren't defined in qemu.conf, nor do they appear on the QEMU command line.

Inside the VM, dmesg shows the NICs are presented as virtio10 and virtio11 before being renamed, and lspci shows me:

05:00.0 Ethernet controller: Red Hat, Inc. Virtio network device (rev 01)
06:00.0 Ethernet controller: Red Hat, Inc. Virtio network device (rev 01)

So I’m wondering if they are hot-plugged?

I found the following (two excerpts) in internal/server/instance/drivers/driver_qemu.go:

// Attach NIC to running instance.
if len(runConf.NetworkInterface) > 0 {
    err = d.deviceAttachNIC(dev.Name(), configCopy, runConf.NetworkInterface)
    if err != nil {
        return nil, err
    }
}

d.logger.Debug("Using PCI bus device to hotplug NIC into", logger.Ctx{"device": deviceName, "port": pciDeviceName})
qemuDev["bus"] = pciDeviceName
qemuDev["addr"] = "00.0"

if slices.Contains([]string{"pcie", "pci"}, busName) {
    qemuDev["driver"] = "virtio-net-pci"
} else if busName == "ccw" {
    qemuDev["driver"] = "virtio-net-ccw"
}

And this ultimately calls m.AddNIC() in internal/server/instance/drivers/qmp/commands.go, which sends a netdev_add message.

So, without fully understanding the code here, it seems to me that incus-created NICs are dynamically added, and the use of virtio-net for network devices is hard-coded. (qemuDev is passed through from deviceAttachNIC() to addNetDevConfig(), but it looks like qemuDev["driver"] is forced in the code above).

Aha: Instance options - Incus documentation

While raw.qemu and raw.qemu.conf can be used to alter the arguments and configuration file that’s passed to QEMU, a lot of devices are now added through QMP instead.

This is used by Incus for any device which may need to be re-configured at runtime, effectively anything that can be hot-plugged.

Those devices cannot be overridden through the configuration or the command line, but instead additional configuration keys are available to run QMP commands directly.

It then goes on to describe the hooks, including netdev_add. Scriptlet documentation is under instance placement and there is a corresponding video.

If you get this working, please post back a howto here!

Thanks, Brian, for the detailed info. Creating a scriptlet was honestly my last resort, since it requires more time and effort.

I finally got it to work. AppArmor was denying access to the required binaries and files.

  raw.apparmor: |
    /usr/lib/qemu/qemu-bridge-helper mrix,
    /etc/qemu/bridge.conf r,
    capability net_admin,
  raw.qemu.conf: |
    [device "eth0"]
    id = "eth0"
    driver = "e1000e"
    bus = "pcie.0"
    addr = "2.4"
    netdev = "netdev0"

    [netdev "netdev0"]
    id = "netdev0"
    type = "bridge"
    br = "virbr0"
    helper = "/usr/lib/qemu/qemu-bridge-helper"

OR

(I prefer this one, since it uses the existing PCIe bus.)

raw.qemu: -netdev bridge,br=virbr0,id=netdev0,helper="/usr/lib/qemu/qemu-bridge-helper" -device e1000e,netdev=netdev0
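To apply it, setting the key and restarting should be enough (the instance name is a placeholder), together with the raw.apparmor lines above; the bridge also needs to be whitelisted for qemu-bridge-helper:

echo 'allow virbr0' >> /etc/qemu/bridge.conf
incus config set <vm> raw.qemu='-netdev bridge,br=virbr0,id=netdev0,helper="/usr/lib/qemu/qemu-bridge-helper" -device e1000e,netdev=netdev0'
incus restart <vm>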

dmesg output from the VM:

root@test:~# dmesg | grep e1000e
[    0.853569] e1000e: Intel(R) PRO/1000 Network Driver
[    0.853571] e1000e: Copyright(c) 1999 - 2015 Intel Corporation.
[    0.853946] e1000e 0000:00:02.4: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
[    0.905990] e1000e 0000:00:02.4 0000:00:02.4 (uninitialized): registered PHC clock
[    0.957190] e1000e 0000:00:02.4 eth0: (PCI Express:2.5GT/s:Width x1) 52:54:00:12:34:56
[    0.957193] e1000e 0000:00:02.4 eth0: Intel(R) PRO/1000 Network Connection
[    0.957215] e1000e 0000:00:02.4 eth0: MAC: 3, PHY: 8, PBA No: 000000-000
[    0.957675] e1000e 0000:00:02.5: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
[    1.002176] e1000e 0000:00:02.5 0000:00:02.5 (uninitialized): registered PHC clock
[    1.051890] e1000e 0000:00:02.5 eth1: (PCI Express:2.5GT/s:Width x1) 52:54:00:12:34:57
[    1.051893] e1000e 0000:00:02.5 eth1: Intel(R) PRO/1000 Network Connection
[    1.051915] e1000e 0000:00:02.5 eth1: MAC: 3, PHY: 8, PBA No: 000000-000
[    1.215006] e1000e 0000:00:02.5 enp0s2f5: renamed from eth1
[    1.216388] e1000e 0000:00:02.4 enp0s2f4: renamed from eth0
[    2.198569] e1000e 0000:00:02.4 enp0s2f4: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx

TAP interfaces on the host system:

# ip link | grep tap
31: tap0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master virbr0 state UNKNOWN mode DEFAULT group default qlen 1000
32: tap1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master virbr1 state UNKNOWN mode DEFAULT group default qlen 1000

Now I just need to figure out the interface naming convention. Our legacy system applications only work with eth1 and eth2. :face_with_head_bandage:

If you just want to alter the existing devices rather than define new ones from scratch, you can use the raw.qemu.scriptlet option to perform boot-time re-configuration of those devices.

  raw.qemu.scriptlet: |
    def qemu_hook(instance, stage):
      if stage != "pre-start":
        return

      # Convert ethernet
      eth0_bus = run_qmp({"execute": "qom-get", "arguments": {"path": "/machine/peripheral/dev-incus_eth0", "property": "parent_bus"}})["return"].split("/")[-1]
      eth0_addr = run_qmp({"execute": "qom-get", "arguments": {"path": "/machine/peripheral/dev-incus_eth0", "property": "addr"}})["return"]
      eth0_mac = run_qmp({"execute": "qom-get", "arguments": {"path": "/machine/peripheral/dev-incus_eth0", "property": "mac"}})["return"]
      eth0_netdev = run_qmp({"execute": "qom-get", "arguments": {"path": "/machine/peripheral/dev-incus_eth0", "property": "netdev"}})["return"]
      run_qmp({"execute": "device_del", "arguments": {"id": "dev-incus_eth0"}})
      run_qmp({"execute": "system_reset"})
      run_qmp({"execute": "device_add", "arguments": {"id": "dev-incus_eth0", "driver": "e1000e", "bus": eth0_bus, "addr": eth0_addr, "netdev": eth0_netdev, "mac": eth0_mac}})

This effectively captures all the useful properties from the Incus-generated device, then removes it, resets the guest (system_reset) and adds the device back with a different driver.
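One way to apply it (just a sketch; the file name is arbitrary) is to load the scriptlet from a file and restart the instance:

incus config set <vm> raw.qemu.scriptlet="$(cat nic-e1000e.star)"
incus restart <vm>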

If the VM uses netplan, in principle you ought to be able to use set-name: to pick a different name, but it looks like there may be some problems with this: Bug #1802004 “netplan won't apply config on renamed interface” : Bugs : Netplan

At worst, you can set net.ifnames=0 on the kernel command line in the GRUB config, and then it will stick to the old eth0/eth1 names. If you want eth1/eth2, then just define an additional NIC that you won't use.
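As a sketch of that last fallback (assuming a GRUB-based guest):

# In the guest, append to the kernel command line in /etc/default/grub:
GRUB_CMDLINE_LINUX="... net.ifnames=0"
# Then regenerate the GRUB config and reboot:
update-grub    # or: grub2-mkconfig -o /boot/grub2/grub.cfg
reboot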