Help setting up a bridge network adapter, separate physical nictype passthru

Howdy,

I would like to set up a Debian-based VM in Incus, install Incus within it, and use that instance as a virtual server running LXC containers. I have two ethernet ports on my machine and a wireless card, all of which I would like the VM to ‘own.’ The machine runs 64-bit MX Linux 23.1 with KDE.
I am following this tutorial from @trevor, hoping to set up the ‘bridge network adapter’ as the parent of the VM’s eth0, while letting the VM have the other two devices directly. I then expect to pass those two into one of the nested LXC containers (running something like OpenWRT) as physical nictype devices as well.
I created a project, a storage volume, a profile, a network, and an instance, named anvil, anvil, anvil, anvilbr0, and kixikur, respectively. Here is an excerpt from that profile:

...
  eth0:
    name: eth0
    nictype: bridged
    parent: anvilbr0
    type: nic
  eth1:
    name: eth1
    nictype: physical
    parent: eth1
    type: nic
  root:
    path: /
    pool: anvil
    type: disk
  wlan0:
    name: wlan0
    nictype: physical
    parent: wlan0
    type: nic
...
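
In case it helps anyone reproducing this, the same devices can also be added from the CLI with commands along these lines (the --project flag because everything here lives in the anvil project):

incus --project anvil profile device add anvil eth0 nic nictype=bridged parent=anvilbr0 name=eth0
incus --project anvil profile device add anvil eth1 nic nictype=physical parent=eth1 name=eth1
incus --project anvil profile device add anvil wlan0 nic nictype=physical parent=wlan0 name=wlan0
incus --project anvil profile device add anvil root disk pool=anvil path=/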

But, to me, the networks in the instance don’t look right.


In the instance, it looks like lo changes host interface every second or so, and where are the rest of the interfaces? From inside the instance:

root@kixikur:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:16:3e:aa:59:fc brd ff:ff:ff:ff:ff:ff
    inet 10.31.188.71/24 metric 1024 brd 10.31.188.255 scope global dynamic enp5s0
       valid_lft 3515sec preferred_lft 3515sec
    inet6 fd42:a2c:5926:cd1e:216:3eff:feaa:59fc/64 scope global mngtmpaddr noprefixroute 
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:feaa:59fc/64 scope link 
       valid_lft forever preferred_lft forever
3: enp6s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 3c:52:82:5d:13:44 brd ff:ff:ff:ff:ff:ff

From outside the instance:

$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether a0:36:9f:43:ac:f3 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.10/24 brd 192.168.0.255 scope global dynamic noprefixroute eth0
       valid_lft 3061sec preferred_lft 3061sec
    inet6 fe80::6f64:ffa3:5d7f:d1da/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
5: anvilbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:b1:4a:af brd ff:ff:ff:ff:ff:ff
    inet 10.31.188.1/24 scope global anvilbr0
       valid_lft forever preferred_lft forever
    inet6 fd42:a2c:5926:cd1e::1/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:feb1:4aaf/64 scope link 
       valid_lft forever preferred_lft forever
6: incusbr-1001: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:ff:ed:0b brd ff:ff:ff:ff:ff:ff
    inet 10.148.13.1/24 scope global incusbr-1001
       valid_lft forever preferred_lft forever
    inet6 fd42:d3c5:22b0:6cd4::1/64 scope global 
       valid_lft forever preferred_lft forever
7: incusbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:1a:c0:0e brd ff:ff:ff:ff:ff:ff
    inet 10.58.164.1/24 scope global incusbr0
       valid_lft forever preferred_lft forever
    inet6 fd42:acda:9d30:547::1/64 scope global 
       valid_lft forever preferred_lft forever
10: tapf3d391dc: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master anvilbr0 state UP group default qlen 1000
    link/ether c2:77:60:c4:c1:0a brd ff:ff:ff:ff:ff:ff

If I understood the tutorial, the instance should only have one interface, corresponding to anvilbr0, but it has two enpXsX interfaces instead. And the host still uses eth0, yet now has a tap interface. I’m not familiar with tap interfaces, or probably most networking concepts. Having read documentation and blogs, and watched presentations and tutorials, I thought this would make more sense to me than it does. I appreciate your help.

Best,

UpsetMOSFET

EDIT: I should also mention that apt update doesn’t work in the instance. It looks like that’s because it has no internet access; every repository stalls at zero percent. Something about networks.

EDIT2: I did say, “something like OpenWRT.” I won’t be using OpenWRT itself, as @stgraber indicated in this post that it would be a poor fit for exactly what I want to do. I’m hoping DietPi with Pihole+Unbound will do the trick for me, maybe with Shorewall. Open to suggestions there, too.

Yeah, debugging with @mcondarelli has shown that OpenWRT actively renames interfaces both on startup and shutdown, so that’s definitely a bit of a problem.

EDIT3: It occurs to me that this may require the user running Incus to have superuser permission, not just to be a member of incus-admin. What effect does this have?

I think this looks correct. With virtual machines, we don’t control interface naming the way we do for containers, so the names depend on what the OS inside the VM does.

In this case, I suspect the content of the VM is:

  • enp5s0 (your eth0 device)
  • enp6s0 (your eth1 device)
  • MISSING (your wlan0 device)

The wifi device is missing because the default kernel doesn’t include the extra drivers for things like wireless. If the VM is Ubuntu, install the linux-generic package and reboot; that should make the wifi card show up.
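
For an Ubuntu guest that would be roughly:

apt update
apt install linux-generic
reboot

(On a Debian guest, the stock kernel usually already ships the wireless drivers as modules; what tends to be missing is the matching firmware-* package from the non-free-firmware section.)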

On the host side, the tapfXYZ device is the host-side end of your eth0 VM device and will be part of the anvilbr0 bridge.
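
You can double-check the bridge membership from the host with something like:

ip link show master anvilbr0

which lists every interface currently enslaved to anvilbr0 (right now, just the tap device).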

Maybe I should be clearer than “doesn’t look right,” especially given my untrained eye.

  • Based on the tutorial I linked, I expected that ip addr on the host would show parent anvilbr0 for eth0. At least anvilbr0 definitely gets an ip address.
  • I expected the name parameter in the Incus profile would apply in the container. It turns out the Debian convention for naming network interfaces changed at some point, and I have no idea what it could mean now. Interesting.
  • I figured the eth0-analogue in the container, enp5s0, would get an IP address from Incus’s DHCP server. Maybe it should be looking to the router instead, since it is not sharing the host’s IP. And yet, it has an IPv6 address? Do I need to do something to get internet access in the instance?
  • Based on the Linux Containers wiki page for NICs, I expected that, with both eth1 and wlan0 targeted for physical passthrough, “The targeted device will vanish from the host and appear in the instance.” So I expected there to be an interface for wlan0, even if the OS doesn’t have a driver to communicate through it. I guess that last part’s kinda dumb, but I still expected eth1 to be more obvious. I’m not convinced that what I’m seeing is the host’s eth1 (see the quick check after this list).
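
A quick way to test that last point, while the VM is running, is to look for the devices on the host:

ip link show eth1
ip link show wlan0

If the physical passthrough worked as described, both should report that the device does not exist on the host, since it has moved into the VM.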

Okay, so I looked up tap interfaces. I gather they’re something ‘in userspace’ for referring to an interface, but I don’t know why that would be useful when the interface in question is still exposed. That’s probably an implementation detail, and beyond me.

Hmm, I don’t think your eth0 host interface is part of anvilbr0; if it were, eth0 itself wouldn’t have an IP address. It just looks like anvilbr0 is a normal managed bridge and eth0 is your host interface through which all traffic is routed.

The name property is used as the initial interface name for interfaces in containers.
It’s effectively ignored for virtual machines since, unlike with containers, Incus can’t dictate the initial name of the interface.

In containers, it’s also possible for software running in the container to further rename the interface.

Your output above shows enp5s0 has 10.31.188.71 from DHCP, so it did get an IP address from the Incus DHCP server.

Based on the output you’ve shared, you are using a VM, not a container.

With VMs, all Incus can do is attach those network devices to the right virtual PCI or USB bus.
You then need to make sure that the guest OS has the right drivers in place.
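
A quick way to see what actually got attached, even before any driver loads, is to look at the PCI bus from inside the VM, for example:

lspci | grep -i -E 'ethernet|network'

Each passed-through NIC should show up there; if a device is listed but never appears in ip link, the missing piece is a guest driver (or firmware), not the Incus side.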

Your enp6s0 device in your VM has the MAC address 3c:52:82:5d:13:44; the 3c:52:82 prefix indicates an HP network card, which definitely suggests it’s your physical network interface.

That tap device is how virtual machines attach to bridges like your anvilbr0.

Aha! :sweat_smile: That definitely matters! I was calling a gander a goose!

Okay, yeah. This page from Red Hat helped a ton. I used ip link set eth0 master anvilbr0. That, as it turns out, was literally all it took to bridge eth0 so the VM has internet. I’ll still have to learn how to make it stick after a reboot.
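(It looks like one way to make it persist is to let Incus handle it: managed bridges have a bridge.external_interfaces option, so something like

incus network set anvilbr0 bridge.external_interfaces=eth0

should re-attach eth0 to the bridge whenever Incus brings the network up. I haven’t confirmed it on my setup yet, and I gather the interface is expected to be otherwise unconfigured, so take it as a lead rather than a fix.)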
I don’t understand what’s going on inside, though. I can ping 9.9.9.9 with no lost packets, but not startpage.com, and I get name resolution errors with apt-get, too. Does Incus also manage DNS on bridges like anvilbr0? Do I need to change it?
I’m going through the process of turning Debian into Kicksecure, and the many apt commands are really hit-or-miss with name resolution, even for debian.org repos. It’s gotten to the point where I can’t even install the wireless drivers to confirm that wlan0 was passed in correctly.

I think you’re right. I think the wireless card is going in fine, too, because the host’s wireless connection shows as ‘deactivated’ every time the VM comes online and can no longer access the internet.

EDIT: I finally found this in the docs: Enable the built-in DNS server

To make use of network zones, you must enable the built-in DNS server.
To do so, set the core.dns_address configuration option to a local address on the Incus server. To avoid conflicts with an existing DNS we suggest not using the port 53. This is the address on which the DNS server will listen. Note that in an Incus cluster, the address may be different on each cluster member.

I do want to make use of network zones, very certainly. I take this to mean that I have to explicitly tell Incus to turn on the DNS server, and then tell the instance to look to the host for DNS. In my case, the host no longer has an IP address by the time the VM is querying the nameserver. “A local address on the Incus server” makes it sound, to me, like this isn’t built-in. Unless… Can my VM not communicate with the host? Isn’t that the reason for a bridge versus macvlan? :face_exhaling:

Okay, it progresses. Now, instead of ip link set eth0 master anvilbr0, I run brctl addif anvilbr0 eth0. From everything I can tell, it has the same effect of assigning anvilbr0 as the master of eth0 on the host.

Okay, then, inside the VM: I finally connected configuring the Incus server with enabling the built-in DNS server. I ran incus set core.dns_address 10.31.188.1/24, using the IP Incus assigned to anvilbr0. But when I drop into the VM, I still get ping: startpage.com: Temporary failure in name resolution. Since anvilbr0 is bridged into the VM, I figured telling Incus to put the DNS server there would give the VM access. A little help?
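(Rereading the docs, the value is apparently supposed to be an address and port, so presumably something more like incus config set core.dns_address 10.31.188.1:1053, with a port other than 53; the 1053 there is just an arbitrary example. As it turned out, though, that setting wasn’t the real issue; see the next post.)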

For anyone following along, the solution was my firewall. Back when I wrote the first post, I had disabled my firewall before doing anything with Incus at all. Since then, I’d forgotten to disable it when I came back to work on Incus. Eventually I will probably disable the firewall on the host altogether, but for now I added some ufw rules. Both methods, and others, are on the firewall page of the Incus wiki.
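
For reference, the ufw rules on that page are along these lines (with my bridge name substituted):

ufw allow in on anvilbr0
ufw route allow in on anvilbr0
ufw route allow out on anvilbr0

i.e. allow traffic from the bridge to reach the host (DHCP and DNS from the instances) and allow forwarded traffic in and out of the bridge.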
