However, when I run incus network ls, it shows that the newly created bridge is not managed by Incus. I also noticed that the interface for this bridge needs to be brought up manually each time.
Do I need to create an Incus-managed bridge instead to avoid any issues? What is considered best practice when bridging a LAN network for Incus containers/VMs?
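For reference, an existing unmanaged bridge can still be handed to instances directly; a minimal sketch, assuming the host bridge is named br0:

```sh
# Attach instances in the default profile to the pre-existing host bridge br0.
# nictype=bridged with a parent tells Incus to use the bridge, not manage it.
incus profile device add default eth0 nic nictype=bridged parent=br0
```

Bringing the bridge up persistently is then the job of the host's network stack (systemd-networkd, netplan, NetworkManager, etc.), not Incus.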
It’s impossible to create an Incus-managed bridge network whose instances can reach your LAN’s DHCP server, because all Incus-managed bridges are served by an Incus-managed dnsmasq.
How? Please post your config.
No need, you already have an incusbr0, I guess.
Just using a Linux/OVS bridge will be fine. If you feel brave you can try DPDK+OVS. Or just use macvlan.
Incus in general can manage private networks, using NAT networking. Those instances (containers and VMs) will be able to have access to your LAN and the Internet. When I say access, it means that you can make network connections from your instances to the systems on your LAN and the Internet. However, your instances will not be able to get an IP address from your LAN.
If you want your containers to get an IP address from your LAN with _bridged networking_, then you need to create a suitable network interface externally, that is, on your host.
Therefore, as @catfish wrote, Incus cannot manage a network interface that will be used for bridged networking (i.e. the containers get an IP address from the LAN).
As you may have noticed in my blog post, it’s somewhat tricky to manage the networking of the host so that you end up with a proper bridge network interface. Depending on which Linux distribution (actually, which network stack) is running on the host, you need a corresponding tutorial. Various cases include NetworkManager, netplan, ifupdown, etc.
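As an example of the host-side work, on a netplan-based host the bridge might be declared like this (file path, interface names and DHCP settings are placeholders; adapt to your setup):

```yaml
# /etc/netplan/01-br0.yaml (hypothetical example)
network:
  version: 2
  ethernets:
    enp5s0:
      dhcp4: false        # the physical NIC no longer holds the address
  bridges:
    br0:
      interfaces: [enp5s0]
      dhcp4: true         # the bridge takes the host's LAN address
```

After `netplan apply`, instances attached to br0 can request addresses from the LAN DHCP server alongside the host.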
This is a problem. NAT and DNSMASQ should not be enforced as a default. Neither Proxmox nor ESXi does this by default. Network engineers won’t do this by default. It is a problem.
For me, the biggest pain point of Incus has been setting up the bridge on the host, and configuring the system containers to use the bridge and get their IP address from the LAN DHCP.
There must be a way for Incus to accommodate both a server/edge-centric approach to network connectivity AND a network/fabric-centric approach to be able to easily build out the virtual infrastructure as one would build out the wider network Infrastructure.
Defaults like NAT-DNSMASQ are very server/edge-centric and network engineers need to be able to build out from their much broader understanding of network Infrastructure.
This means clearly documented support of the two paradigms aligned with CLI and other implementation choices in the design.
Ideally the Incus functional UI, CLI and documentation will support the two distinct user bases who operate from quite different paradigms.
I’d argue that server guys who have some training in comms find the network easier to understand from the network perspective. The technologies and tooling in the network space are very mature and very well documented.
This is for sure possible; it all comes back to configuring the right networks in Incus itself.
By default Incus creates an internal bridge, incusbr0, where all traffic goes through NAT and dnsmasq. This is fine for most users. There are a few other options for using the host network, for example creating a managed network of type physical with incus network create enp5s0 parent=enp5s0 --type=physical, which shares the host’s interface with instances. Alternatively you can use OVN, MACVLAN, etc.
All in all, Incus is flexible enough to support all kinds of different network topology requirements. Just choose the one that fits your needs.
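To make the options above concrete, a sketch of the two variants (network, interface and instance names are placeholders):

```sh
# A managed network of type "physical", backed by the host NIC enp5s0
incus network create extnet --type=physical parent=enp5s0

# Or give a single instance a macvlan NIC straight onto the LAN
incus config device add mycontainer eth0 nic nictype=macvlan parent=enp5s0
```

One caveat with macvlan: the host itself usually cannot talk to a macvlan instance over the shared interface, which is fine for some setups and a deal-breaker for others.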
There must be a way for Incus to accommodate both a server/edge-centric approach to network connectivity AND a network/fabric-centric approach to be able to easily build out the virtual infrastructure as one would build out the wider network Infrastructure.
We also found this a bit unsuitable for our purposes and got a little … creative.
Intent: have secure instances that are publicly reachable but run on isolated networks. Instances may have a single IPv4 and/or IPv6 address assigned, or several.
As initial tests with Macvlan were underwhelming we chose OVS and tagged VLANs instead. The OS controls the uplink br0, Incus manages vlan-br0 and incusbr0. Our public instances use vlan-br0 and get static IPv4 and/or IPv6 assignments as needed. Private instances use incusbr0. That’s all stuff Incus handles natively up to that point and does it exceptionally well.
But to really make instances on vlan-br0 available to the outside world, we run a small daemon (incus-arp) we rigged up ourselves. It periodically polls the Incus API, fetches which instances are running and what their network settings and VLANs are, then creates and maintains the firewalld rules, ARP table, neigh entries and routes needed to make those instances publicly reachable. It also manages connectivity of publicly exposed OCI containers and hands them their network settings and DNS config via nsenter.
If an instance is stopped or its network settings change, these rules, routes and neigh entries are automatically adapted or torn down to reflect the changed reality.
As we use ARP/neigh entries, we can freely assign individual IPs from wildly different ranges to instances on the same Incus node and don’t have to assign entire blocks of public IPs. It just takes a moment for the network to learn what runs where and how it can be reached. This also avoids BGP and eliminates the need for direct control over the upstream router, which isn’t always an option unless you own the infrastructure.
Is it unorthodox? It sure is. But this works well for our situation (multiple unclustered single Incus instances) and we’ve been using this in production for a year and a half in various locations.
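The reconciliation idea behind such a daemon can be sketched roughly as follows. This is not the actual incus-arp code; the instance data shape, interface names, addresses and the choice of proxy-ARP plus host routes are all assumptions for illustration:

```python
# Sketch: from a list of running instances and their public IPs, compute the
# iproute2 commands that would publish each address on the uplink bridge.
# A real daemon would poll the Incus API, diff against current state, and
# execute/revert these commands as instances start, stop or change.

def build_publish_cmds(instances, uplink="br0"):
    """Return shell commands exposing each instance's public IPs via the uplink."""
    cmds = []
    for inst in instances:
        for ip in inst.get("public_ips", []):
            # Answer ARP for the address on the uplink (proxy-ARP entry)...
            cmds.append(f"ip neigh add proxy {ip} dev {uplink}")
            # ...and route traffic for it towards the instance's VLAN interface.
            cmds.append(f"ip route replace {ip} dev {inst['vlan_if']}")
    return cmds

if __name__ == "__main__":
    running = [
        {"name": "web1", "vlan_if": "vlan-br0", "public_ips": ["203.0.113.10"]},
    ]
    for cmd in build_publish_cmds(running):
        print(cmd)
```

Tearing down is the mirror image: emit `ip neigh del` / `ip route del` for entries whose instance no longer appears in the API response.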
NAT, DNSMASQ and MACVLAN are messy because their implementations are very automagical.
Unfortunately Incus leans heavily into this, and it is impossible to build out infrastructure the way network engineers build out the rest of the infrastructure: deliberately, from the ground up, with the ability to explicitly configure and fit together the nuts and bolts.
Linux is, in no small way, partially responsible for this unfortunate state of affairs but that doesn’t let Incus off the hook.
Incus documentation, while well written and presented, is not written for network engineers. It lacks the taxonomy, structure and detail typically found in network documentation.
I believe that if the documentation were presented for network engineers, with inline references translating into the various terms used on the server side, the server guys would also find it easier to understand and build their infrastructure.
It would become significantly easier to build switches (bridges), configure interfaces, VLANs and trunks, port aggregation (bundles) and many other network technologies.
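For instance, the iproute2 equivalents of those building blocks can be assembled explicitly, switch-style (interface names and the VLAN ID below are examples):

```sh
# Hand-built "switch" (bridge), tagged VLAN and port aggregation with iproute2
ip link add br0 type bridge                         # the switch itself
ip link add link eth0 name eth0.10 type vlan id 10  # tagged VLAN 10 on eth0
ip link set eth0.10 master br0                      # plug the VLAN into the switch
ip link add bond0 type bond mode 802.3ad            # LACP port aggregation (bundle)
ip link set br0 up
```

An instance NIC pointed at br0 (nictype=bridged, parent=br0) then sits on that explicitly constructed fabric.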
I hope this doesn’t sound too critical or inflexible. But maybe it is, and maybe my network bias is too strong for me to adapt to Incus’ and the server way of doing things, to understand and implement them. Keen to hear from other network engineers.
I am interested in learning more about networking topics. I have been picking it up organically over the last many years. Do you have any pointers for people learning the topic that come from the server side and, in my specific case, from cloud computing?
I only just learned about this book and its author, but its approach looks like a great introduction: it covers all the important topics and, above all, provides perspective:
Ivan Pepelnjak is excellent, though network engineers without CCNP-level (or higher) experience can find themselves swimming in deep water. Still, Ivan can bring perspective to all levels: