Add support for "ipv4.gateway" = "none"

I want to create two NICs, br0 and internal0 in my case. One should communicate with the external network for normal internet access. The internal0 bridge should only exist on the incus host, so network traffic doesn’t travel through the switch and stays on the host device. This also has the benefit of not needing to create DHCP entries.

I would like to see an option like ipv4.gateway = none to disable default route generation on this NIC.

Or am I missing some kind of configuration?

No need, you can use ipv4.routing:
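For instance, something like this (a sketch — the network name internal0 is assumed, and you should check the incus bridge network documentation for the exact semantics of ipv4.routing on your version):

```
# Disable IPv4 routing on the managed bridge "internal0"
incus network set internal0 ipv4.routing=false
```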

Thank you for the suggestion. Sadly, this still creates a default route on the wrong interface; from what I was able to research, it’s first come, first routed.
eth0 here is connected to the LAN switch, and eth1 is the one I created for internal usage.

I’m using Debian 13; it shows me two default routes, but whatever, it doesn’t matter.

If you need eth0 to be the default, you need to set a metric for each NIC: the lower the metric, the higher the priority. So eth0 gets 10 and eth1 gets 100.

https://www.baeldung.com/linux/change-network-routing-metric

If using systemd-networkd:

eth0.network:

[Match]
Name=eth0

[Network]
DHCP=true

[DHCPv4]
UseDomains=true
UseMTU=true
ClientIdentifier=mac

[Route]
Gateway=10.129.251.1
Metric=10

eth1.network:

[Match]
Name=eth1

[Network]
DHCP=true

[DHCPv4]
UseDomains=true
UseMTU=true
ClientIdentifier=mac
# RouteMetric sets the metric of the DHCP-learned default route
# (a [Route] section with only Metric= and no destination is not valid)
RouteMetric=100

That would work, but I have quite a few containers and don’t really want to micromanage them. Also, editing containers manually feels unclean to me.

OK, I’m out of methods. I notice your eth0 prefix is /12 and eth1’s is /24; Linux will prefer the more specific (smaller) subnet, which is the /24. Maybe change eth1 to /12 and eth0 to /24 for testing?

Could you simply create internal0 as an unmanaged bridge? Then you get no DHCP service, no routing, and so on.

Or are you saying that you still want eth1 to get its IP address via DHCP, from the incus-managed DHCP service?

I want to achieve the following:

Two separate network paths:

  1. Container ↔ Incus ↔ Container (eth1/internal0): Internal container-to-container communication

  2. Container ↔ LAN ↔ Internet (eth0/br0): External connectivity via Mikrotik router

Why this design:

My Mikrotik RB5009UPr+S+IN doesn’t create DNS entries for DHCP leases by default. By using Incus’s managed network (internal0), containers can:

  • Resolve each other by hostname automatically (e.g., grafana → prometheus)

  • Communicate directly without traffic going through the physical switch

  • Get automatic service discovery via Incus’s built-in DNS

This keeps internal traffic efficient and gives me hostname-based service discovery, while still allowing internet access through the existing LAN infrastructure.

Just a thought on how I imagine my homelab server setup…

Let me check I understand what you’re asking for:

  1. An unmanaged bridge (eth0/br0) connected to the subnet behind the Mikrotik. Containers pick up their IP address, and their default gateway, directly from the Mikrotik DHCP server. This all works fine, except no auto DNS.
  2. A managed bridge (eth1/internal0). The incus DHCP server configures this, and updates the incus DNS server.
  3. Containers connect to both networks, and use the incus DNS server.

AFAICS, the only purpose of the proposed internal0 network is to get DNS names registered (since container-to-container traffic would work fine over the other bridge - it would stay internal to br0 and wouldn’t leave the host).

Now, to make this work you have several problems. Apart from the one you’ve already identified (you don’t want to pick up default gateway from internal0), you need the containers to use the incus DNS server to resolve each others’ names. That means that the Mikrotik DHCP server will have to give out the incus server’s IP address as the DNS server setting. And that means that everything else on your network, not just the incus containers, will use the incus DNS.

Or: you will get conflicting DNS settings from the two DHCP servers, similar to how you are getting conflicting default gateway settings. Therefore, I think this is the wrong approach for getting container names into DNS.

Now, it is possible to get the Mikrotik DHCP server to update DNS. There are script hooks which you may be able to use for this, or just periodically scan the leases.
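As a rough sketch, a script along these lines can be attached via the DHCP server’s lease-script setting (the variable names such as $leaseBound, $leaseActIP and $"lease-hostname" come from RouterOS documentation and community examples, and the domain is an assumption — verify against your RouterOS version before using it):

```
# Paste into /ip dhcp-server → lease-script: add/remove a static DNS
# entry whenever a lease is bound or released.
:if ($leaseBound = 1) do={
    /ip dns static add name=($"lease-hostname" . ".internal.example.com") address=$leaseActIP
} else={
    /ip dns static remove [find where address=$leaseActIP]
}
```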

Another option is to run your own local DNS server, authoritative for some domain like internal.example.com, and forward DNS requests from the Mikrotik:

/ip dns static
add forward-to=192.168.0.53 regexp="\\.internal\\.example\\.com\$" type=FWD
add forward-to=192.168.0.53 regexp="\\.168\\.192\\.in-addr\\.arpa\$" type=FWD

Then you need to update it as containers come and go. I’m not aware of hooks in incus to trigger updates automatically. What I do is more or less the opposite: first I create an entry in Netbox for my container, which runs a webhook trigger to update the DNS. Then I create the container, using a shell script which fetches the desired IP from Netbox and launches the container, passing a cloud-init config which assigns the chosen IP address statically. I admit that’s more complexity than most people would like!

If you still want to use the incus DHCP/DNS, I can offer you another solution:

  • Create a single incus-managed bridge (eth0/br0). Assign it its own subnet, but disable NAT.
  • Add a static route on your Mikrotik which routes the bridge’s subnet via your incus host.
  • That’s it. Create containers with a single eth0 connection on br0.
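A sketch of the incus side, assuming 192.168.2.0/24 as the bridge subnet (the network name and addresses are illustrative):

```
# Managed bridge with incus DHCP/DNS, but no NAT: the subnet is
# reached via a static route on the Mikrotik instead.
incus network create br0 \
    ipv4.address=192.168.2.1/24 \
    ipv4.nat=false \
    ipv6.address=none
```

The Mikrotik then needs a static route for 192.168.2.0/24 pointing at the incus host’s LAN address.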

Since br0 is a bridge, it will be used for internal container-to-container communication (this traffic will not leave the incus host). And the containers will see the local DNS as you require.

The problem then you’ll get is if you communicate between some other device behind the Mikrotik (e.g. a laptop) and a container. If the incus host is on the same subnet as the laptop, then the flow is asymmetric: it goes laptop → Mikrotik → incus host → container in one direction (following the laptop’s default gateway), but container → incus host → laptop in the other direction. This breaks connection tracking on the Mikrotik: what you’ll see is connections work for about 20 seconds, and then break. I’ve been there :slight_smile:

This can be fixed though: make a separate subnet from the Mikrotik for the incus host uplink - either a separate VLAN, or a separate physical connection. That is:

                 ^
                 |
              Mikrotik
              |.1   |.1
192.168.0     |     |     192.168.1
--------+-----+     +------+-------
        |                  |.2
     laptop             incus
                        br0|.1
                           |  192.168.2
                          -+-+-+-+--
                             | | |
                           containers

/ip route
add disabled=no dst-address=192.168.2.0/24 gateway=192.168.1.2

This is actually a pretty decent way to run. All laptop<->container traffic is routed via the Mikrotik. However, your laptop won’t see the container names in its DNS.

What if you want multiple incus servers? They can each have their own IP pool and their own uplink on the 192.168.1 network and static route. The problem is that if you move a container from one to another, its IP address will change; and you also won’t have all the names registered in a single DNS view.

An incus cluster with OVN would probably solve that, but that’s a large hammer for a small nut. I don’t use incus clustering myself, as it introduces failure modes that standalone incus servers don’t have.

Another suggestion is a small hook inside your containers, which updates dynamic DNS. You can use cloud-init to apply this to all containers you create.
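A sketch of what that could look like with cloud-init and nsupdate (the DNS server address, the zone, and the lack of TSIG authentication are all assumptions — a real setup should authenticate updates with a TSIG key):

```
#cloud-config
# Register this container's name and address in a local DNS zone at boot.
packages:
  - bind9-dnsutils
runcmd:
  - |
    nsupdate <<EOF
    server 192.168.0.53
    update add $(hostname).internal.example.com 300 A $(hostname -I | awk '{print $1}')
    send
    EOF
```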

And yet another possibility is to run a standalone DHCP server behind the Mikrotik (i.e. turn off the MT DHCP) which can do dynamic DNS updates.
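dnsmasq is one candidate for this, since it automatically registers DHCP client hostnames in its DNS view — a minimal sketch (the interface name, range, domain and router address are assumptions):

```
# /etc/dnsmasq.conf — DHCP plus DNS in one daemon; hostnames from
# DHCP leases become resolvable under the configured domain.
interface=eth0
domain=internal.example.com
expand-hosts
dhcp-range=192.168.0.100,192.168.0.200,12h
dhcp-option=option:router,192.168.0.1
```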

Anyway, I hope there are a few useful ideas there.

Thank you for the suggestion. I was already aware of the DHCP-to-DNS scripts for MikroTik RouterOS. I followed this guide, and the solution works perfectly.

I had been hesitant to implement it initially because I thought it felt “unclean,” but after exploring the alternatives, this approach is actually the easiest and cleanest solution for my use case.

I’ve now removed the internal0 network since br0 already handles internal container-to-container traffic locally without going through the switch, making the second network unnecessary.
