NIC type routed not working: cannot reach local LAN

Hi,

On a clean install of Gentoo I have installed only Incus and a container “hmm5”.
The host is at 192.168.9.81/24 via a bonded network card, behind a NATing
router at 192.168.9.254. Everything is pretty standard.

I wanted the container at 192.168.9.155 to be part of the LAN using the “routed”
NIC type. I don't like MACVLAN or IPVLAN because I want a central firewall
on the host for all containers. Coming from Virtuozzo/OpenVZ, this is the setup
closest to their “venet” layer-3 bridge, which is not available in Incus.

This is the setup configuration:

# uname -a
Linux mask.freakout.de 6.12.16-gentoo #11 SMP PREEMPT_DYNAMIC
 x86_64 Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz GenuineIntel GNU/Linux

# incus version
Client version: 6.0.3
Server version: 6.0.3

# incus config show
config:
  images.auto_update_interval: "0"

# incus network list
+---------+----------+---------+------------------+------+-------------+---------+---------+
|  NAME   |   TYPE   | MANAGED |       IPV4       | IPV6 | DESCRIPTION | USED BY |  STATE  |
+---------+----------+---------+------------------+------+-------------+---------+---------+
| bond0   | bond     | NO      | 192.168.9.81/24  |      |             | 2       |         |
+---------+----------+---------+------------------+------+-------------+---------+---------+
| lo      | loopback | NO      |                  |      |             | 0       |         |
+---------+----------+---------+------------------+------+-------------+---------+---------+
| lxdbr0  | bridge   | YES     | 192.168.181.1/24 | none |             | 2       | CREATED |
+---------+----------+---------+------------------+------+-------------+---------+---------+

# incus network show lxdbr0
config:
  ipv4.address: 192.168.181.1/24
  ipv4.nat: "false"
  ipv6.address: none
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/hmm4
- /1.0/profiles/default
managed: true
status: Created
locations:
- none
project: default

# incus storage show masksp
config:
  source: vga/lxd
  volatile.initial_source: vga/lxd
  zfs.pool_name: vga/lxd
description: ""
name: masksp
driver: zfs
used_by:
- /1.0/images/0e5837afd2b69b188594af0b8f9787c2f02fe5000fdc5051c98b36438c93ab8f
- /1.0/images/6fa9b59aec5b6c1468369e4f4ba4768d5078da6e630555c3335b23c4a785405a
- /1.0/instances/hmm4
- /1.0/instances/hmm5
- /1.0/profiles/default
- /1.0/profiles/routed
status: Created
locations:
- none

# incus list
+------+---------+----------------------+------+-----------------+-----------+
| NAME |  STATE  |         IPV4         | IPV6 |      TYPE       | SNAPSHOTS |
+------+---------+----------------------+------+-----------------+-----------+
| hmm5 | RUNNING | 192.168.9.155 (eth0) |      | CONTAINER       | 0         |
+------+---------+----------------------+------+-----------------+-----------+

# incus config show hmm5
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Gentoo current amd64 (20250313_05:19)
  image.os: Gentoo
  image.release: current
  image.requirements.secureboot: "false"
  image.serial: "20250313_05:19"
  image.type: squashfs
  image.variant: openrc
  volatile.base_image: 6fa9b59aec5b6c1468369e4f4ba4768d5078da6e630555c3335b23c4a785405a
  volatile.cloud-init.instance-id: bfab1b4c-faac-4d7a-929c-4513fddcc5ac
  volatile.eth0.host_name: vethc1fe76ad
  volatile.eth0.hwaddr: 00:16:3e:f0:62:47
  volatile.eth0.name: eth0
  volatile.idmap.base: "0"
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.uuid: bd0eb551-943e-4206-be73-0dbff55abdbd
  volatile.uuid.generation: bd0eb551-943e-4206-be73-0dbff55abdbd
devices: {}
ephemeral: false
profiles:
- routed
stateful: false
description: ""

# incus profile show routed
config:
  user.network-config: |
    version: 2
    ethernets:
        eth0:
            addresses:
            - 192.168.9.155/32
            nameservers:
                addresses:
                - 8.8.8.8
                search: []
            routes:
            -   to: 0.0.0.0/0
                via: 169.254.0.1
                on-link: true
description: routed profile
devices:
  eth0:
    ipv4.address: 192.168.9.155
    nictype: routed
    parent: bond0
    type: nic
  root:
    path: /
    pool: masksp
    type: disk
name: routed
used_by:
- /1.0/instances/hmm5
project: default

# incus info hmm5
Name: hmm5
Status: RUNNING
Type: container
Architecture: x86_64

Resources:
  Processes: 3
  Disk usage:
    root: 1.08MiB
  CPU usage:
    CPU usage (in seconds): 2
  Network usage:
    eth0:
      Type: broadcast
      State: UP
      Host interface: vethc1fe76ad
      MAC address: 00:16:3e:f0:62:47
      MTU: 1500
      Bytes received: 2.90kB
      Bytes sent: 322.92kB
      Packets received: 38
      Packets sent: 1018
      IP addresses:
        inet:  192.168.9.155/32 (global)
        inet:  169.254.213.47/16 (link)

My problem is that “hmm5” cannot reach any host in the 192.168.9.0/24 network.
There is also a host at 192.168.9.82 in the LAN that is reachable for testing:

# ping -c1 -W2 192.168.9.82
PING 192.168.9.82 (192.168.9.82) 56(84) bytes of data.
64 bytes from 192.168.9.82: icmp_seq=1 ttl=64 time=0.188 ms

After starting “hmm5” I see the following network info on the host and in the container:

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
2: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 192.168.9.81/24 brd 192.168.9.255 scope global bond0
9: lxdbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    inet 192.168.181.1/24 scope global lxdbr0
11: vethc1fe76ad@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 169.254.0.1/32 scope global vethc1fe76ad

# arp -a
? (192.168.9.155) at <incomplete> on bond0
? (192.168.9.82) at 90:1b:0e:37:42:01 [ether] on bond0
? (192.168.9.155) at 00:16:3e:f0:62:47 [ether] on vethc1fe76ad
? (192.168.9.155) at <from_interface> PERM PUB on bond0

hmm5 ~ # ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.9.155  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::216:3eff:fef0:6247  prefixlen 64  scopeid 0x20<link>
        ether 00:16:3e:f0:62:47  txqueuelen 1000  (Ethernet)
        RX packets 38  bytes 2896 (2.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2350  bytes 778460 (760.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 47  bytes 4512 (4.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 47  bytes 4512 (4.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

hmm5 ~ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
10: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 192.168.9.155/32 scope global eth0
    inet 169.254.213.47/16 brd 169.254.255.255 scope global noprefixroute eth0

hmm5 ~ # ip r
default dev eth0 scope link src 169.254.213.47 metric 1000010
                                ^^^^^^^^^^^^^^ WRONG?
169.254.0.0/16 dev eth0 scope link src 169.254.213.47 metric 10
169.254.0.1 dev eth0 scope link

hmm5 ~ # ping -c1 -W2 169.254.0.1
PING 169.254.0.1 (169.254.0.1) 56(84) bytes of data.
64 bytes from 169.254.0.1: icmp_seq=1 ttl=64 time=0.063 ms

hmm5 ~ # ping -c1 -W2 192.168.9.81
PING 192.168.9.81 (192.168.9.81) 56(84) bytes of data.
1 packets transmitted, 0 received, 100% packet loss, time 0ms

After changing the default route's source address to the container's IP, I can
reach the host, but not any host on the LAN or the router. The proxy-ARP entry on
bond0 was set up correctly by Incus, but it doesn't work in either direction:

hmm5 ~ # ip r del default dev eth0 scope link src 169.254.213.47 metric 1000010
hmm5 ~ # ip r add default dev eth0 scope link src 192.168.9.155  metric 1000010

hmm5 ~ # ping -c1 -W2 192.168.9.81
PING 192.168.9.81 (192.168.9.81) 56(84) bytes of data.
64 bytes from 192.168.9.81: icmp_seq=1 ttl=64 time=0.065 ms

hmm5 ~ # ping -c1 -W2 192.168.9.82
PING 192.168.9.82 (192.168.9.82) 56(84) bytes of data.
1 packets transmitted, 0 received, 100% packet loss, time 2058ms

Please help - thanks
Axel

NIC type routed is not designed for what you are trying to achieve. Its only purpose is to create Layer 3 routing between the host and the instance.
If you want your instance to reach the external LAN network using the routed NIC type, then you must also create routes on your host and on your external router.
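
For illustration, assuming the external router at 192.168.9.254 is Linux-based, such a route on the router could look like the line below (the actual syntax depends on the router):

ip route add 192.168.9.155/32 via 192.168.9.81   # send traffic for the instance via the Incus host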

I have manually set up a working network configuration which
combines bridged, routed, macvlan and ipvlan networking. It has
the features of these network types while avoiding their specific
disadvantages, and it creates a Layer 3 route from the instance
to the host and the internet, no matter whether the host is directly
connected or behind an external router, with or without NAT.

The setup needs no firewall rules, additional bridges or
virtual interfaces - it uses the standard bridge incusbr0.
Since it is a routed connection, the host can fully control
the traffic to the instance with simple firewall FORWARD rules.
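
For example, traffic to the instance could then be restricted with ordinary FORWARD rules like these (purely an illustration, not part of the setup below):

iptables -A FORWARD -d 192.168.9.155 -p tcp --dport 22 -j ACCEPT   # e.g. allow SSH to the instance
iptables -A FORWARD -d 192.168.9.155 -j DROP                        # drop everything else forwarded to it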

On the instance the network looks pretty simple, and the setup
needs three commands on the instance and two commands on the
host to configure:

INSTANCE:
# assign the LAN address as a /32 on eth0
ip address add 192.168.9.155/32 broadcast 192.168.9.155 dev eth0
# default route via the bridge address on the host; onlink because the gateway is outside the /32
ip -4 route add default via 192.168.181.1 dev eth0 proto static onlink
# set the DNS resolver
echo "nameserver 192.168.4.200" >/etc/resolv.conf

hmm5 ~ # ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.9.155  netmask 255.255.255.255  broadcast 192.168.9.155
        inet6 fe80::216:3eff:fec8:3249  prefixlen 64  scopeid 0x20<link>
        ether 00:16:3e:c8:32:49  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>

hmm5 ~ # ip route
default via 192.168.181.1 dev eth0 proto static onlink

HOST:
# route the instance's /32 via the Incus bridge (onlink, since the address is outside the bridge subnet)
ip -4 route add 192.168.9.155/32 via 192.168.9.155 dev incusbr0 onlink
# answer ARP requests for the instance's address on the LAN interface (proxy ARP)
ip neighbour add proxy 192.168.9.155 dev bond0 nud permanent

bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        inet 192.168.9.81  netmask 255.255.255.0  broadcast 192.168.9.255
        inet6 fe80::921b:eff:fe34:a705  prefixlen 64  scopeid 0x20<link>
        ether 90:1b:0e:34:a7:05  txqueuelen 1000  (Ethernet)

enp5s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.4.241  netmask 255.255.255.0  broadcast 192.168.4.255
        inet6 2003:a:112c:9200:921b:eff:fe30:e968  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::921b:eff:fe30:e968  prefixlen 64  scopeid 0x20<link>
        inet6 fde2:8acd:e9d3:0:921b:eff:fe30:e968  prefixlen 64  scopeid 0x0<global>
        ether 90:1b:0e:30:e9:68  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>

incusbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.181.1  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:16:3e:c0:d0:68  txqueuelen 1000  (Ethernet)

veth17721407: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 0a:95:c4:e5:87:54  txqueuelen 1000  (Ethernet)

mask ~/adm # ip route
default via 192.168.4.200 dev enp5s0 src 192.168.4.241 metric 4
192.168.4.0/24 dev enp5s0 proto dhcp scope link src 192.168.4.241 metric 4
192.168.9.0/24 dev bond0 proto kernel scope link src 192.168.9.81
192.168.181.0/24 dev incusbr0 proto kernel scope link src 192.168.181.1
192.168.9.155 via 192.168.9.155 dev incusbr0 onlink

mask ~/adm # arp -a
? (192.168.4.96) at 90:1b:0e:08:fb:ec [ether] on enp5s0
? (192.168.4.124) at 00:30:48:92:04:70 [ether] on enp5s0
? (192.168.9.84) at 4c:72:b9:e6:57:b4 [ether] on bond0
? (192.168.4.89) at 90:1b:0e:0e:fe:b3 [ether] on enp5s0
? (192.168.9.155) at 00:16:3e:c8:32:49 [ether] on lxdbr0
digitalisierungsbox (192.168.4.200) at 00:09:4f:bf:b8:ba [ether] on enp5s0
? (192.168.9.155) at <from_interface> PERM PUB on bond0

INCUS CONFIG INSTANCE:
...
config:
  volatile.eth0.host_name: veth17721407
  volatile.eth0.hwaddr: 00:16:3e:c8:32:49
profiles:
- default

INCUS PROFILE DEFAULT (STANDARD):
description: Default Incus profile
devices:
  eth0:
    name: eth0
    network: incusbr0
    type: nic
  root:
    path: /
    pool: masksp
    type: disk
name: default
used_by:
- /1.0/instances/hmm5

NETWORK:
mask ~/adm # incus network show incusbr0
config:
  ipv4.address: 192.168.181.1/24
  ipv4.nat: "false"
  ipv6.address: none
description: ""
name: incusbr0
type: bridge
used_by:
- /1.0/instances/hmm5
- /1.0/profiles/default
- /1.0/profiles/routed
managed: true
status: Created
locations:
- none
project: default

My question is about the possibility of setting up this configuration
with a profile and/or config, without the need for manual scripts
on the instance and the host. I have tried several such profiles and
configs without success. Please help.
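
Roughly, what I am aiming for is a single profile along these lines (only an
untested sketch with an arbitrary name: the cloud-init part assumes the image
actually runs cloud-init, and I do not know whether ipv4.routes on a bridged
NIC can replace the host-side onlink route and the proxy-ARP entry):

config:
  user.network-config: |
    version: 2
    ethernets:
        eth0:
            addresses:
            - 192.168.9.155/32
            nameservers:
                addresses:
                - 192.168.4.200
            routes:
            -   to: 0.0.0.0/0
                via: 192.168.181.1
                on-link: true
description: bridged setup with onlink routes (sketch)
devices:
  eth0:
    name: eth0
    network: incusbr0
    # supposed to add a host route towards the NIC - unverified whether this
    # yields the onlink route and the proxy-ARP entry on bond0
    ipv4.routes: 192.168.9.155/32
    type: nic
  root:
    path: /
    pool: masksp
    type: disk
name: bridged-routed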

Another possible option is to bind the routed network interface to bond0 as its parent. Your device config should look like this:

  eth0:
    name: eth0
    nictype: routed
    parent: bond0
    type: nic

That will result in direct access to the LAN, host, etc. Adding ipv4.address allows fixing the IP for the instance.
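
For example, with the IP pinned the device would look like this:

  eth0:
    name: eth0
    nictype: routed
    parent: bond0
    ipv4.address: 192.168.9.155
    type: nic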

But this results in the creation of a routed veth pair for each instance. For simplification I want to use incusbr0 instead, but with the onlink routes.