I’ve got a problem bringing the second interface of a container online.
The host has two NICs:
2: enp32s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether d4:85:64:38:df:ef brd ff:ff:ff:ff:ff:ff
inet xx.xx.48.66/24 brd xx.xx.48.255 scope global enp32s0
3: enp34s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether d4:85:64:38:df:f0 brd ff:ff:ff:ff:ff:ff
inet 192.168.89.66/24 brd 192.168.89.255 scope global enp34s0
# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 xx.xx.48.1 0.0.0.0 UG 0 0 0 enp32s0
xx.xx.48.0 0.0.0.0 255.255.255.0 U 0 0 0 enp32s0
xx.xx.48.67 0.0.0.0 255.255.255.255 UH 0 0 0 vethf14ae14b
192.168.0.0 192.168.89.44 255.255.255.0 UG 0 0 0 enp34s0
192.168.89.0 0.0.0.0 255.255.255.0 U 0 0 0 enp34s0
192.168.89.67 0.0.0.0 255.255.255.255 UH 0 0 0 vethdd65dfc7
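One quick sanity check against the table above (using the standard iproute2 tools) is to ask the kernel which device it would actually pick for each container address:

```shell
# Ask the kernel which route it would use for each container address.
# The addresses are the container IPs from the routing table above.
ip route get xx.xx.48.67      # should show: dev vethf14ae14b
ip route get 192.168.89.67    # should show: dev vethdd65dfc7
```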
The container has this setup:
volatile.eth1.host_name: vethf14ae14b
volatile.eth1.hwaddr: 00:16:3e:01:3d:5d
volatile.eth1.name: eth1
volatile.eth2.host_name: vethdd65dfc7
volatile.eth2.hwaddr: 00:16:3e:58:ec:8b
volatile.eth2.name: eth2
devices:
eth1:
ipv4.address: xx.xx.48.67
nictype: routed
parent: enp32s0
type: nic
eth2:
ipv4.address: 192.168.89.67
ipv4.gateway: none
ipv4.host_address: 169.254.0.2
ipv6.gateway: none
nictype: routed
parent: enp34s0
type: nic
This results in the following inside the container (looks OK):
34: eth1@if35: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:16:3e:01:3d:5d brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet xx.xx.48.67/32 scope global eth1
36: eth2@if37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:16:3e:58:ec:8b brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.89.67/32 scope global eth2
However, the 192.168 interface is not reachable: only the container can ping the 192.168 address on the host; the host can’t ping the container.
# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 169.254.0.1 0.0.0.0 UG 0 0 0 eth1
169.254.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 eth1
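Note that with `ipv4.gateway: none` on eth2, no route over eth2 is installed inside the container, which matches the table above: replies to the host’s 192.168 address will leave via the eth1 default route. A sketch of the routes that could be added manually inside the container (assuming the `ipv4.host_address` of 169.254.0.2 from the eth2 config; not verified on this setup):

```shell
# Inside the container: give eth2 a path back to the host side.
# 169.254.0.2 is the ipv4.host_address configured for eth2.
ip route add 169.254.0.2 dev eth2
# Route the 192.168.89.0/24 subnet via the host-side veth address.
ip route add 192.168.89.0/24 via 169.254.0.2 dev eth2 onlink
```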
On the host there are two extra NICs after starting the container (looks OK):
35: vethf14ae14b@if34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 62:94:27:99:13:75 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 169.254.0.1/32 scope global vethf14ae14b
37: vethdd65dfc7@if36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 7e:5a:c4:c4:0a:41 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 169.254.0.2/32 scope global vethdd65dfc7
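Since routed NICs rely on the host forwarding and proxy-ARPing for the container addresses, it may be worth confirming the relevant sysctls for the parent of eth2 (standard kernel knobs; enp34s0 is the parent from the config above):

```shell
# IPv4 forwarding must be on for routed NICs to work at all.
sysctl net.ipv4.ip_forward
# Strict reverse-path filtering (value 1) can silently drop replies
# that come back over a different interface than the kernel expects.
sysctl net.ipv4.conf.all.rp_filter
sysctl net.ipv4.conf.enp34s0.rp_filter
# Incus should have enabled proxy ARP on the parent for routed NICs.
sysctl net.ipv4.conf.enp34s0.proxy_arp
```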
On the host, the default profile does not have any network settings:
name: default
description: Default Incus profile
devices:
root:
path: /
pool: default
type: disk
config: {}
project: default
There are, however, these nft rules:
# nft list ruleset
table inet incus {
chain prert.web7.eth1 {
type filter hook prerouting priority raw; policy accept;
iif "vethf14ae14b" fib saddr . iif oif missing drop
}
chain prert.web7.eth2 {
type filter hook prerouting priority raw; policy accept;
iif "vethdd65dfc7" fib saddr . iif oif missing drop
}
}