How to map a host veth interface to a container interface?

Hello everyone,

I had assumed I could do this by matching on the ether (MAC) address, but as the example below shows, I can't find any matching addresses. Is there an alternative way to accomplish this?

Inside CT:

18: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 06:91:bf:c0:00:54 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.16.2.2/24 brd 172.16.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::491:bfff:fec0:54/64 scope link
       valid_lft forever preferred_lft forever

On host:

8: veth102i0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:71:16:0f:8a:76 brd ff:ff:ff:ff:ff:ff link-netnsid 1
10: veth104i0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:7f:07:29:62:07 brd ff:ff:ff:ff:ff:ff link-netnsid 2
11: tap103i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UNKNOWN group default qlen 1000
    link/ether 22:ae:08:12:30:65 brd ff:ff:ff:ff:ff:ff
13: veth100i0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:05:a0:ce:9e:32 brd ff:ff:ff:ff:ff:ff link-netnsid 0
19: veth105i0@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:ec:36:b2:0c:58 brd ff:ff:ff:ff:ff:ff link-netnsid 3
21: veth113i0@if20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr113i0 state UP group default qlen 1000
    link/ether fe:0a:9c:16:fb:b7 brd ff:ff:ff:ff:ff:ff link-netnsid 4
22: fwbr113i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether aa:2b:66:46:02:fc brd ff:ff:ff:ff:ff:ff
23: fwpr113p0@fwln113i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 5e:3d:ef:1f:a5:2a brd ff:ff:ff:ff:ff:ff
24: fwln113i0@fwpr113p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr113i0 state UP group default qlen 1000
    link/ether aa:2b:66:46:02:fc brd ff:ff:ff:ff:ff:ff
28: veth107i0@if27: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue master vmbr0 state LOWERLAYERDOWN group default qlen 1000
    link/ether fe:03:89:c4:b0:bc brd ff:ff:ff:ff:ff:ff link-netnsid 5
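(Side note: the two ends of a veth pair deliberately have different MAC addresses, which is why matching by ether address fails. The @ifN suffixes do pair up, though: the container's eth0@if19 means its peer is host interface index 19, and inside the container /sys/class/net/eth0/iflink exposes that peer index directly. A minimal sketch of matching that index against the host's `ip -o link` output; the function name and sample data here are my own, not from any tool:)

```shell
# find_host_veth: given the container's iflink value and the host's
# `ip -o link` output, print the name of the matching host interface.
find_host_veth() {
  local iflink="$1" ip_output="$2"
  # Each `ip -o link` line starts with "<ifindex>: <name>[@peer]: ..." --
  # select the line whose index matches, then strip the "@ifN" suffix.
  printf '%s\n' "$ip_output" | awk -F': ' -v idx="$iflink" \
    '$1 == idx { sub(/@.*/, "", $2); print $2 }'
}

# Inside the container, read the peer index without needing `ip`:
#   iflink=$(cat /sys/class/net/eth0/iflink)   # e.g. 19
# Then on the host:
#   find_host_veth "$iflink" "$(ip -o link)"
```

In the listing above, host index 19 is veth105i0, which pairs with the container's eth0@if19.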

Never mind, I figured it out: this is a Proxmox hypervisor, and Proxmox encodes the container ID in the veth interface name.
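(For reference, the Proxmox naming scheme is veth&lt;CTID&gt;i&lt;NETID&gt;, so veth102i0 above belongs to container 102, network device net0. A throwaway sketch of pulling the container ID back out of such a name; the function name is my own:)

```shell
# ctid_from_veth: extract the Proxmox container ID from a veth name
# of the form veth<CTID>i<NETID>, e.g. veth102i0 -> 102.
ctid_from_veth() {
  local name="$1"
  case "$name" in
    veth*i*)
      name="${name#veth}"        # drop the "veth" prefix -> 102i0
      printf '%s\n' "${name%i*}" # drop the trailing "i<NETID>" -> 102
      ;;
    *) return 1 ;;               # not a Proxmox-style veth name
  esac
}

# ctid_from_veth veth113i0   # -> 113
```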

Sorry, but I have this question too and I don't use Proxmox. Is there a simple way to get the host-node veth peer of a container (without logging into the container itself)?

PS: I know that I can assign names to these devices, but that sometimes causes problems during container reboots, so it is not an option.

@lipkowski.be lxc info <container name> works for LXD (shown next to IP address).

@lipkowski.be for LXC, lxc-info <container name> works too.

Hi,

I use LXD. I checked your proposal, but it only shows me the interface name inside the container (eth0), not the host-node part vethXXXX@ifXX.

@lipkowski.be are you running 3.15?

On my system it shows the host name:

lxc info c1
Name: c1
Location: none
Remote: unix://
Architecture: x86_64
Created: 2019/07/17 14:19 UTC
Status: Running
Type: persistent
Profiles: default
Pid: 11211
Ips:
  eth0:	inet	10.237.30.2	vethKOZ4FZ
  eth0:	inet6	fd42:6d1a:3c9c:d398:216:3eff:fe07:d1e9	vethKOZ4FZ
  eth0:	inet6	fe80::216:3eff:fe07:d1e9	vethKOZ4FZ
  lo:	inet	127.0.0.1
  lo:	inet6	::1

@tomp
I use version 3.14; that could be the problem. I'll check it.

@lipkowski.be ah ok, this got fixed in 3.15.

I think this is the problem:

kernel_features:
  netnsid_getifaddrs: "false"

@lipkowski.be we actually moved away from using that and store it in volatile data instead.

Until you move up to 3.15, you can use

lxc config get c1 volatile.eth0.host_name

However that may change in the future.

@tomp thank you this works.