Pass remote LAN network to LXC in Proxmox (L2)

There are dozens of devices in each customer's LAN. The goal is to have the entire network of each customer available at the L2 layer in an LXC container or VM in Proxmox, e.g. so I can use tools like arp-scan (see the example below).
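
For illustration, once the customer LAN is reachable at L2 inside the container, something like this should work (10.0.0.0/24 just stands in for a customer subnet here, and eth1_gretap would need an IPv4 address on it):

# enumerate every device on the customer LAN via ARP
arp-scan --interface=eth1_gretap 10.0.0.0/24

# or simply scan whatever subnet the interface address belongs to
arp-scan --interface=eth1_gretap --localnet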

I use WireGuard to establish a VPN from a client device at each customer site to the Proxmox server. For L2 tunneling through WireGuard, a GRE tunnel (gretap) is established on top of it.

The gretap tunnels (one per customer, since there are multiple customers) are terminated directly on Proxmox. Each gretap is then connected to a unique LXC container using a vmbrX bridge, roughly as sketched below. My whole idea is drawn here: https://snipboard.io/f5mMHh.jpg
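
In outline, the per-customer plumbing looks like this (gretap1, vmbr2 and CT 101 as in the example that follows; 192.168.250.1 and 192.168.250.2 are the WireGuard endpoint addresses):

# gretap tunnel between the WireGuard endpoints (L2 over GRE over WireGuard)
ip link add gretap1 type gretap local 192.168.250.1 remote 192.168.250.2
ip link set dev gretap1 up

# dedicated bridge per customer; the tunnel is one of its ports
ip link add vmbr2 type bridge
ip link set dev gretap1 master vmbr2
ip link set dev vmbr2 up

# attach the container's second NIC to the same bridge (Proxmox creates veth101i1)
pct set 101 --net1 name=eth1_gretap,bridge=vmbr2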

But this solution doesn’t work for me. E.g. gretap1 is stretched to container 101 on interface eth1_gretap (via the vmbr2 bridge).

root@pve-routers:/etc/network# brctl show
bridge name     bridge id               STP enabled     interfaces
vmbr0           8000.0cc47accec9d       no              eno2
                                                        veth101i0
vmbr1           8000.c6cd6eb3ee0b       no
vmbr2           8000.3a1f353fc751       no              gretap1
                                                        veth101i1
...

If I run tcpdump on the vmbr2 or gretap1 interface on Proxmox, I see L2 traffic from the customer network (ARP requests, IPv6 and others). If I run tcpdump on the interface in the LXC container, I don’t see any traffic (example commands below).
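
For reference, the kind of commands I mean (-e prints the Ethernet header, -n skips name lookups):

# on the Proxmox host: customer ARP traffic is visible here
tcpdump -eni vmbr2
tcpdump -eni gretap1

# inside CT 101: nothing shows up here
pct exec 101 -- tcpdump -eni eth1_gretap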

The bridge as such is functional: if I put an IP on vmbr2 and on the interface in the LXC container, they can see each other (ping is successful), for example as shown below.
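
For example (the /30 test addresses are arbitrary, used only for this check):

# on the Proxmox host
ip addr add 10.99.0.1/30 dev vmbr2

# inside the container
pct exec 101 -- ip addr add 10.99.0.2/30 dev eth1_gretap
pct exec 101 -- ping -c 3 10.99.0.1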

I don’t know what could be causing this malfunction.
Or is my design flawed and the desired goal not achievable?

Thank you for your help.

I’m not really following your setup; perhaps a diagram would help me understand more clearly?

Also, what is the problem? No traffic in the containers?

Sorry, is my setup and goal not obvious from the attached drawing? Yes, the problem is no traffic on the interface in the container. https://snipboard.io/qsX9KE.jpg

Sorry, I missed that image as it was in a link.

Why do you have a GRETAP interface inside the container, rather than connecting the containers to the bridge via a veth pair?

Please can you post the output of ip a and ip r on the host and inside the container(s), as well as the container(s)' config files?

root@pve-routers:/etc/network# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0c:c4:7a:cc:ec:9c brd ff:ff:ff:ff:ff:ff
    altname enp7s0
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether 0c:c4:7a:cc:ec:9d brd ff:ff:ff:ff:ff:ff
    altname enp8s0
4: eno7: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0c:c4:7a:cc:ed:62 brd ff:ff:ff:ff:ff:ff
    altname enp4s0f0
5: eno8: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0c:c4:7a:cc:ed:63 brd ff:ff:ff:ff:ff:ff
    altname enp4s0f1
12: veth100i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:25:2d:13:f7:dd brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::fc25:2dff:fe13:f7dd/64 scope link
       valid_lft forever preferred_lft forever
15: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1000
    link/gre 0.0.0.0 brd 0.0.0.0
16: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
17: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
18: gretap1@NONE: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr2 state UNKNOWN group default qlen 1000
    link/ether 3a:1f:35:3f:c7:51 brd ff:ff:ff:ff:ff:ff
19: gretap2@NONE: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr3 state UNKNOWN group default qlen 1000
    link/ether ee:8d:e8:25:ea:b0 brd ff:ff:ff:ff:ff:ff
20: gretap3@NONE: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr4 state UNKNOWN group default qlen 1000
    link/ether 76:f4:93:4e:23:7d brd ff:ff:ff:ff:ff:ff
48: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0c:c4:7a:cc:ec:9d brd ff:ff:ff:ff:ff:ff
    inet 100.64.7.34/27 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 2a02:768:0:2210::3:1200/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::ec4:7aff:fecc:ec9d/64 scope link
       valid_lft forever preferred_lft forever
49: vmbr1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether c6:cd:6e:b3:ee:0b brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.1/24 scope global vmbr1
       valid_lft forever preferred_lft forever
50: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none
    inet 192.168.250.1/24 scope global wg0
       valid_lft forever preferred_lft forever
51: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 3a:1f:35:3f:c7:51 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::381f:35ff:fe3f:c751/64 scope link
       valid_lft forever preferred_lft forever
52: vmbr3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ee:8d:e8:25:ea:b0 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ec8d:e8ff:fe25:eab0/64 scope link
       valid_lft forever preferred_lft forever
53: vmbr4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 76:f4:93:4e:23:7d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::74f4:93ff:fe4e:237d/64 scope link
       valid_lft forever preferred_lft forever
54: veth101i0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:6c:55:21:1c:92 brd ff:ff:ff:ff:ff:ff link-netnsid 1
55: veth101i1@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr2 state UP group default qlen 1000
    link/ether fe:52:1c:7d:15:95 brd ff:ff:ff:ff:ff:ff link-netnsid 1

root@pve-routers:/etc/network# ip r
default via 100.64.7.33 dev vmbr0 proto kernel onlink
100.64.7.32/27 dev vmbr0 proto kernel scope link src 100.64.7.34
192.168.1.0/24 dev vmbr1 proto kernel scope link src 192.168.1.1 linkdown
192.168.250.0/24 dev wg0 proto kernel scope link src 192.168.250.1

container:

root@CT101:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1000
    link/gre 0.0.0.0 brd 0.0.0.0
3: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
4: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
5: eth0@if54: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 3e:5e:fd:47:35:31 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 2a02:768:0:2210:3c5e:fdff:fe47:3531/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 2591898sec preferred_lft 604698sec
    inet6 fe80::3c5e:fdff:fe47:3531/64 scope link 
       valid_lft forever preferred_lft forever
6: eth1_gretap@if55: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether f6:d0:3f:38:e1:c0 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::f4d0:3fff:fe38:e1c0/64 scope link 
       valid_lft forever preferred_lft forever

root@CT101:~# ip r
root@CT101:~#

Hi, my problem is solved.
It was a combination of several causes:

  1. My bridge interface was not active (see the check below).
  2. tcpdump used for debugging in the container did not work quite right. There are many problems described on the internet with tcpdump in an unprivileged container (I’m using Ubuntu, so AppArmor). I solved it by making my containers privileged.
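
A quick way to confirm and fix the first cause (device names as in my setup above):

# check whether the bridge is administratively up
ip -br link show vmbr2

# bring it up if it isn't
ip link set dev vmbr2 up

# then the customer traffic should appear inside the container
pct exec 101 -- tcpdump -eni eth1_gretap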

Here is a sample of my script that creates a running LXC container with a gretap interface from a template.

#!/bin/bash
# Creates a running LXC container (cloned from a template) with its own
# gretap tunnel and bridge, then tears everything down again.

LXC_TEMPLATE_ID=101
LXC_ID=105
WG_LOCAL_IP="192.168.250.1"
WG_REMOTE_IP="192.168.250.2"

# dedicated bridge for this customer
brctl addbr vmbr$LXC_ID

# gretap tunnel between the WireGuard endpoint addresses
ip link add gretap$LXC_ID type gretap local $WG_LOCAL_IP remote $WG_REMOTE_IP
ip link set dev gretap$LXC_ID up
brctl addif vmbr$LXC_ID gretap$LXC_ID

ip link set up vmbr$LXC_ID

# clone the template and attach a second NIC to the new bridge
pct clone $LXC_TEMPLATE_ID $LXC_ID --full --hostname XXX --description "GRETAP $WG_LOCAL_IP (here) <--> remote $WG_REMOTE_IP"
pct set $LXC_ID --memory 256 --swap 256
pct set $LXC_ID --net1 name=eth1_gretap,bridge=vmbr$LXC_ID

# I can run a hook script, which can also live outside the container: pct set 100 -hookscript local:snippets/hookscript.pl

pct start $LXC_ID

read -p "Press any key to resume ..."

# stop and delete the container
pct stop $LXC_ID
pct destroy $LXC_ID --purge

# detach and remove the bridge and tunnel
ip link set down vmbr$LXC_ID
brctl delif vmbr$LXC_ID gretap$LXC_ID
brctl delbr vmbr$LXC_ID

ip link delete gretap$LXC_ID

Thank you.


Glad you figured it out! 🙂