LXD using IPVLAN for public IP alias

Hi @tomp

ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP group default qlen 5000
    link/ether MAC_ADD brd ff:ff:ff:ff:ff:ff
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether MAC_ADD brd ff:ff:ff:ff:ff:ff
    inet HOST_IP/26 brd HOST_BRD scope global br0
       valid_lft forever preferred_lft forever
    inet ALIAS_2/29 brd ALIAS_2_BRD scope global secondary br0
       valid_lft forever preferred_lft forever
4: testing: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether e2:02:33:e2:23:19 brd ff:ff:ff:ff:ff:ff
    inet 11.11.11.1/24 scope global testing
       valid_lft forever preferred_lft forever
    inet6 fe80::e002:33ff:fee2:2319/64 scope link 
       valid_lft forever preferred_lft forever
ip r
default via HOST_BRD dev br0 
11.11.11.0/24 dev testing proto kernel scope link src 11.11.11.1 
IP_ALIAS_SUBNET_ENDING_WITH0/29 dev br0 proto kernel scope link src IP_ALIAS_SUBNET_STARTING_WITH_0 
ALIAS_1 dev lo scope link
HOST_BRD/26 dev br0 proto kernel scope link src HOST_IP 


ip neigh show proxy
ALIAS_1 dev br0  proxy

NOTE:
The aliases from IP_ALIAS_SUBNET_ENDING_WITH0/29 are all added as /32 IP aliases.
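
(For reference, the ALIAS_1 route and proxy ARP entry shown above are typically created with commands like these; the exact commands used on this host are an assumption:)

ip route add ALIAS_1 dev lo          # yields the "ALIAS_1 dev lo scope link" route above
ip neigh add proxy ALIAS_1 dev br0   # yields the entry shown by ip neigh show proxy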

_____________________________________________________________________________

ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
16: eth0@if3: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether MAC_AD brd ff:ff:ff:ff:ff:ff
    inet IP_ALIAS/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 IPV6/64 scope link 
       valid_lft forever preferred_lft forever
ip r
default via IP_ALIAS_SUBNET_STARTING_WITH_0 dev eth0 proto static onlink 

EDIT:
I can ping it from outside.
I can ping google.com from within the container => DNS is working.
nginx is not accessible from outside.

ufw status
Status: inactive

Can you show the output of netstat -tlpn in the container running nginx please?

netstat -tlpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 ALIAS_1:80         0.0.0.0:*               LISTEN      247/nginx: master p 
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      176/systemd-resolve 
tcp6       0      0 :::80                   :::*                    LISTEN      247/nginx: master p 

OK, so can you try using tcpdump to check where the packets are being dropped:

First on the host:

tcpdump -l -nn -i br0 host ALIAS_1 and port 80

Then inside the container:

tcpdump -l -nn -i eth0 port 80

For each, try to access the nginx server from an external host.

Nothing shows up on the container side, but the host does see the traffic:

sudo tcpdump -l -nn -i eth0 host ALIAS_1 and port 80
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
15:36:11.784556 IP HOME_IP.39583 > ALIAS_1.80: Flags [S], seq 3790403193, win 29200, options [mss 1452,sackOK,TS val 146033351 ecr 0,nop,wscale 7], length 0
15:36:15.905561 IP HOME_IP.38826 > ALIAS_1.80: Flags [S], seq 1568600176, win 29200, options [mss 1452,sackOK,TS val 146037470 ecr 0,nop,wscale 7], length 0
15:36:16.947811 IP HOME_IP.38552 > ALIAS_1.80: Flags [S], seq 53506293, win 29200, options [mss 1452,sackOK,TS val 146038514 ecr 0,nop,wscale 7], length 0
15:36:17.860019 IP HOME_IP.39499 > ALIAS_1.80: Flags [S], seq 853040858, win 29200, options [mss 1452,sackOK,TS val 146039416 ecr 0,nop,wscale 7], length 0

So here's my test setup:

lxc init ubuntu:18.04 cipvlan
lxc config device add cipvlan eth0 nic nictype=ipvlan ipv4.address=192.168.1.200 parent=enp3s0
lxc start cipvlan

Then inside the container, modify the netplan config to:

network:
    version: 2
    ethernets:
      eth0:
        addresses:
          - 192.168.1.200/32
        nameservers:
          addresses:
          - 8.8.8.8
        routes:
          - to: 0.0.0.0/0
            via: 169.254.0.1
            on-link: true
netplan apply

Check ping:

lxc exec cipvlan -- ping 8.8.8.8 -c 5
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=58 time=24.0 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=58 time=23.9 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=58 time=23.8 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=58 time=23.9 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=58 time=23.8 ms

--- 8.8.8.8 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4005ms
rtt min/avg/max/mdev = 23.825/23.952/24.049/0.079 ms

Install nginx:

lxc exec cipvlan -- apt install nginx
lxc exec cipvlan -- systemctl start nginx
lxc exec cipvlan -- netstat -tlpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      728/nginx: master p 
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      139/systemd-resolve 
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      199/sshd            
tcp6       0      0 :::80                   :::*                    LISTEN      728/nginx: master p 
tcp6       0      0 :::22                   :::*                    LISTEN      199/sshd 

Now, importantly, ipvlan does not allow the container and the host to communicate with each other, so checking that nginx is accessible from the host is not going to work.
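
For example, a connection attempt from the host itself would just hang (a quick check under the setup above; the timeout flag is only there to make that obvious):

# ipvlan blocks host<->container traffic, so this fails rather than connecting
curl -I --connect-timeout 5 http://192.168.1.200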

Instead I go to a different PC on the host's physical network and run:

curl -I http://192.168.1.200
HTTP/1.1 200 OK
Server: nginx/1.14.0 (Ubuntu)
Date: Thu, 19 Mar 2020 09:53:29 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Thu, 19 Mar 2020 09:49:27 GMT
Connection: keep-alive
ETag: "5e734027-264"
Accept-Ranges: bytes

Good, it's working.

Let's see how that looked with tcpdump inside the container:

lxc exec cipvlan -- tcpdump -l -nn -i eth0 port 80
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
09:54:33.770992 IP 192.168.1.200.80 > 192.168.1.2.54828: Flags [S.], seq 1839355293, ack 352489863, win 65160, options [mss 1460,sackOK,TS val 3790088148 ecr 3985982135,nop,wscale 7], length 0
09:54:33.771390 IP 192.168.1.200.80 > 192.168.1.2.54828: Flags [.], ack 79, win 509, options [nop,nop,TS val 3790088148 ecr 3985982136], length 0
09:54:33.771706 IP 192.168.1.200.80 > 192.168.1.2.54828: Flags [P.], seq 1:248, ack 79, win 509, options [nop,nop,TS val 3790088148 ecr 3985982136], length 247: HTTP: HTTP/1.1 200 OK
09:54:33.772404 IP 192.168.1.200.80 > 192.168.1.2.54828: Flags [F.], seq 248, ack 80, win 509, options [nop,nop,TS val 3790088149 ecr 3985982136], length 0

So we can see that tcpdump is working inside the container and can see the request arriving.

So I would suggest double-checking that your host's firewall doesn't have any DNAT rules or anything else that could be blocking the request. Also double-check that you don't have any proxy devices added to your instances that could be interfering with inbound requests on port 80.
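
For example (a sketch; the instance name is a placeholder):

# look for DNAT or other NAT rules on the host
iptables -t nat -L -n -v
# list the devices attached to the instance to spot any proxy devices
lxc config device show mycontainer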

The other thing I was thinking is that perhaps the issue is due to the relationship between ipvlan and your existing br0 bridge interface.

Can you explain why you have the br0 interface, rather than having the host's public IP on the host's eth0 interface?

You've correctly specified the LXD ipvlan device's parent as br0, but it may be worth trying eth0 as the parent instead, in case the IPVLAN network hooks in the kernel are interacting badly with the bridge hooks.
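
A minimal sketch of switching the parent (instance name, device name, and address are placeholders; the device is re-added while the container is stopped):

lxc stop mycontainer
lxc config device remove mycontainer eth0
lxc config device add mycontainer eth0 nic nictype=ipvlan ipv4.address=ALIAS_1 parent=eth0
lxc start mycontainer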

Hi @tomp, thank you for your help so far. I have tested on 2 separate servers with identical configurations: one with br0 and one with eth0 as the default interface.
Both attempts fail. I have also tried at home, with the same result. In all cases the container itself has internet connectivity, but it cannot be reached from outside.
You are trying to show me that it works, and I appreciate your help, but it is not working for me.
Thank you.

Yes indeed, so there must be something different about your environments; the challenge is figuring out what that is :slight_smile:

What OS and kernel version are you running on the host and inside the containers?

If you like, I could log into one of your test systems and try it myself?

openSUSE with kernel 4.12.14-lp151.28.40-default on the host.
Ubuntu for the containers.
Yes, I am trying hard to figure it out.

I'd be interested to see if the routed NIC type works for you:

lxc config device add cipvlan eth0 nic nictype=routed ipv4.address=192.168.1.200 parent=enp3s0

The same netplan config used with ipvlan will be fine.

Confirmed: the issue was firewalld blocking inbound HTTP requests.

Worth noting that because ipvlan hooks in before the host's routing table, when adding firewall rules one must use the INPUT chain rather than the FORWARD chain (unlike the routed NIC type, which needs the latter).
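
For example, with firewalld on the host (a sketch, assuming the default zone):

# open inbound HTTP in the zone's input path, which ipvlan traffic traverses
firewall-cmd --permanent --add-service=http
firewall-cmd --reload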

I apologize for digging up this topic, but I just can't get the behavior I want.

A container that received an IP via ipvlan is reachable within my local network only when UFW is disabled, but I want this container to be accessible only on ports 22 and 80 (and possibly some others in the future). That said, I want UFW to remain enabled.

I just can't figure out how to configure special UFW rules specifically for this container (or whether it is even possible).

Is UFW running on the LXD host or the container?

UFW is running on the host.

It is important for me that UFW is running at least on the host.

So unlike the routed or bridged NIC types, the ipvlan NIC type will, I believe, get filtered in the INPUT and OUTPUT chains of your firewall (rather than FORWARD like the other NIC types mentioned).

So you need to add the relevant rules to those chains, treating the container's addresses as if they were local IPs.
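
For example, with UFW on the host (a sketch, assuming the container's ipvlan address is 192.168.1.200):

# ipvlan traffic hits the INPUT chain, so ordinary allow rules apply
ufw allow proto tcp to 192.168.1.200 port 22
ufw allow proto tcp to 192.168.1.200 port 80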

The alternative is to use the routed NIC type, which behaves similarly to ipvlan except that it allows communication with the host and will use the FORWARD chain of your firewall.
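
With routed, the equivalent rules would instead go through UFW's route (FORWARD) rules, e.g. (same placeholder address):

ufw route allow proto tcp to 192.168.1.200 port 22
ufw route allow proto tcp to 192.168.1.200 port 80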

See How to get LXD containers get IP from the LAN with routed network