Public IP for container on different subnet than gateway

I have the following setup:
Multiple IPs on one eth interface on a Hetzner cloud instance.

I want to use an additional IP on one of the containers so that this IP is natively used in the container.

ip a of host:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
    link/ether 96:00:00:4f:dc:a4 brd ff:ff:ff:ff:ff:ff
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 96:00:00:4f:dc:a4 brd ff:ff:ff:ff:ff:ff
    inet CONTAINER IP/32 scope global br0
       valid_lft forever preferred_lft forever
    inet MAIN IP/32 scope global dynamic br0
       valid_lft 85827sec preferred_lft 85827sec
4: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:38:e6:9e brd ff:ff:ff:ff:ff:ff
    inet 10.249.6.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fd42:f6fa:3d23:f44a::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe38:e69e/64 scope link
       valid_lft forever preferred_lft forever

ip r of host:

default via 172.31.1.1 dev br0 proto dhcp src MAIN PUBLIC IP metric 100
10.249.6.0/24 dev lxdbr0 proto kernel scope link src 10.249.6.1
172.31.1.1 dev br0 proto dhcp scope link src MAIN PUBLIC IP metric 100

I added an eth0 device on the container:
lxc config device add C1 eth0 nic nictype=bridged parent=br0
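To double-check that the device was attached, I think something like this should list the new eth0 NIC with nictype bridged and parent br0:

lxc config device show C1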

In the container I got the following netplan config:
network:
    version: 2
    ethernets:
        eth0:
            dhcp4: false
            addresses:
              - CONTAINER IP/32
            nameservers:
              addresses:
                - 1.1.1.1
            gateway4: 172.31.1.1

Now, as the gateway is not reachable, the container has no access to the outside. How can I give LXD the hint that the gateway is reachable via the MAIN PUBLIC IP? Thanks.

Hetzner has been mentioned a few times in the past on the forum. Because of their policy of filtering MAC addresses and only allowing the external interface’s MAC address, you cannot bridge onto the external interface.

Instead try removing the br0 bridge and using the routed NIC type to pass the additional IP you’ve been allocated into the container, while still using the host’s MAC address externally.

See How to get LXD containers get IP from the LAN with routed network

and https://linuxcontainers.org/lxd/docs/master/instances#nic-routed

Hi Thomas,

thanks for the fast reply.

These are cloud instances and not dedicated machines (the dedicated machines have different behaviour, as you mentioned).

I solved it this way:

network:
    version: 2
    ethernets:
        eth0:
            dhcp4: false
            addresses:
              - CONTAINER IP/32
            nameservers:
              addresses:
                - 1.1.1.1
            routes:
              - on-link: true
                to: 0.0.0.0/0
                via: 172.31.1.1

Now everything is working fine.

Kind regards,
Markus.

Excellent. We’ve seen mixed reports on Hetzner’s MAC filtering; some have reported seeing it on cloud instances too.


I rejoiced too soon. There are problems on reboot (no network or other issues). So you are right, this is an issue.

I already had PREROUTING and POSTROUTING rules to get traffic to the internal address of the container and to make the container use the correct address for outgoing traffic.

The issue arose because I am running a container with Jitsi Meet and wanted to add a SIP gateway. The point here is that the jicofo service needs to connect to the external IP of the Asterisk service (on the same container) so that the external IP is visible for connections from the outside world.

The question is how I can configure iptables so that I can use routed networking with this use case.

Thanks :slight_smile:

The routed NIC traffic will be going through the FORWARD iptables chain (as that is the chain iptables uses for routed traffic). The normal process I use is to change the iptables policy for the FORWARD chain to accept and then add rules at the bottom of the chain to log and then drop traffic, e.g.

iptables -A FORWARD -m limit --limit 1/sec -j LOG --log-prefix "IPTables-Dropped: "
iptables -A FORWARD -j DROP

Then you can see what your firewall is dropping and add the required rules.
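Setting the policy and then watching the log would look roughly like this (a sketch; the log prefix matches the rule above, adjust to taste):

# Accept forwarded traffic by default so only the explicit rules at the bottom drop it
iptables -P FORWARD ACCEPT
# Watch the kernel log for dropped packets while testing from the container
journalctl -k | grep "IPTables-Dropped:"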

Thanks Thomas,
I think I am doing something wrong :slight_smile:

10.249.6.229 is the target container
br0 is the host interface bridged to host eth0
lxdbr0 is the lxd bridge

lxc network show lxdbr0
config:
  ipv4.address: 10.249.6.1/24
  ipv4.nat: "false"
  ipv6.address: fd42:f6fa:3d23:f44a::1/64
  ipv6.nat: "false"
description: ""
name: lxdbr0
type: bridge

Firewall rules:

-A PREROUTING -d PUBLIC_ADDITIONAL_IP -i br0 -p tcp -m tcp --dport 443 -j DNAT --to-destination 10.249.6.229:443 
-A POSTROUTING -s 10.249.6.229/32 ! -d 10.249.6.0/24 -j SNAT --to-source PUBLIC_ADDITIONAL_IP 

Now if I am inside the container I cannot do this:
nc -zv -w5 PUBLIC_ADDITIONAL_IP 443

This results in a connection refused error.
Basically, what I want is to rewrite packets coming from 10.249.6.229 and going to PUBLIC_ADDITIONAL_IP so that they look as if they arrived on interface br0 with destination PUBLIC_ADDITIONAL_IP; then the rules above would also work for internal traffic. That way, services connecting from inside the container to other services in the same container see the public IP as the source IP.
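Something like this hairpin-NAT sketch is what I have in mind (untested, and only port 443 as an example; addresses as above):

# Also DNAT traffic that originates from the LXD bridge itself
iptables -t nat -A PREROUTING -s 10.249.6.0/24 -d PUBLIC_ADDITIONAL_IP -p tcp --dport 443 -j DNAT --to-destination 10.249.6.229:443
# Masquerade the hairpinned connections so replies come back via the host
iptables -t nat -A POSTROUTING -s 10.249.6.0/24 -d 10.249.6.229/32 -p tcp --dport 443 -j MASQUERADE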

Or could I just use:
lxc config device add videoev eth0 nic nictype=routed parent=br0 ipv4.address=PUBLIC_ADDITIONAL_IP

to solve the issue?

I’m a little confused. Originally you wanted to get the additional external IP bound into the container “so that this IP is natively used in the container”.

Maybe you have changed tack slightly since then, but the iptables rules you are using now are setting up DNAT to the container’s internal IP. This is fine, but for this to work you will need the additional external IP to be configured on the LXD host, and not inside the container. And this won’t achieve the external IP being bound in the container.

So assuming that is acceptable to you, then setting up the external IP on the LXD host should help to fix it.

But if you want the actual external IP to be usable inside the container, and considering the MAC restrictions of Hetzner, then you need to use routed NIC type and dispense with the bridge networking.

If you do go the routed NIC route, then you may not need to use a br0 bridge at all, and can just move back to using a private lxdbr0 for containers that don’t need to have external IPs, and for the routed NICs you can use the LXD host’s real external interface as the parent property.

Sorry for the confusion :slight_smile: All containers with internal IPs run with the lxdbr0 setup. It’s only this single container that should get the possibility to use the external IP address from the inside.

I hope the image explains it better. I will try the routed setup and report back if it works :slight_smile:

Yeah in that case, no need for br0 interface at all, move the LXD host’s IP back onto the external interface, and then use routed NIC with external NIC as parent.

The containers connected to lxdbr0 will still be able to communicate with the routed container’s external IP too.
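For example, roughly (a sketch only; assuming enp1s0 carries the host’s external MAC/IP again and videoev is the container, and that you remove any existing eth0 device first):

lxc config device add videoev eth0 nic nictype=routed parent=enp1s0 ipv4.address=PUBLIC_ADDITIONAL_IP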

The container is configured this way now:

devices:
  eth0:
    ipv4.address: [PUBLICIP]
    nictype: routed
    parent: br0
    type: nic

I added the following netplan config in the container:

network:
    version: 2
    ethernets:
        eth0:
            dhcp4: false
            addresses:
              - PUBLICIP/32
            nameservers:
              addresses:
                - 1.1.1.1
            routes:
              - on-link: true
                to: 0.0.0.0/0
                via: 172.31.1.1

172.31.1.1 is the gateway on the host system.

I cannot ping any host outside.

When I do an ip r I get:

# ip r
default via 169.254.0.1 dev eth0
default via 172.31.1.1 dev eth0 proto static onlink

which is weird, as I did not set the 169.254.0.1 route myself.

Even if I delete the 169.254.0.1 route

ip route del default via 169.254.0.1 dev eth0

I still cannot go outside.

What am I missing?

The routed NIC docs do cover which default gateway IPs to use: Linux Containers - LXD - Has been moved to Canonical

But to summarize, you should not be using the host’s upstream gateway as the default gateway inside the container, but 169.254.0.1, as documented.
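So in the container’s netplan the default route would look roughly like this (same layout as your earlier config, just with the link-local gateway):

network:
    version: 2
    ethernets:
        eth0:
            dhcp4: false
            addresses:
              - PUBLICIP/32
            routes:
              - on-link: true
                to: 0.0.0.0/0
                via: 169.254.0.1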

There are also some examples over at:

Also, I would suggest not using the br0 interface if you have no other reason to keep it (it’s just extra complexity, which should be avoided where possible).

Hi Thomas,
many thanks again!

I removed the PUBLIC IP from the netplan config on the host, added gateway4: 169.254.0.1 to the netplan config in the container, and now everything works fine. I also removed the br0 bridge. This was just too easy :wink:

Kind regards
Markus
