How to use a gateway that is not in the same network as the container's IP address

Some big hosting companies (OVH, Online.net) use this approach to add more public IPs to their dedicated servers.

My context is unprivileged LXC 3 containers.

lxc.net.0.type = veth
lxc.net.0.hwaddr = 00:16:3E:Ea:32:a6
lxc.net.0.link = lxcbr15
lxc.net.0.ipv4.address = 1.2.3.4/32
lxc.net.0.ipv4.gateway = 172.16.0.15
lxc.net.0.flags = up

With this configuration I’m not reaching 172.16.0.15 (lxcbr15) from the container (1.2.3.4/32); it seems to be because 172.16.0.15 isn’t part of 1.2.3.4/32 … but it’s the right thing to do in my case, and the LXC routes seem to be there.

[vps]$ ip route show
default via 172.16.0.15 dev eth0 
172.16.0.15 dev eth0 scope link

How is your lxcbr15 configured? Is it connected to the external network? And can you confirm whether your hosting provider allows multiple MAC addresses on each physical external port?

[host]$ cat /etc/network/interfaces.d/lxcbr15
auto lxcbr15
iface lxcbr15 inet static
        bridge_ports none
        bridge_fd 0
        bridge_stp off
        bridge_maxwait 0
        address 172.16.0.15/24
        gateway 172.16.0.1

172.16.0.1 is my own path to WAN.

So is the host’s WAN port connected to lxcbr15?

I’m trying to get an idea of your LAN layout, and understand what you’re trying to achieve.

Is your ISP requiring you to do this, or are you doing it out of choice for your network design?

Also can you show output of ip a and ip r on the host please?

I’m trying to give containers a single network device (eth0) with a single IP address (public address from Internet frontend host). It’s for my network design.

I have no problem routing traffic to/from the Internet if the container’s IP address is in the same subnet as the container’s bridge. But I need to assign a public address to the container when my LXC host machine is in a LAN environment.

I see, so the external IP traffic is already being routed to your LXC host?

In that case I would suggest using the router veth NIC mode, which seems well suited to this, as it allows using external IPs inside a container without the need for a bridge.

See lxc.net.[i].type in https://linuxcontainers.org/lxc/manpages//man5/lxc.container.conf.5.html

So, first, remove the bridge entirely.

Then use this in your container config:

lxc.net.0.type = veth
lxc.net.0.veth.mode = router
lxc.net.0.l2proxy = 1 # This enables proxy ARP/proxy NDP advertisement, not needed if IPs routed to host's LAN address already
lxc.net.0.link = eth1 # External interface (not the bridge)
lxc.net.0.flags = up
lxc.net.0.name = eth0
lxc.net.0.script.up = /usr/share/lxc/hooks/lxc-router-up
lxc.net.0.ipv4.gateway = 169.254.0.1 # This is a link-local next-hop address for IPv4
lxc.net.0.ipv6.gateway = fe80::1 # This is a link-local next-hop address for IPv6
lxc.net.0.ipv4.address = n.n.n.n/32 # Single IPv4 address
lxc.net.0.ipv6.address = .../128 # Single IPv6 address

Contents of /usr/share/lxc/hooks/lxc-router-up:

#!/bin/sh
# LXC "up" hook: configure the host-side veth end for routed (point-to-point) use.
if [ -z "${LXC_NET_PEER}" ]
then
        echo "LXC_NET_PEER not set"
        exit 1
fi

# Disable IPv6 autoconfiguration, RA processing, and DAD on the host-side veth.
sysctl net.ipv6.conf."${LXC_NET_PEER}".autoconf=0
sysctl net.ipv6.conf."${LXC_NET_PEER}".accept_dad=0
sysctl net.ipv6.conf."${LXC_NET_PEER}".accept_ra=0
sysctl net.ipv6.conf."${LXC_NET_PEER}".dad_transmits=0
sysctl net.ipv6.conf."${LXC_NET_PEER}".addr_gen_mode=1

# Replace the kernel-assigned link-local address with the fixed next-hop
# addresses the container uses as its default gateways.
ip a flush local dev "${LXC_NET_PEER}" scope link
ip a add fe80::1/64 dev "${LXC_NET_PEER}"
ip a add 169.254.0.1 dev "${LXC_NET_PEER}"

You’ll also need to ensure that the OS inside the container doesn’t remove the IPs configured by LXC, such as when triggering DHCP client request. And you’ll need to ensure DNS resolver IPs are configured manually.
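For example, on a Debian-style guest using ifupdown, a static stanza along these lines keeps the address and the link-local gateway in place (a sketch only: n.n.n.n is a placeholder for the container's public address, and 169.254.0.1 is the host-side next-hop from the config above):

```
# /etc/network/interfaces inside the container (hypothetical)
auto eth0
iface eth0 inet static
        address n.n.n.n/32
        # The gateway is outside the /32, so add an explicit on-link
        # route to it first, then the default route through it.
        post-up ip route add 169.254.0.1 dev eth0
        post-up ip route add default via 169.254.0.1 dev eth0
```

DNS would then be set by hand in the container's /etc/resolv.conf, since there is no DHCP to supply it.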

Thank you for the kind explanation and config details. As I said, my context is LXC 3.0 unprivileged containers, and this version does not seem to support lxc.net.[i].veth.mode.
I need this to work on Debian 10 Stable (LXC 3) and, if possible, on Debian 9 (LXC 2) too.

Hrm, yes I didn’t see that part.

Well, you could use just a normal veth pair without a link property (which would connect it to a bridge), and instead set up the equivalent of what router mode does using the various hooks LXC provides.

Basically what router mode does is:

  1. Adds a static route on the host to the container’s IP address pointing towards the container’s veth interface on the host.
  2. If needed, adds proxy ARP/proxy NDP entries (ip neigh add proxy) to advertise the address on the external L2 segment, but I suspect this isn’t needed in your case, as the addresses are in a different subnet to the LAN and are AFAIK already routed to the host.
  3. Adds link-local addresses to host-side veth interface so the container’s network config can use those addresses as the next-hop address for the default gateway inside the container.
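As a rough sketch, those three steps translate into host-side commands like the following (the veth name vethX, external interface eth1, and address n.n.n.n are all placeholders, not values from your setup):

```shell
# 1. Static route: send the container's /32 down its host-side veth end.
ip route add n.n.n.n/32 dev vethX

# 2. (Only if needed) advertise the address on the external L2 via proxy ARP.
ip neigh add proxy n.n.n.n dev eth1

# 3. Link-local next-hop addresses on the host-side veth, which the
#    container then uses as its default gateway.
ip addr add 169.254.0.1/32 dev vethX
ip addr add fe80::1/64 dev vethX

# Forwarding must be enabled for the routed traffic to pass.
sysctl -w net.ipv4.ip_forward=1
```

These could go in an lxc.net.0.script.up hook, much like the lxc-router-up script above, using $LXC_NET_PEER in place of the hard-coded veth name.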

Or you can use LXD, which has a snap package that works with Debian and has a NIC type called routed which achieves the same thing.
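For illustration, with LXD the routed NIC would look something like this (the container name c1, parent interface eth1, and address n.n.n.n are assumptions for the sketch):

```shell
# Create a container, then give it a routed NIC carrying a single
# external IPv4 address instead of the default bridged one.
lxc launch images:debian/10 c1
lxc config device add c1 eth0 nic nictype=routed parent=eth1 ipv4.address=n.n.n.n
lxc restart c1
```

LXD then sets up the static route, proxy entries, and link-local gateway addresses on the host automatically.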

I’m trying to do the same thing here, and I cannot figure out how to make the container communicate with the outside world. I have the same need: only a single public IPv4 address assigned to the host interface, and I do not want to set up a new host interface (bridge) to connect the container to the outside world.

Host OS: Alpine Linux 3.15 running lxc-4.0.11
Guest OS: Alpine Linux 3.15

First, it seems that the $LXC_NET_PEER env var isn’t set, even though I’ve added lxc.hook.version = 1:

lxc-start test-packer 20220120094058.398 INFO     conf - conf.c:run_script_argv:337 - Executing script "/usr/share/lxc/hooks/lxc-router-up" for container "test-packer", config section "net"
lxc-start test-packer 20220120094058.400 DEBUG    conf - conf.c:run_buffer:310 - Script exec /usr/share/lxc/hooks/lxc-router-up test-packer net up veth bond0.104 vethrN8BT4 produced output: LXC_NET_PEER not set
lxc-start test-packer 20220120094058.400 ERROR    conf - conf.c:run_buffer:321 - Script exited with status 1

This is not a big deal; I’ve manually applied the lxc-router-up script changes on the host (only the IPv4 changes, as I’m not using IPv6 in the container).

The problem is the container’s network: what should the network config be? After the container started, eth0 had no IP address assigned. I’ve manually added the IP specified in the LXC config. Is this supposed to be done manually or automatically? What should the container’s routing table look like? Adding eth0 as the default route allows the container to ping the host’s veth IP address, but nothing beyond that, of course. The container was created using the download template. Posting all LXC and network configs.

Host:

cat /var/lib/lxc/test-packer/config 

# Distribution configuration
lxc.include = /usr/share/lxc/config/common.conf
lxc.arch = linux64

# Container specific configuration
lxc.hook.version = 1 # required for the script
lxc.rootfs.path = dir:/var/lib/lxc/test-packer/rootfs
lxc.uts.name = test-packer

# Network configuration
lxc.net.0.type = veth
lxc.net.0.veth.mode = router
lxc.net.0.link = bond0.104
lxc.net.0.ipv4.address = 192.168.1.10/32
lxc.net.0.ipv4.gateway = 169.254.0.1
lxc.net.0.name = eth0
lxc.net.0.flags = up
lxc.net.0.l2proxy = 1
#lxc.net.0.script.up = /usr/share/lxc/hooks/lxc-router-up

ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether d8:9d:67:6a:bc:a8 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether d8:9d:67:6a:bc:a8 brd ff:ff:ff:ff:ff:ff permaddr d8:9d:67:6a:bc:ac
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d8:9d:67:6a:bc:a8 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::da9d:67ff:fe6a:bca8/64 scope link 
       valid_lft forever preferred_lft forever
8: bond0.104@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d8:9d:67:6a:bc:a8 brd ff:ff:ff:ff:ff:ff
    inet <redacted>/27 scope global bond0.104
       valid_lft forever preferred_lft forever
    inet6 <redacted>/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::da9d:67ff:fe6a:bca8/64 scope link 
       valid_lft forever preferred_lft forever
53: vethNMy6RD@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:a6:95:cd:a1:e7 brd ff:ff:ff:ff:ff:ff link-netnsid 15
    inet 169.254.0.1/32 scope global vethNMy6RD
       valid_lft forever preferred_lft forever

ip r

default via <redacted> dev bond0.104 
<redacted>/27 dev bond0.104 proto kernel scope link src <redacted>
192.168.1.10 dev vethNMy6RD scope link

sysctl net.ipv4.conf.bond0.104.forwarding

net.ipv4.conf.bond0/104.forwarding = 1

sysctl net.ipv4.conf.vethNMy6RD.proxy_arp

net.ipv4.conf.vethNMy6RD.proxy_arp = 1

Container:

test-packer:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0@if53: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 3a:2d:4f:d5:ca:fa brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.10/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::382d:4fff:fed5:cafa/64 scope link 
       valid_lft forever preferred_lft forever
test-packer:/# ip r
test-packer:/#