Network forward routing

I have assigned an exclusive IPv4 to a container.
Host: public IPv4 block 185.216.xxx.0/27 on eno4

ip r

default via 185.216.xxx.1 dev eno4 proto static
185.216.xxx.0/27 dev eno4 proto kernel scope link src 185.216.xxx.2
10.0.4.0/23 dev lxdbr0 proto kernel scope link src 10.0.4.1

All containers on lxdbr0 egress with the first IP assigned from that block, the host's own address: 185.216.xxx.2
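That is the effect of the bridge's default masquerade. For reference, the rule LXD generates for lxdbr0 looks roughly like this (a sketch; exact chain names vary by LXD version):

        chain pstrt.lxdbr0 {
                type nat hook postrouting priority srcnat; policy accept;
                ip saddr 10.0.4.0/23 ip daddr != 10.0.4.0/23 masquerade
        }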

Now, assigning a network forward to a container:

lxc network forward create lxdbr0 185.216.xxx.5 target_address=10.0.5.43
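The resulting forward can be inspected afterwards with:

lxc network forward show lxdbr0 185.216.xxx.5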

nft list table inet lxd

        chain fwdprert.lxdbr0 {
                type nat hook prerouting priority dstnat; policy accept;
                ip daddr 185.216.xxx.5 dnat ip to 10.0.5.43
        }

        chain fwdout.lxdbr0 {
                type nat hook output priority -100; policy accept;
                ip daddr 185.216.xxx.5 dnat ip to 10.0.5.43
        }

        chain fwdpstrt.lxdbr0 {
                type nat hook postrouting priority srcnat; policy accept;
                ip saddr 10.0.5.43 ip daddr 10.0.5.43 masquerade
        }

All inbound traffic to 185.216.xxx.5 is forwarded to 10.0.5.43 as expected.
But the container's outbound traffic still follows the lxdbr0 default route and leaves masqueraded as 185.216.xxx.2.

container ip r:

default via 10.0.4.1 dev eth0 proto dhcp src 10.0.5.43 metric 100
10.0.4.0/23 dev eth0 proto kernel scope link src 10.0.5.43
10.0.4.1 dev eth0 proto dhcp scope link src 10.0.5.43 metric 100
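A quick way to confirm which source address the container egresses with (ifconfig.me is just one example endpoint, and the instance name is hypothetical):

lxc exec c1 -- curl -4 -s https://ifconfig.me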

This way I can't have a container that identifies with its exclusive IP both inbound and outbound, which is essential for some services (rDNS) …
Is there a workaround for this?

Yes, don’t use lxc network forward (which is DNAT); use a routed NIC instead of connecting the instance to the lxdbr0 bridge.

See Type: nic - LXD documentation

And How to get LXD containers get IP from the LAN with routed network
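A minimal sketch of adding such a routed NIC as an instance device (instance name c1 is hypothetical; the addresses are from this thread):

lxc config device add c1 eth1 nic nictype=routed parent=eno4 ipv4.address=185.216.xxx.5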

Thanks.

The conversation continued:

But since source-based forwarding (1:1) has been introduced, I thought it might be the easiest way to link a container to an external public IP.
But logically, it is only a one-way road.

Routed means quite a lot of manual work per container: OS-specific cloud-init, an individual profile or config for each container, and so on.

Quite hard to integrate those manual steps into an automated environment.

Where has source-based forwarding been introduced?

I think this may be what you’re looking for

Although that still has the disadvantage that the container won’t know what its own external IP is.
That is the big advantage of a routed NIC.

You don’t have to use cloud-init; you can use whatever host configuration system you want to set the static IP inside the container.
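For example, with netplan inside the container, a static config for a routed NIC could look roughly like this (a sketch; it assumes LXD's usual 169.254.0.1 link-local gateway for routed NICs, and the file path is illustrative):

# /etc/netplan/50-eth1.yaml
network:
  version: 2
  ethernets:
    eth1:
      addresses:
        - 185.216.xxx.5/32
      routes:
        - to: 0.0.0.0/0
          via: 169.254.0.1
          on-link: true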

This rule set works in the same environment (container on lxdbr0): it forwards all incoming traffic to the container's eth0, and all outgoing traffic is advertised as the assigned IP rather than as the lxdbr0 gateway (the host's default IP).
Right now we create a separate nft rule set for each client container to assign it a dedicated IPv4 (inbound and outbound). Network forwards could be a replacement for this purpose; they are just missing the outbound routing.

#!/usr/sbin/nft -f

define client = hc1
define client_pub = 185.216.xxx.5
define client_priv = 10.0.5.43

table inet hc1 {
        # inbound: anything arriving for the public IP is DNATed to the container
        chain fwdprert.hc1 {
                type nat hook prerouting priority dstnat; policy accept;
                ip daddr $client_pub dnat to $client_priv
        }

        # outbound: everything the container sends leaves as the public IP
        chain fwdpstrt.hc1 {
                type nat hook postrouting priority srcnat; policy accept;
                ip saddr $client_priv snat to $client_pub
        }

        # empty hook chains, kept for parity with LXD's layout
        chain fwdin.hc1 {
                type nat hook input priority 100; policy accept;
        }

        chain fwdout.hc1 {
                type nat hook output priority -100; policy accept;
        }
}
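The file can then be loaded with nft -f (path is illustrative):

nft -f /etc/nftables.d/hc1.nft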

Can't the same be achieved with network forwards?
The rule set generated by network forwards right now:

    chain fwdprert.lxdbr0 {
            type nat hook prerouting priority dstnat; policy accept;
            ip daddr 185.216.xxx.5 dnat ip to 10.0.5.43
    }

    chain fwdout.lxdbr0 {
            type nat hook output priority -100; policy accept;
            ip daddr 185.216.xxx.5 dnat ip to 10.0.5.43
    }

    chain fwdpstrt.lxdbr0 {
            type nat hook postrouting priority srcnat; policy accept;
            ip saddr 10.0.5.43 ip daddr 10.0.5.43 masquerade
    }
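Compared to the manual set above, the only piece missing is the egress SNAT. A hand-written rule along these lines would close the gap (a sketch, not something LXD generates):

    table inet manualsnat {
            chain pstrt {
                    type nat hook postrouting priority srcnat; policy accept;
                    ip saddr 10.0.5.43 oifname "eno4" snat ip to 185.216.xxx.5
            }
    }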

I managed to get both IPv6 and IPv4 from the LAN working as a routed network.

Instance: eth0 on the lxdbr0 managed bridge.

Instance device config:
  eth1:
    ipv4.address: 185.216.xxx.2
    ipv6.address: 2a0b:bbc0:xxxx::101:1ce
    nictype: routed
    parent: eno4
    type: nic

After applying the eth1 device config, any attempt inside the container (service networking reload, ip link set eth1 up, …) failed.
But restarting the container did it.

lxc ls
+------+---------+----------------------+-------------------------------+-----------+-----------+----------+---------+
| NAME | STATE   | IPV4                 | IPV6                          | TYPE      | SNAPSHOTS | PROFILES | PROJECT |
+------+---------+----------------------+-------------------------------+-----------+-----------+----------+---------+
| DB   | RUNNING | 185.216.xxx.2 (eth1) | 2a0b:bbc0:xxx::101:1ce (eth1) | CONTAINER | 0         | pl4      | 101     |
|      |         | 10.0.5.90 (eth0)     |                               |           |           |          |         |
+------+---------+----------------------+-------------------------------+-----------+-----------+----------+---------+

This worked without any changes to the network configuration inside the container; no static IP assignments inside the container so far. It can ping(6) in and out.

Is there another way of getting the new eth1 device applied to the container without an lxc restart?

What errors did you get, and from which commands?

Can you show a reproducer? I’m not following you.

After applying this eth1 config to the container, lxc ls does not show the newly assigned IPs, and the container is unaware of them.
An lxc restart of the container makes both lxc ls and the container aware of the new config.
Is there a way to achieve that without a restart?
I have tried
service networking restart
ip link set eth1 down/up
inside the container; it hasn't worked.

Ah right I see.

Sadly, LXD can only add IP and default route configuration for routed NICs at instance start time.
This is because it leverages liblxc's functionality to configure the NIC interface inside the container's network namespace after it is created, but before the guest OS is started.

When the container is already running, liblxc currently only allows the interface to be moved into the container's network namespace, and any IPs on it are removed in the process.

However, as you have seen, the host-side setup is completed, so if the guest then configures the new interface itself, it does work OK.
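So, as a workaround, the new interface can be configured by hand in the running guest, roughly like this (addresses from the example above; 169.254.0.1 is the link-local gateway LXD uses on the host side of routed NICs, and the metric is illustrative):

lxc exec DB -- ip link set eth1 up
lxc exec DB -- ip -4 addr add 185.216.xxx.2/32 dev eth1
# optional: prefer eth1 for egress (lower metric than the eth0 default route)
lxc exec DB -- ip -4 route add default via 169.254.0.1 dev eth1 onlink metric 50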