I have assigned an exclusive IPv4 address to a container.
Host: 185.216.xxx.0/27 public IPv4 block on eno4
ip r
default via 185.216.xxx.1 dev eno4 proto static
185.216.xxx.0/27 dev eno4 proto kernel scope link src 185.216.xxx.2
10.0.4.0/23 dev lxdbr0 proto kernel scope link src 10.0.4.1
All containers on lxdbr0 egress with the first IP assigned from that block: 185.216.xxx.2.
Now, assigning a network forward to a container produces these chains:
chain fwdprert.lxdbr0 {
    type nat hook prerouting priority dstnat; policy accept;
    ip daddr 185.216.xxx.5 dnat ip to 10.0.5.43
}
chain fwdout.lxdbr0 {
    type nat hook output priority -100; policy accept;
    ip daddr 185.216.xxx.5 dnat ip to 10.0.5.43
}
chain fwdpstrt.lxdbr0 {
    type nat hook postrouting priority srcnat; policy accept;
    ip saddr 10.0.5.43 ip daddr 10.0.5.43 masquerade
}
All inbound traffic to 185.216.xxx.5 is forwarded to 10.0.5.43 as expected.
But the container's outbound traffic still leaves with lxdbr0's default source IP, 185.216.xxx.2.
container ip r:
default via 10.0.4.1 dev eth0 proto dhcp src 10.0.5.43 metric 100
10.0.4.0/23 dev eth0 proto kernel scope link src 10.0.5.43
10.0.4.1 dev eth0 proto dhcp scope link src 10.0.5.43 metric 100
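For reference, the egress source address can be checked from inside the container, e.g. (ifconfig.me is just an arbitrary echo endpoint, and <container> stands for the instance name):

lxc exec <container> -- curl -4 https://ifconfig.me
# prints 185.216.xxx.2 here, not the forwarded 185.216.xxx.5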
This way I can't have a container that identifies with its exclusive IP both inbound and outbound, which is essential for some services (rDNS) …
Is there a workaround for this?
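One possible workaround, as a minimal sketch: keep the forward's DNAT chains and add a host-side SNAT rule in a chain that runs just before the srcnat-priority chains, so the container's outbound traffic should leave with its public IP (the table name hc1_egress is hypothetical):

table inet hc1_egress {
    chain pstrt {
        # runs before priority srcnat chains; the first NAT match
        # per hook applies, so this should win over the bridge masquerade
        type nat hook postrouting priority srcnat - 1; policy accept;
        ip saddr 10.0.5.43 snat ip to 185.216.xxx.5
    }
}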
But since source-based (1:1) forwarding has been introduced, I thought it might be the easiest way to link a container to an external public IP.
But logically, it is only a one-way road.
Routed seems like a lot of manual work per container; it also involves OS-specific cloud-init, an individual profile or config for each container, etc. (see the sketch below).
Those manual steps are quite hard to integrate into an automated environment.
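For comparison, the per-container routed setup looks roughly like this (a sketch; the profile name hc1-routed is hypothetical, and ipv4.address is the container's exclusive public IP):

# one profile per client container
lxc profile create hc1-routed
lxc profile device add hc1-routed eth1 nic nictype=routed parent=eno4 ipv4.address=185.216.xxx.5
lxc profile add <container> hc1-routed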
This rule set works in the same environment (container on lxdbr0): it forwards all incoming traffic to the container's eth0, and all outgoing traffic is advertised with the assigned IP rather than the lxdbr0 gateway (the host's default IP).
Right now we create a separate nftables table per client container to assign a dedicated IPv4 (in/outbound). Network forwards could be a replacement for this purpose; they are just missing the outbound routing.
#!/usr/sbin/nft -f
define client = hc1
define client_pub = 185.216.xxx.5
define client_priv = 10.0.5.43
table inet hc1 {
    chain fwdprert.hc1 {
        type nat hook prerouting priority dstnat; policy accept;
        # inbound: public IP -> container
        ip daddr $client_pub dnat to $client_priv
    }
    chain fwdpstrt.hc1 {
        type nat hook postrouting priority srcnat; policy accept;
        # outbound: container -> public IP (instead of masquerade)
        ip saddr $client_priv snat to $client_pub
    }
    chain fwdin.hc1 {
        type nat hook input priority 100; policy accept;
    }
    chain fwdout.hc1 {
        type nat hook output priority -100; policy accept;
    }
}
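Loaded and verified with something like this (the file path is just an example):

nft -f /etc/nftables.d/hc1.nft
nft list table inet hc1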
Can't the same be achieved with network forwards?
The network forwards rule set as of now:
chain fwdprert.lxdbr0 {
    type nat hook prerouting priority dstnat; policy accept;
    ip daddr 185.216.xxx.5 dnat ip to 10.0.5.43
}
chain fwdout.lxdbr0 {
    type nat hook output priority -100; policy accept;
    ip daddr 185.216.xxx.5 dnat ip to 10.0.5.43
}
chain fwdpstrt.lxdbr0 {
    type nat hook postrouting priority srcnat; policy accept;
    ip saddr 10.0.5.43 ip daddr 10.0.5.43 masquerade
}
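For reference, a forward along these lines can be created with something like the following (a sketch; the default target_address catches all ports not covered by per-port rules):

lxc network forward create lxdbr0 185.216.xxx.5 target_address=10.0.5.43
lxc network forward show lxdbr0 185.216.xxx.5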
After applying the eth1 device config, any attempts inside the container such as service networking reload or ip link set eth1 up … have failed.
But restarting the container did it.
This worked without any changes to the network configuration inside the container; no static IP assignments inside the container so far. It can ping(6) in and out.
Is there another way of getting the new eth1 device applied to the container without an lxc restart of the container?
After applying this to the container's eth1, lxc ls does not show the new IPs as assigned, and the container is unaware of them.
An lxc restart of the container will make both lxc ls and the container aware of the new config.
Is there a way to achieve that without a restart?
I have tried service networking restart and ip link set eth1 down/up
inside the container; it hasn't worked.
Sadly LXD can only add IP and default route configuration for routed NICs at instance start time.
This is because it leverages liblxc's functionality to configure the NIC interface inside the container's network namespace after that namespace is created but before the guest OS is started.
When the container is already running, liblxc currently only allows the interface to be moved into the container's network namespace, and any IPs on it are removed in the process.
However, as you have seen, the host-side setup is completed, so if the guest then configures the new interface itself, it does work OK.
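A minimal sketch of that guest-side configuration for a hot-plugged routed NIC, assuming LXD's default link-local gateway 169.254.0.1 on the host side; whether eth1 should also carry the default route depends on your setup:

# inside the container, after the eth1 routed NIC has been added
ip link set eth1 up
ip addr add 185.216.xxx.5/32 dev eth1
ip route add 169.254.0.1 dev eth1
# optional: prefer eth1 for egress (metric 50 beats eth0's dhcp metric 100)
ip route add default via 169.254.0.1 dev eth1 metric 50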