Public IP and ipv4.routes

Hi, I’m using ipv4.routes for my (public) subnet.
It works well in a way, as all containers are reachable (after setting the public IP inside the container). But lxdbr0 also assigns additional internal IPv4 addresses. How do I make sure a container uses only my public IP address?
Right now it uses the internal IPv4 address for outgoing packets, and these packets are SNATed to the host IP.
I experimented with deleting the “wrong” internal IP. While I was then able to ping the host from the container and vice versa, I was no longer able to leave the host from the container, even after pointing the default route at the interface.

Hi @erkan_yanar, depending on your specific configuration, have you tried turning off NAT mode on the LXD bridge?

ipv4.nat — boolean (condition: ipv4 address; default: false)
Whether to NAT (will default to true if unset and a random ipv4.address is generated)

Hi @tomp,
NAT is implemented using iptables, masquerading the specific (internal) subnet.
The address in ipv4.routes is not matched by these rules.
The container uses the internal IP by default for outgoing traffic.

That’s why I deleted the “internal” IP (as a test).
From then on I was able to ping the public container IP from the host and vice versa, but my packets were not able to leave the host.
IP forwarding is activated and the host routes should work fine as well; at least I can reach everything from the host.


Are you able to set the public IPv4 address as the preferred source (src) in the routes on the container?

@mikma that is something you can specify inside the container (presumably you are also specifying the public IP inside the container), and the way to persist it will vary by distribution.
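For an Ubuntu container using netplan, persisting both the public address and the preferred source route could look something like the sketch below. All of the concrete values are hypothetical: 203.0.113.10 stands in for the routed public IP and 10.247.12.1 for the container's lxdbr0 gateway; substitute your own addresses.

```yaml
# /etc/netplan/60-public-ip.yaml (inside the container) -- hypothetical sketch.
# 203.0.113.10 = your routed public IP, 10.247.12.1 = the lxdbr0 gateway.
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      addresses:
        - 203.0.113.10/32
      routes:
        - to: 0.0.0.0/0
          via: 10.247.12.1
          metric: 1               # lower metric than the DHCP default route
          from: 203.0.113.10      # preferred source (src) for outgoing traffic
```

Run `netplan apply` (or restart the container) for the change to take effect.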

Take a look at

Please can you provide a description of your network setup, how many public IPs do you have and how are they routed from the wider network to your host machine?


I’ve got a root server, i.e. IP:
And another subnet:
The subnet is routed by the “cloud” provider to the host.
Attaching the subnet IPs to the host or to the containers (using ipv4.routes)
works fine.

So this works, but you might not like the amount of config you need to use.

In my setup the “cloud provider” was just my router with a static route for the “public IP” subnet pointing at my LXD host’s IP (also a subtle note here: does your provider route your additional public IPs to your host’s primary IP, or does it require layer 2 ARP resolution for the public IPs on your host?).

Anyway, assuming the former method, then this works:

lxc network set lxdbr0 ipv4.nat false
lxc init ubuntu:18.04 c1
lxc config device add c1 eth0 nic nictype=bridged parent=lxdbr0 ipv4.address= ipv4.routes= (ipv4.address here is just one picked randomly from the lxdbr0 subnet)
lxc start c1
lxc exec c1 -- ip a add dev eth0 (adds the public routed IP to the container)
lxc exec c1 -- ip a add dev eth0 (adds the public IP as an alias inside the container - you would need to persist this using distro-specific config)
lxc exec c1 -- ip r add default via metric 1 src (adds a default route override with a lower metric value, i.e. higher priority, and the source address specified)
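To check which source address the container will now use for outgoing traffic, `ip route get` shows the route the kernel would pick, including its `src` field (8.8.8.8 here is just an arbitrary external destination for the lookup):

```shell
# Ask the kernel which route (and source address) it would choose for an
# external destination; the src field should show the public IP.
lxc exec c1 -- ip route get 8.8.8.8
```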

Note, this required disabling NAT entirely on the network, which means any other containers without this additional config would not be able to reach the Internet without additional manual SNAT rules added.
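For those other containers, the masquerade rule that LXD would normally manage can be added by hand. This is a hypothetical sketch: 10.152.96.0/24 stands in for your lxdbr0 subnet and eth0 for the host's uplink interface; substitute your own values.

```shell
# Manual SNAT for the lxdbr0 subnet after setting ipv4.nat=false.
# 10.152.96.0/24 and eth0 are placeholders for your bridge subnet and uplink.
iptables -t nat -A POSTROUTING -s 10.152.96.0/24 ! -d 10.152.96.0/24 -o eth0 -j MASQUERADE
```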

If you don’t need the containers to contact the host, then you could look at using ipvlan, which could be a lot easier to configure:

lxc init ubuntu:18.04 c1
lxc config device add c1 eth0 nic nictype=ipvlan parent=enp3s0 ipv4.address=
lxc start c1

No need for lxdbr0 or its internal IPs at all then.



If you are on Ubuntu 18.04,
just try configuring /etc/netplan/50-cloud-init.yaml

along these lines:

network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - <public IP CIDR>
      dhcp4: false
      gateway4: <public IP gateway>

then netplan apply.

Make sure the local network interface is attached to the bridge.

It will work; it worked for me.
