Give a public IP to one container with a custom bridge

Hi People.

I have a problem that I can’t resolve even after reading a lot of articles. I hope you can help me.

I want to give a single container a public static IP from my hosting provider (I have multiple public IPs). That way, the container acts like a VM and is reachable over the internet. For this purpose, I created a bridge interface:
auto br0
iface br0 inet static
    up route add -net <network> netmask <netmask> gw <gateway> dev br0
    bridge_ports eth0

auto eth0
iface eth0 inet manual

But I want my default network to remain the default “lxdbr0” and only assign a public IP on demand. The default LXD bridge works fine:

lxc network show lxdbr0

config:
  ipv4.nat: "true"
  ipv6.address: none
description: ""
name: lxdbr0
type: bridge
used_by: []
managed: true

# lxc profile show default
config: {}
description: Default LXD profile
devices:
  eth0:
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: no-sla1
    type: disk
name: default
used_by: []

I tried attaching the bridge to the container:
lxc network attach br0 c1 eth0
lxc config device set c1 eth0 ipv4.address <public-ip>
I also tried setting the IP directly inside the container. Maybe I should create another network with “lxc network create”, but I don’t know how to configure it to use the physical device and get the public IP.

What am I doing wrong? How can I give the public IP to my container without changing the other containers?

ipv4.address won’t work, as that property only affects the assignment of IPv4 addresses by LXD’s dnsmasq. Since your public IP is outside the subnet on that bridge, it will be ignored.

The way you’d usually do what you need here is:

  • lxc network set lxdbr0 ipv4.routes <public-ip>/32
  • In your container, run “ip -4 addr add <public-ip>/32 dev eth0”

As a result your container will have both a private IP in your subnet AND the public IPv4 address manually applied to it.
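Concretely, with <public-ip> standing in for the scrubbed address and c1 as a hypothetical container name, those two steps might look like this:

```
# Tell LXD to route the public /32 towards the container's bridge
# (<public-ip> and c1 are placeholders, not from the original post)
lxc network set lxdbr0 ipv4.routes <public-ip>/32

# Inside the container, add the public address to eth0
lxc exec c1 -- ip -4 addr add <public-ip>/32 dev eth0
```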

You can make this persistent by adding this to the container’s /etc/network/interfaces.d/50-cloud-init.cfg in the eth0 section:

    post-up ip -4 addr add <public-ip>/32 dev eth0
    pre-down ip -4 addr del <public-ip>/32 dev eth0

And then restart your container.

If it’s critical that the container not have a private IPv4 address, that should also be doable by replacing the whole eth0 section in the file and instead using:

auto eth0
iface eth0 inet static
    address <public-ip>/32

    pre-up ip -4 link set dev eth0 up
    pre-up ip -4 route add <gateway-ip> dev eth0

Note that all the above is from memory, I’ve not actually tested any of this, so there may be some typos in there :slight_smile:


That works! Many thanks, Stephane.

In the end I created a second subnet for that container.
Is there a way to recreate this configuration without entering the container? Changing the cloud-init template doesn’t seem like a good idea, because it affects the rest of the containers…

thanks again.

“lxc file push” should let you push your configuration directly into place, even before the container is started.

Just use “lxc init” instead of “lxc launch” to create the container but not have it started immediately. Then push the files you want, change any configuration you need and finally start it.
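A sketch of that workflow (the image alias, file name and container name are illustrative, not from the thread):

```
# Create the container without starting it
lxc init ubuntu:18.04 c1

# Push the prepared network config directly into place
lxc file push 50-cloud-init.cfg c1/etc/network/interfaces.d/50-cloud-init.cfg

# Apply any other configuration changes, then start it
lxc start c1
```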

Hi there, thank you for all your work. I have a very nice and useful setup running with Bionic, snap LXD, and a bunch of web services separated into containers using a single IP, as described by Simos Xenitellis here:

I use prerouting defined in /etc/ufw/before.rules to point to the HAProxy container that is using Letsencrypt. This setup gives me maximum isolation via containers, while also providing dense packaging. And I don’t have to do that via docker containers, but I still have the option to do so, because I can run docker containers inside of LXD containers. And I do. By the way, for some reason the Collabora Office Docker setup for my Nextcloud installations wasn’t stable, but luckily, Collabora is now providing package installs, so I switched.

Nonetheless, I had to add two additional public IP addresses (I only needed one for around ten separate sites over the last two years!) for other services using the same ports.

Which method to use in this setup?

After checking around, I found out that there are currently several different methods:

  1. Turning the network interface of the host into a bridge. This didn’t work at all, although it should. There was another thread on here that discussed this; I will add the link when I find it. They weren’t lucky either: what happened to me was that networking stopped completely and I couldn’t make any connection.

Here: Lxd + Netplan + Static IP's in same subnet HOW-TO

I have the same problem sdurnov is describing. No network on the host.

  2. MACVLAN setup. I didn’t try this, because it says that containers running on the host IP can’t talk to containers on the macvlan IPs.

  3. LXD proxy devices. I wanted to try this, since it seems like the LXD project prefers this method, but I didn’t find good documentation. I expect some quality content from Simos Xenitellis in the coming months (his work is awesome, btw). Mainly I wanted to forward port ranges instead of single ports, and while I found confirmation on GitHub that this has been implemented, I did not find documentation on how to set it up.

  4. Simply adding the public IPs on the host’s network interface and using prerouting iptables rules, like I had before for my running HAProxy setup. I started out with this, but noticed that my containers on the host’s first IP address (it’s a /22 address, while the two additional IPs I got from my provider are /32; these are the instructions:ätzliche_IP_Adresse_konfigurieren — I strongly suspect it’s similar to how OVH does it) couldn’t reach the containers on the additional public IPs using the prerouting rules. I suppose there would be a way to make this work with some additional routing rules.

  5. The method described above. Instead of using the method described by my provider and adding the IPs to the network interface, I can add the IPs to lxdbr0 and then add them inside the container. The only problem with this method for me is that “post-up” doesn’t work in Bionic, since it is on netplan. Thus, I changed to a static public IP and won’t use a private IPv4 at all, as you describe in this post. This is the reason I am putting an answer on this particular post. Because it works fine with netplan in Bionic.
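For what it’s worth, a minimal sketch of method 3 (a proxy device forwarding a single port; the container name and addresses are placeholders, and port-range syntax isn’t shown here):

```
# Forward port 443 arriving on one of the host's public IPs into the
# HAProxy container (c-haproxy and <public-ip> are hypothetical)
lxc config device add c-haproxy https proxy \
    listen=tcp:<public-ip>:443 connect=tcp:
```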

Those are a lot of methods for routing IPs to containers. Do you have a preferred method? A best practice? Are all of these going to be supported in Focal? Will my setup with the line

still work like that in Focal?

Anyhow, thank you so much for all the useful stuff on here and all the great and useful software. :+1: :smiley:


In Bionic, you only need to edit /etc/netplan/50-cloud-init.yaml. My working config looks like this, using your IPs. It looks like you can just add an IP to the addresses list (alongside the dhcp4 setting) and it will be added to the device.

    network:
        version: 2
        ethernets:
            eth0:
                addresses: []
                dhcp4: true

I couldn’t find a working netplan config for a static setup without the private IP, like the interfaces file above. I took the advice from @bodleytunes below and went with routed networking. I will be making a post over there, because my setup is a little different.

Isn’t the routed networking mode applicable for this in recent versions?

I’ll have to admit that I’ve not tried it, but I skim-read it some time ago and it looked suitable for routing a public IP to a container. If you want, that could be a container running a service, or it could be HAProxy forwarding inbound traffic via HTTP(S) or TCP (an L4 proxy).

I tried, but the routed networking mode doesn’t work for me. I don’t know why. So I am back to this method.

I’m not sure what restrictions Digital Ocean places on their network, but some ISPs will prevent multiple MAC addresses from appearing on the VPS network port. In these cases, the bridge and macvlan options are not possible, because they make the container appear as another Ethernet device on the network.

When you want to have public IPs inside a container but still want to share the host’s MAC address with the wider network, then routed and ipvlan NIC types are suitable.

The ipvlan NIC type however does not allow the containers to communicate with the host (or vice versa) so may not be appropriate for your situation.

The routed NIC type does allow this, and it takes the same approach as LXD’s ipvlan implementation: it uses proxy ARP to make the container’s IP appear on the host’s external network whilst sharing the host’s MAC address. It also configures the static routes on the host needed to route traffic arriving at the host into the container.
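As a sketch, attaching a routed NIC to an existing container might look like this (the container name and address are placeholders, and this assumes an LXD version with routed NIC support, i.e. 3.19 or later):

```
# Give c1 a routed NIC tied to the host's external interface eth0;
# proxy ARP and the host-side static route are set up by LXD
lxc config device add c1 eth0 nic nictype=routed \
    parent=eth0 ipv4.address=<public-ip>
```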

When firewalling on the host with routed NICs, container traffic will be processed via the FORWARD chain rather than the INPUT or OUTPUT chains, whereas with ipvlan NICs traffic will hit the INPUT chain instead.

See the LXD 3.19 announcement and the “Routed networking mode configuration example needed” thread for a working routed NIC config, including the netplan config.