I have a problem and after reading a lot of articles I can’t resolve it. I hope you can help me.
I want to give a single container a public static IP from my hosting provider (I have multiple public IPs). That way, the container acts like a VM and is reachable over the internet. For this purpose, I created a bridge interface:
iface br0 inet static
up route add -net 188.8.131.52 netmask 255.255.255.192 gw 184.108.40.206 dev br0
iface eth0 inet manual
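For comparison, a classic /etc/network/interfaces bridge usually also names the physical port and the bridge’s own address. A sketch only, with placeholder values (eth0 and the 198.51.100.x addresses are assumptions, not your actual values):

```
# Bridge the physical NIC; the host's public IP moves onto br0
auto br0
iface br0 inet static
    address 198.51.100.2
    netmask 255.255.255.192
    gateway 198.51.100.1
    bridge_ports eth0

# The physical NIC carries no address of its own
iface eth0 inet manual
```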
But I want my default network to remain the default “lxdbr0” and only assign a public IP on demand. The default LXD bridge works fine:
I tried attaching the bridge to the container:
lxc network attach br0 c1 eth0
lxc config device set c1 eth0 ipv4.address 220.127.116.11
I also tried setting the IP directly inside the container. Maybe I should create another network with “lxc network create”, but I don’t know how to configure it to use the physical device and get the public IP.
What am I doing wrong? How can I give the public IP to my container without changing the other containers?
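For the unmanaged-bridge route, “lxc network create” isn’t actually needed; a NIC device can point straight at br0. A sketch, assuming the container is named c1 and the bridge is the br0 above:

```
# Attach a second NIC to the container, backed by the unmanaged bridge br0
lxc config device add c1 eth1 nic nictype=bridged parent=br0
```

If I recall the docs correctly, ipv4.address on a bridged NIC only takes effect with an LXD-managed bridge (it is handed out by LXD’s DHCP server), which may be why setting it against an unmanaged bridge had no effect here; with br0 the static public IP would instead be configured inside the container or via cloud-init.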
In the end, I created a second subnet for that container.
Is there a way to recreate this configuration without entering the container? Changing the cloud-init template doesn’t seem like a good idea, because it affects the rest of the containers…
Hi there, thank you for all your work. I have a very nice and useful setup running with Bionic, snap LXD, and a bunch of web services separated into containers using a single IP, as described by Simos Xenitellis here:
I use prerouting rules defined in /etc/ufw/before.rules to point to the HAProxy container, which is using Let’s Encrypt. This setup gives me maximum isolation via containers, while also providing dense packaging. And I don’t have to do that via Docker containers, but I still have the option to, because I can run Docker containers inside LXD containers. And I do. By the way, for some reason the Collabora Office Docker setup for my Nextcloud installations wasn’t stable, but luckily Collabora now provides package installs, so I switched.
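A sketch of what such a prerouting block in /etc/ufw/before.rules can look like; the interface name eth0, the container address 10.10.10.2, and the port are placeholders, not the actual setup:

```
# NAT rules go in their own table, above the existing *filter section
*nat
:PREROUTING ACCEPT [0:0]
# Forward inbound HTTPS on the public interface to the HAProxy container
-A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.10.10.2:443
COMMIT
```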
Nonetheless, I had to add two additional public IP addresses (I only needed one for roughly ten separate sites over the last two years!) for other services using the same ports.
Which method to use in this setup?
After checking around, I found out that there are currently several different methods:
Turning the network interface of the host into a bridge. This didn’t work at all, although it should. There was another thread on here that discussed this; I will add the link when I find it (somewhat like this: https://thomas-leister.de/en/lxd-use-public-interface/). They weren’t lucky either. What happened to me was that networking stopped completely. I couldn’t make any connection.
I have the same problem sdurnov is describing. No network on host.
MACVlan setup. I didn’t try this, because it says that containers running on the host ip can’t talk to containers on the MACVlan IPs.
LXD proxy devices. I wanted to try this, since the LXD project seems to prefer this method, but I didn’t find good documentation. I expect some quality content from Simos Xenitellis in the coming months (his work is awesome, btw). Mainly I wanted to forward port ranges instead of single ports, and while I found confirmation on GitHub that this has been implemented, I did not find documentation on how to set it up.
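For the record, proxy devices do accept port ranges with a low-high syntax; a sketch, with the container name c1 and the 8000-8010 range as placeholders:

```
# Forward TCP ports 8000-8010 on all host addresses into the container
lxc config device add c1 range-proxy proxy \
    listen=tcp:0.0.0.0:8000-8010 \
    connect=tcp:127.0.0.1:8000-8010
```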
Simply adding the public IPs to the host’s network interface and using prerouting iptables rules, like I had before for my running HAProxy setup. I started out with this, but noticed that my containers on the host’s first IP address couldn’t reach the containers on the additional public IPs via the prerouting rules. (The first address is a /22, while the two additional IPs I got from my provider are /32; these are the instructions: https://www.netcup-wiki.de/wiki/Zusätzliche_IP_Adresse_konfigurieren I strongly suspect it’s similar to how OVH does it.) I suppose there would be a way to make this work with some additional routing rules.
The method described above. Instead of using the method described by my provider and adding the IPs to the network interface, I can add the IPs to lxdbr0 and then add them inside the host. The only problem with this method for me is that “post-up” doesn’t work in Bionic, since it uses netplan. So I am switching to a static public IP and won’t use a private IPv4 at all, as you describe in this post. This is why I am replying to this particular post: it works fine with netplan in Bionic.
Those are a lot of methods of routing IPs to containers. Do you have a preferred method? A best practice? Are all of those going to be supported in Focal? Will my setup with the line
still work like that in Focal?
Anyhow, thank you so much for all the useful stuff on here and all the great and useful software. :-)
In Bionic, you only need to edit /etc/netplan/50-cloud-init.yaml. My working config looks like this, using your IPs. It looks like you can just add an IP alongside the dhcp4 setting and it will be added to the device.
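A minimal netplan sketch along those lines, with documentation-range placeholder addresses and assuming the NIC is named eth0:

```
# /etc/netplan/50-cloud-init.yaml (apply with: netplan apply)
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true            # primary address still comes via DHCP
      addresses:
        - 203.0.113.10/32    # additional public IPs
        - 203.0.113.11/32
```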
I couldn’t find a working netplan config for a static setup without the private IP, like the interfaces one. I took the advice from @bodleytunes below and went with routed networking. I will be making a post over there, because my setup is a little different.
Isn’t the routed networking mode applicable for this in recent versions?
I’ll have to admit I’ve not tried it, but I skim-read it some time ago and it looked suitable for routing a public IP to a container. If you want, that could be a container running a service, or it could be HAProxy forwarding inbound traffic via HTTP(S) or TCP (an L4 proxy).
I’m not sure what restrictions Digital Ocean place on their network, but some ISPs will prevent multiple MAC addresses appearing on the VPS network port. In these cases, the bridge and macvlan options are not possible because they make the container appear as another Ethernet device on the network.
When you want to have public IPs inside a container but still want to share the host’s MAC address with the wider network, then routed and ipvlan NIC types are suitable.
The ipvlan NIC type, however, does not allow the containers to communicate with the host (or vice versa), so it may not be appropriate for your situation.
The routed NIC type does allow this, and it uses the same approach as LXD’s ipvlan implementation: proxy ARP makes the container’s IP appear on the host’s external network while sharing the host’s MAC address. It also configures the static routes on the host needed to route traffic arriving at the host into the container.
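A sketch of adding a routed NIC, with an ipvlan one for contrast; the container name c1, parent interface eth0, and the 192.0.2.x addresses are placeholders:

```
# routed: proxy ARP + static route on the host; container can talk to the host
lxc config device add c1 eth0 nic nictype=routed parent=eth0 ipv4.address=192.0.2.10

# ipvlan: same shared-MAC idea, but no host<->container traffic
lxc config device add c1 eth0 nic nictype=ipvlan parent=eth0 ipv4.address=192.0.2.11
```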
When firewalling on the host with routed NICs, container traffic is processed via the FORWARD chain rather than the INPUT or OUTPUT chains, whereas with ipvlan NICs traffic hits the INPUT chain instead.
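So with a routed NIC, host firewall rules for the container belong in the FORWARD chain; a sketch, with a placeholder container address of 192.0.2.10:

```
# Allow forwarded traffic to and from the container's routed IP
iptables -A FORWARD -d 192.0.2.10 -j ACCEPT
iptables -A FORWARD -s 192.0.2.10 -j ACCEPT
```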